From cd670ef3a3f110a2db0c1453857ab0835cdc6fbf Mon Sep 17 00:00:00 2001 From: "Documenter.jl" Date: Thu, 21 Nov 2024 20:42:53 +0000 Subject: [PATCH] build based on 71f3416 --- dev/.documenter-siteinfo.json | 2 +- dev/about/index.html | 2 +- dev/changelog/index.html | 2 +- dev/contributing/index.html | 2 +- dev/extensions/index.html | 14 +-- dev/helpers/checks/index.html | 6 +- dev/helpers/exports/index.html | 2 +- dev/index.html | 4 +- dev/notation/index.html | 2 +- dev/plans/debug/index.html | 2 +- dev/plans/index.html | 6 +- dev/plans/objective/index.html | 104 +++++++++--------- dev/plans/problem/index.html | 4 +- dev/plans/record/index.html | 12 +- dev/plans/state/index.html | 2 +- dev/plans/stepsize/index.html | 30 ++--- dev/plans/stopping_criteria/index.html | 16 +-- dev/references/index.html | 2 +- dev/search_index.js | 2 +- dev/solvers/ChambollePock/index.html | 10 +- dev/solvers/DouglasRachford/index.html | 12 +- dev/solvers/FrankWolfe/index.html | 4 +- dev/solvers/LevenbergMarquardt/index.html | 4 +- dev/solvers/NelderMead/index.html | 6 +- .../index.html | 10 +- .../alternating_gradient_descent/index.html | 6 +- .../augmented_Lagrangian_method/index.html | 8 +- dev/solvers/cma_es/index.html | 6 +- .../conjugate_gradient_descent/index.html | 24 ++-- dev/solvers/conjugate_residual/index.html | 4 +- dev/solvers/convex_bundle_method/index.html | 8 +- dev/solvers/cyclic_proximal_point/index.html | 4 +- dev/solvers/difference_of_convex/index.html | 16 +-- dev/solvers/exact_penalty_method/index.html | 10 +- dev/solvers/gradient_descent/index.html | 10 +- dev/solvers/index.html | 8 +- dev/solvers/interior_point_Newton/index.html | 22 ++-- dev/solvers/particle_swarm/index.html | 4 +- .../primal_dual_semismooth_Newton/index.html | 2 +- dev/solvers/proximal_bundle_method/index.html | 8 +- dev/solvers/proximal_point/index.html | 4 +- dev/solvers/quasi_Newton/index.html | 26 ++--- .../stochastic_gradient_descent/index.html | 6 +- dev/solvers/subgradient/index.html | 4 
+- .../index.html | 4 +- dev/solvers/trust_regions/index.html | 8 +- .../AutomaticDifferentiation/index.html | 2 +- .../ConstrainedOptimization/index.html | 12 +- dev/tutorials/CountAndCache/index.html | 10 +- dev/tutorials/EmbeddingObjectives/index.html | 2 +- .../figure-commonmark/cell-12-output-1.svg | 48 ++++---- .../figure-commonmark/cell-13-output-1.svg | 60 +++++----- .../figure-commonmark/cell-8-output-1.svg | 48 ++++---- .../figure-commonmark/cell-9-output-1.svg | 60 +++++----- dev/tutorials/GeodesicRegression/index.html | 2 +- dev/tutorials/HowToDebug/index.html | 2 +- dev/tutorials/HowToRecord/index.html | 2 +- dev/tutorials/ImplementASolver/index.html | 2 +- dev/tutorials/ImplementOwnManifold/index.html | 2 +- dev/tutorials/InplaceGradient/index.html | 2 +- dev/tutorials/Optimize/index.html | 2 +- .../figure-commonmark/cell-23-output-1.svg | 52 ++++----- .../StochasticGradientDescent/index.html | 72 ++++++------ 63 files changed, 417 insertions(+), 417 deletions(-) diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json index 7b5b267db7..f6e58c9b65 100644 --- a/dev/.documenter-siteinfo.json +++ b/dev/.documenter-siteinfo.json @@ -1 +1 @@ -{"documenter":{"julia_version":"1.11.1","generation_timestamp":"2024-11-21T20:41:42","documenter_version":"1.8.0"}} \ No newline at end of file +{"documenter":{"julia_version":"1.11.1","generation_timestamp":"2024-11-21T20:42:34","documenter_version":"1.8.0"}} \ No newline at end of file diff --git a/dev/about/index.html b/dev/about/index.html index f6f551f694..20e4226dbd 100644 --- a/dev/about/index.html +++ b/dev/about/index.html @@ -1,2 +1,2 @@ -About · Manopt.jl

About

Manopt.jl inherited its name from Manopt, a Matlab toolbox for optimization on manifolds. This Julia package was started and is currently maintained by Ronny Bergmann.

Contributors

Thanks to the following contributors to Manopt.jl:

as well as various contributors providing small extensions, finding small bugs and mistakes and fixing them by opening PRs. Thanks to all of you.

If you want to contribute a manifold or algorithm or have any questions, visit the GitHub repository to clone/fork the repository or open an issue.

Work using Manopt.jl

  • ExponentialFamilyProjection.jl package uses Manopt.jl to project arbitrary functions onto the closest exponential family distributions. The package also integrates with RxInfer.jl to enable Bayesian inference in a larger set of probabilistic models.
  • Caesar.jl within non-Gaussian factor graph inference algorithms

Is a package missing? Open an issue! It would be great to collect anything and anyone using Manopt.jl

Further packages

Manopt.jl belongs to the Manopt family:

but there are also more packages providing tools on manifolds in other languages

diff --git a/dev/changelog/index.html b/dev/changelog/index.html index 46237de5e7..9df2de9134 100644 --- a/dev/changelog/index.html +++ b/dev/changelog/index.html @@ -1,2 +1,2 @@ -Changelog · Manopt.jl

Changelog

All notable Changes to the Julia package Manopt.jl will be documented in this file. The file was started with Version 0.4.

The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.

[0.5.4] - unreleased

Added

  • An automated detection of whether the tutorials are present; if they are not, no quarto run is done and an automated --exclude-tutorials option is added.

[0.5.3] – October 18, 2024

Added

  • StopWhenChangeLess, StopWhenGradientChangeLess and StopWhenGradientLess can now use the new idea (ManifoldsBase.jl 0.15.18) of different outer norms on manifolds with components, like power and product manifolds, and all others that support this from the Manifolds.jl library, like Euclidean

Changed

  • stabilize max_stepsize to also work when injectivity_radius does not exist. It does, however, warn new users that activate tutorial mode.
  • Start a ManoptTestSuite subpackage to store dummy types and common test helpers in.

[0.5.2] – October 5, 2024

Added

  • three new symbols to more easily state recording of the :Gradient, the :GradientNorm, and the :Stepsize.

Changed

[0.5.1] – September 4, 2024

Changed

  • slightly improves the test for the ExponentialFamilyProjection text on the about page.

Added

  • the proximal_point method.

[0.5.0] – August 29, 2024

This breaking update is mainly concerned with improving a unified experience through all solvers and some usability improvements, such that for example the different gradient update rules are easier to specify.

In general, we introduce a few factories that avoid having to pass the manifold to keyword arguments

Added

  • A ManifoldDefaultsFactory that postpones the creation/allocation of manifold-specific fields in, for example, direction updates, step sizes, and stopping criteria. As a rule of thumb, internal structures, like a solver state, should store the final type. Any high-level interface, like the functions to start solvers, should accept such a factory in the appropriate places and call the internal _produce_type(factory, M), for example before passing something to the state.
  • a documentation_glossary.jl file containing a glossary of often used variables in fields, arguments, and keywords, to print them in a unified manner. The same for usual sections, tex, and math notation that is often used within the doc-strings.

Changed

  • Any Stepsize now has a Stepsize struct used internally, as the original structs were before. The newly exported terms aim to fit stepsize=... in naming and create a ManifoldDefaultsFactory instead, so that any stepsize can be created without explicitly specifying the manifold.
    • ConstantStepsize is no longer exported, use ConstantLength instead. The length parameter is now a positional argument following the (optional) manifold. Besides that, ConstantLength works as before, just that omitting the manifold fills the one specified in the solver now.
    • DecreasingStepsize is no longer exported, use DecreasingLength instead. DecreasingLength works as before, just that omitting the manifold fills the one specified in the solver now.
    • ArmijoLinesearch is now called ArmijoLinesearchStepsize. ArmijoLinesearch works as before, just that omitting the manifold fills the one specified in the solver now.
    • WolfePowellLinesearch is now called WolfePowellLinesearchStepsize, its constant c_1 is now unified with Armijo and called sufficient_decrease, c_2 was renamed to sufficient_curvature. Besides that, WolfePowellLinesearch works as before, just that omitting the manifold fills the one specified in the solver now.
    • WolfePowellBinaryLinesearch is now called WolfePowellBinaryLinesearchStepsize, its constant c_1 is now unified with Armijo and called sufficient_decrease, c_2 was renamed to sufficient_curvature. Besides that, WolfePowellBinaryLinesearch works as before, just that omitting the manifold fills the one specified in the solver now.
    • NonmonotoneLinesearch is now called NonmonotoneLinesearchStepsize. NonmonotoneLinesearch works as before, just that omitting the manifold fills the one specified in the solver now.
    • AdaptiveWNGradient is now called AdaptiveWNGradientStepsize. Its second positional argument, the gradient function was only evaluated once for the gradient_bound default, so it has been replaced by the keyword X= accepting a tangent vector. The last positional argument p has also been moved to a keyword argument. Besides that, AdaptiveWNGradient works as before, just that omitting the manifold fills the one specified in the solver now.
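To illustrate the renamings above, here is a minimal sketch of the new stepsize factories (assuming Manopt.jl ≥ 0.5 together with Manifolds.jl; the cost and gradient are toy examples, not from the package):

```julia
using Manopt, Manifolds

M = Sphere(2)
f(M, p) = p[1]^2
# Riemannian gradient: project the Euclidean gradient onto the tangent space
grad_f(M, p) = project(M, p, [2 * p[1], 0.0, 0.0])

# before 0.5.0: stepsize=ConstantStepsize(M, 0.1) — the manifold had to be passed
# since 0.5.0: the factory fills in the solver's manifold automatically
q = gradient_descent(M, f, grad_f, rand(M); stepsize=ConstantLength(0.1))
```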
  • Any DirectionUpdateRule now has the Rule in its name, since the original name is used to create the ManifoldDefaultsFactory instead. The original constructor now no longer requires the manifold as a parameter, that is later done in the factory. The Rule is, however, also no longer exported.
    • AverageGradient is now called AverageGradientRule. AverageGradient works as before, but the manifold as its first parameter is no longer necessary and p is now a keyword argument.
    • The IdentityUpdateRule now accepts a manifold optionally for consistency, and you can use Gradient() for short as well as its factory. Hence direction=Gradient() is now available.
    • MomentumGradient is now called MomentumGradientRule. MomentumGradient works as before, but the manifold as its first parameter is no longer necessary and p is now a keyword argument.
    • Nesterov is now called NesterovRule. Nesterov works as before, but the manifold as its first parameter is no longer necessary and p is now a keyword argument.
    • ConjugateDescentCoefficient is now called ConjugateDescentCoefficientRule. ConjugateDescentCoefficient works as before, but can now use the factory in between
    • the ConjugateGradientBealeRestart is now called ConjugateGradientBealeRestartRule. For the ConjugateGradientBealeRestart the manifold is now an optional first parameter, replacing the former manifold= keyword.
    • DaiYuanCoefficient is now called DaiYuanCoefficientRule. For the DaiYuanCoefficient the manifold as its first parameter is no longer necessary and the vector transport has been unified/moved to the vector_transport_method= keyword.
    • FletcherReevesCoefficient is now called FletcherReevesCoefficientRule. FletcherReevesCoefficient works as before, but can now use the factory in between
    • HagerZhangCoefficient is now called HagerZhangCoefficientRule. For the HagerZhangCoefficient the manifold as its first parameter is no longer necessary and the vector transport has been unified/moved to the vector_transport_method= keyword.
    • HestenesStiefelCoefficient is now called HestenesStiefelCoefficientRule. For the HestenesStiefelCoefficient the manifold as its first parameter is no longer necessary and the vector transport has been unified/moved to the vector_transport_method= keyword.
    • LiuStoreyCoefficient is now called LiuStoreyCoefficientRule. For the LiuStoreyCoefficient the manifold as its first parameter is no longer necessary and the vector transport has been unified/moved to the vector_transport_method= keyword.
    • PolakRibiereCoefficient is now called PolakRibiereCoefficientRule. For the PolakRibiereCoefficient the manifold as its first parameter is no longer necessary and the vector transport has been unified/moved to the vector_transport_method= keyword.
    • the SteepestDirectionUpdateRule is now called SteepestDescentCoefficientRule. The SteepestDescentCoefficient is equivalent, but creates the new factory in the interim.
    • AbstractGradientGroupProcessor is now called AbstractGradientGroupDirectionRule
    • the StochasticGradient is now called StochasticGradientRule. The StochasticGradient is equivalent, but creates the new factory in the interim, so that the manifold is no longer necessary.
    • the AlternatingGradient is now called AlternatingGradientRule. The AlternatingGradient is equivalent, but creates the new factory in the interim, so that the manifold is no longer necessary.
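Analogously to the step sizes, a hedged sketch of using a direction-update factory without passing the manifold (toy cost; assumes Manopt.jl ≥ 0.5 and Manifolds.jl):

```julia
using Manopt, Manifolds

M = Sphere(2)
f(M, p) = p[1]^2
grad_f(M, p) = project(M, p, [2 * p[1], 0.0, 0.0])

# previously the manifold was a positional argument: MomentumGradient(M, p)
# now the factory defers the manifold-specific allocation to the solver
q = gradient_descent(M, f, grad_f, rand(M); direction=MomentumGradient())
```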
  • quasi_Newton had a keyword scale_initial_operator= that was inconsistently declared (sometimes bool, sometimes real) and was unused. It is now called initial_scale=1.0 and scales the initial (diagonal, unit) matrix within the approximation of the Hessian additionally to the $\frac{1}{\lVert g_k\rVert}$ scaling with the norm of the oldest gradient for the limited memory variant. For the full matrix variant the initial identity matrix is now scaled with this parameter.
  • Unify doc strings and presentation of keyword arguments
    • general indexing, for example in a vector, uses i
    • index for inequality constraints is unified to i running from 1,...,m
    • index for equality constraints is unified to j running from 1,...,n
    • iterations are using now k
  • get_manopt_parameter has been renamed to get_parameter since it is internal, so internally that prefix is redundant; accessing it from outside reads Manopt.get_parameter anyway
  • set_manopt_parameter! has been renamed to set_parameter! for the same reason; accessing it from outside reads Manopt.set_parameter!
  • changed the stabilize::Bool= keyword in quasi_Newton to the more flexible project!= keyword, this is also more in line with the other solvers. Internally the same is done within the QuasiNewtonLimitedMemoryDirectionUpdate. To adapt,
    • the previous stabilize=true is now set with (project!)=embed_project! in general, and if the manifold is represented by points in the embedding, like the sphere, (project!)=project! suffices
    • the new default is (project!)=copyto!, so by default no projection/stabilization is performed.
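A short sketch of the project!= migration for quasi_Newton, assuming a manifold whose points are represented in the embedding, like the sphere (toy cost for illustration):

```julia
using Manopt, Manifolds

M = Sphere(2)
f(M, p) = p[1]^2
grad_f(M, p) = project(M, p, [2 * p[1], 0.0, 0.0])

# old: quasi_Newton(M, f, grad_f, p; stabilize=true)
# new: pass a projection; for points represented in the embedding project! suffices
q = quasi_Newton(M, f, grad_f, rand(M); (project!)=project!)
```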
  • the positional argument p (usually the last or the third to last if subsolvers existed) has been moved to a keyword argument p= in all State constructors
  • in NelderMeadState the population moved from positional to keyword argument as well
  • the way to initialise sub solvers in the solver states has been unified. In the new variant
    • the sub_problem is always a positional argument; namely the last one
    • if the sub_state is given as an optional positional argument after the problem, it has to be a manopt solver state
    • you can provide the new ClosedFormSolverState(e::AbstractEvaluationType) for the state to indicate that the sub_problem is a closed form solution (function call) and how it has to be called
    • if you do not provide the sub_state as positional, the keyword evaluation= is used to generate the state ClosedFormSolverState.
    • where previously p and possibly X were positional arguments, they are now moved to keyword arguments of the same name for the start point and tangent vector.
    • in detail
      • AdaptiveRegularizationState(M, sub_problem [, sub_state]; kwargs...) replaces the (anyway unused) variant to only provide the objective; both X and p moved to keyword arguments.
      • AugmentedLagrangianMethodState(M, objective, sub_problem; evaluation=...) was added
      • AugmentedLagrangianMethodState(M, objective, sub_problem, sub_state; evaluation=...) now has p=rand(M) as keyword argument instead of being the second positional one
      • ExactPenaltyMethodState(M, sub_problem; evaluation=...) was added and ExactPenaltyMethodState(M, sub_problem, sub_state; evaluation=...) now has p=rand(M) as keyword argument instead of being the second positional one
      • DifferenceOfConvexState(M, sub_problem; evaluation=...) was added and DifferenceOfConvexState(M, sub_problem, sub_state; evaluation=...) now has p=rand(M) as keyword argument instead of being the second positional one
      • DifferenceOfConvexProximalState(M, sub_problem; evaluation=...) was added and DifferenceOfConvexProximalState(M, sub_problem, sub_state; evaluation=...) now has p=rand(M) as keyword argument instead of being the second positional one
    • bumped Manifolds.jl to version 0.10; this mainly means that any algorithm working on a product manifold and requiring ArrayPartition now has to explicitly do using RecursiveArrayTools.

Fixed

  • the AverageGradientRule filled its internal vector of gradients wrongly – or mixed it up in parallel transport. This is now fixed.

Removed

  • the convex_bundle_method and its ConvexBundleMethodState no longer accept the keywords k_size, p_estimate nor ϱ, they are superseded by just providing k_max.
  • the truncated_conjugate_gradient_descent(M, f, grad_f, hess_f) now has the Hessian as a mandatory argument. To use the old variant, provide ApproxHessianFiniteDifference(M, copy(M, p), grad_f) to hess_f directly.
  • all deprecated keyword arguments and a few function signatures were removed:
    • get_equality_constraints, get_equality_constraints!, get_inequality_constraints, get_inequality_constraints! are removed. Use their singular forms and set the index to : instead.
    • StopWhenChangeLess(ε) is removed, use StopWhenChangeLess(M, ε) instead to fill, for example, the retraction properly used to determine the change
  • In the WolfePowellLinesearch and WolfePowellBinaryLinesearch the linesearch_stopsize= keyword is replaced by stop_when_stepsize_less=
  • DebugChange and RecordChange had a manifold= and an invretr keyword that were replaced by the first positional argument M and inverse_retraction_method=, respectively
  • in the NonlinearLeastSquaresObjective and LevenbergMarquardt the jacB= keyword is now called jacobian_tangent_basis=
  • in particle_swarm the n= keyword is replaced by swarm_size=.
  • update_stopping_criterion! has been removed and unified with set_parameter!. The code adaptions are
    • to set a parameter of a stopping criterion, just replace update_stopping_criterion!(sc, :Val, v) with set_parameter!(sc, :Val, v)
    • to update a stopping criterion in a solver state, replace the old update_stopping_criterion!(state, :Val, v) that passed down to the stopping criterion by the explicit pass down with set_parameter!(state, :StoppingCriterion, :Val, v)
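The two migration rules above can be sketched as follows; :MinGradNorm is a hypothetical parameter symbol used only for illustration:

```julia
using Manopt

sc = StopWhenGradientNormLess(1e-6)
# old: update_stopping_criterion!(sc, :MinGradNorm, 1e-8)
Manopt.set_parameter!(sc, :MinGradNorm, 1e-8)

# inside a solver state, pass down explicitly via :StoppingCriterion
# old: update_stopping_criterion!(state, :MinGradNorm, 1e-8)
# new: Manopt.set_parameter!(state, :StoppingCriterion, :MinGradNorm, 1e-8)
```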

[0.4.69] – August 3, 2024

Changed

  • Improved performance of Interior Point Newton Method.

[0.4.68] – August 2, 2024

Added

  • an Interior Point Newton Method, the interior_point_newton
  • a conjugate_residual Algorithm to solve a linear system on a tangent space.
  • ArmijoLinesearch now allows for the additional_decrease_condition and additional_increase_condition keywords to specify further conditions on when to accept a decrease or increase of the stepsize.
  • add a DebugFeasibility to have a debug print about feasibility of points in constrained optimisation employing the new is_feasible function
  • add an InteriorPointCentralityCondition check that can be added for step candidates within the line search of interior_point_newton
  • Add Several new functors
    • the LagrangianCost, LagrangianGradient, LagrangianHessian, that, based on a constrained objective, allow constructing the Hessian objective of its Lagrangian
    • the CondensedKKTVectorField and its CondensedKKTVectorFieldJacobian, that are being used to solve a linear system within interior_point_newton
    • the KKTVectorField as well as its KKTVectorFieldJacobian and KKTVectorFieldAdjointJacobian
    • the KKTVectorFieldNormSq and its KKTVectorFieldNormSqGradient used within the Armijo line search of interior_point_newton
  • New stopping criteria
    • A StopWhenRelativeResidualLess for the conjugate_residual
    • A StopWhenKKTResidualLess for the interior_point_newton

[0.4.67] – July 25, 2024

Added

  • max_stepsize methods for Hyperrectangle.

Fixed

  • a few typos in the documentation
  • WolfePowellLinesearch no longer uses max_stepsize with invalid point by default.

[0.4.66] June 27, 2024

Changed

  • Remove functions estimate_sectional_curvature, ζ_1, ζ_2, close_point from convex_bundle_method
  • Remove some unused fields and arguments such as p_estimate, ϱ, α, from ConvexBundleMethodState in favor of just k_max
  • Change parameter R placement in ProximalBundleMethodState to fifth position

[0.4.65] June 13, 2024

Changed

  • refactor stopping criteria to not store a sc.reason internally, but instead only generate the reason (and hence allocate a string) when actually asked for a reason.

[0.4.64] June 4, 2024

Added

  • Remodel the constraints and their gradients into separate VectorGradientFunctions to reduce code duplication and encapsulate the inner model of these functions and their gradients
  • Introduce a ConstrainedManoptProblem to model different ranges for the gradients in the new VectorGradientFunctions beyond the default NestedPowerRepresentation
  • introduce a VectorHessianFunction to also model that one can provide the vector of Hessians to constraints
  • introduce a more flexible indexing beyond single indexing, to also include arbitrary ranges when accessing vector functions and their gradients and hence also for constraints and their gradients.

Changed

  • Remodel ConstrainedManifoldObjective to store an AbstractManifoldObjective internally instead of directly f and grad_f, allowing also Hessian objectives therein and implementing access to this Hessian
  • Fixed a bug that Lanczos produced NaNs when started exactly in a minimizer, since we divide by the gradient norm.

Deprecated

  • deprecate get_grad_equality_constraints(M, o, p), use get_grad_equality_constraint(M, o, p, :) from the more flexible indexing instead.

[0.4.63] May 11, 2024

Added

  • :reinitialize_direction_update option for quasi-Newton behavior when the direction is not a descent one. It is now the new default for QuasiNewtonState.
  • Quasi-Newton direction update rules are now initialized upon start of the solver with the new internal function initialize_update!.

Fixed

  • ALM and EPM no longer keep a part of the quasi-Newton subsolver state between runs.

Changed

  • Quasi-Newton solvers: :reinitialize_direction_update is the new default behavior in case of detection of non-descent direction instead of :step_towards_negative_gradient. :step_towards_negative_gradient is still available when explicitly set using the nondescent_direction_behavior keyword argument.

[0.4.62] May 3, 2024

Changed

  • bumped dependency of ManifoldsBase.jl to 0.15.9 and imported their numerical verify functions. This changes the throw_error keyword used internally to an error= with a symbol.

[0.4.61] April 27, 2024

Added

  • Tests use Aqua.jl to spot problems in the code
  • introduce a feature-based list of solvers and reduce the details in the alphabetical list
  • adds a PolyakStepsize
  • added a get_subgradient for AbstractManifoldGradientObjectives since their gradient is a special case of a subgradient.

Fixed

  • get_last_stepsize was defined in quite different ways that caused ambiguities. That is now internally a bit restructured and should work nicer. Internally this means that the interim dispatch on get_last_stepsize(problem, state, step, vars...) was removed. Now the only two left are get_last_stepsize(p, s, vars...) and the one directly checking get_last_stepsize(::Stepsize) for stored values.
  • the accidentally exported set_manopt_parameter! is no longer exported

Changed

  • get_manopt_parameter and set_manopt_parameter! have been revised and better documented, they now use more semantic symbols (with capital letters) instead of direct field access (lower letter symbols). Since these are not exported, this is considered an internal, hence non-breaking change.
    • semantic symbols are now all nouns in upper case letters
    • :active is changed to :Activity

[0.4.60] April 10, 2024

Added

  • RecordWhenActive to allow records to be deactivated during runtime, symbol :WhenActive
  • RecordSubsolver to record the result of a subsolver recording in the main solver, symbol :Subsolver
  • RecordStoppingReason to record the reason a solver stopped
  • made the RecordFactory more flexible and quite similar to DebugFactory, such that it is now also easy to specify recordings at the end of solver runs. This can especially be used to record final states of sub solvers.

Changed

  • being a bit more strict with internal tools and made the factories for record non-exported, so this is the same as for debug.

Fixed

  • The name :Subsolver to generate DebugWhenActive was misleading, it is now called :WhenActive referring to “print debug only when set active, that is by the parent (main) solver”.
  • the old version of specifying Symbol => RecordAction for later access was ambiguous, since it could also mean to store the action in the dictionary under that symbol. Hence the order for access was switched to RecordAction => Symbol to resolve that ambiguity.
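A sketch of the new pair order for later record access (the symbol :MyCost is a hypothetical name chosen for illustration; assumes Manopt.jl ≥ 0.4.60 and Manifolds.jl):

```julia
using Manopt, Manifolds

M = Euclidean(2)
f(M, p) = sum(p .^ 2)
grad_f(M, p) = 2 .* p

# RecordAction => Symbol stores the action under that symbol for later access;
# the old Symbol => RecordAction order was ambiguous and was switched
s = gradient_descent(M, f, grad_f, [1.0, 1.0];
    record=[RecordCost() => :MyCost], return_state=true)
```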

[0.4.59] April 7, 2024

Added

  • A Riemannian variant of the CMA-ES (Covariance Matrix Adaptation Evolutionary Strategy) algorithm, cma_es.

Fixed

  • The constructor dispatch for StopWhenAny with Vector had incorrect element type assertion which was fixed.

[0.4.58] March 18, 2024

Added

  • more advanced methods to add debug to the beginning of an algorithm, a step, or the end of the algorithm with DebugAction entries at :Start, :BeforeIteration, :Iteration, and :Stop, respectively.
  • Introduce a Pair-based format to add elements to these hooks, while all others are now added to :Iteration (no longer to :All)
  • (planned) add an easy possibility to also record the initial stage and not only after the first iteration.

Changed

  • Changed the symbol for the :Step dictionary to be :Iteration, to unify this with the symbols used in recording, and removed the :All symbol. On the fine granular scale, all but :Start debugs are now reset on init. Since these are merely internal entries in the debug dictionary, this is considered non-breaking.
  • introduce a StopWhenSwarmVelocityLess stopping criterion for particle_swarm replacing the current default of the swarm change, since this is a bit more effective to compute

Fixed

  • fixed the outdated documentation of TruncatedConjugateGradientState, which now correctly states that p is no longer stored, but the algorithm runs on TpM.
  • implemented the missing get_iterate for TruncatedConjugateGradientState.

[0.4.57] March 15, 2024

Changed

  • convex_bundle_method uses the sectional_curvature from ManifoldsBase.jl.
  • convex_bundle_method no longer has the unused k_min keyword argument.
  • ManifoldsBase.jl now is running on Documenter 1.3, Manopt.jl documentation now uses DocumenterInterLinks to refer to sections and functions from ManifoldsBase.jl

Fixed

  • fixes a typo that, when passing sub_kwargs to trust_regions, caused an error in the decoration of the sub objective.

[0.4.56] March 4, 2024

Added

  • The option :step_towards_negative_gradient for nondescent_direction_behavior in quasi-Newton solvers no longer emits a warning by default. This has been moved to a message that can be accessed/displayed with DebugMessages
  • DebugMessages now has a second positional argument, specifying whether all messages, or just the first (:Once) should be displayed.

[0.4.55] March 3, 2024

Added

  • Option nondescent_direction_behavior for quasi-Newton solvers. By default it checks for non-descent direction which may not be handled well by some stepsize selection algorithms.

Fixed

  • unified documentation, especially function signatures further.
  • fixed a few typos related to math formulae in the doc strings.

[0.4.54] February 28, 2024

Added

  • convex_bundle_method optimization algorithm for non-smooth geodesically convex functions
  • proximal_bundle_method optimization algorithm for non-smooth functions.
  • the StopWhenSubgradientNormLess and StopWhenLagrangeMultiplierLess stopping criteria.

Fixed

  • Doc strings now follow a vale.sh policy. Though this is not fully working, this PR improves a lot of the doc strings concerning wording and spelling.

[0.4.53] February 13, 2024

Fixed

  • fixes two storage action defaults, that accidentally still tried to initialize a :Population (as modified back to :Iterate 0.4.49).
  • fix a few typos in the documentation and add a reference for the subgradient method.

[0.4.52] February 5, 2024

Added

  • introduce an environment persistent way of setting global values with the set_manopt_parameter! function using Preferences.jl.
  • introduce such a value named :Mode to enable a "Tutorial" mode that shall often provide more warnings and information for people getting started with optimisation on manifolds
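A sketch of toggling the persistent "Tutorial" mode; resetting via an empty string is an assumption about how the preference is cleared:

```julia
using Manopt

# persistently enable tutorial mode (more warnings and hints), stored via Preferences.jl
Manopt.set_manopt_parameter!(:Mode, "Tutorial")

# ...and later switch it off again (assumed reset)
Manopt.set_manopt_parameter!(:Mode, "")
```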

[0.4.51] January 30, 2024

Added

  • A StopWhenSubgradientNormLess stopping criterion for subgradient-based optimization.
  • Allow the message= of the DebugIfEntry debug action to contain a format element to print the field in the message as well.

[0.4.50] January 26, 2024

Fixed

  • Fix Quasi Newton on complex manifolds.

[0.4.49] January 18, 2024

Added

  • A StopWhenEntryChangeLess to be able to stop on arbitrary small changes of specific fields
  • generalises StopWhenGradientNormLess to accept arbitrary norm= functions
  • refactor the default in particle_swarm to no longer “misuse” the iteration change, but actually use the new :swarm entry
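A hedged sketch of passing a custom norm to the generalized stopping criterion; the exact callback signature (M, p, X) is an assumption for illustration:

```julia
using Manopt, Manifolds, LinearAlgebra

M = Euclidean(3)
f(M, p) = sum(p .^ 2)
grad_f(M, p) = 2 .* p

# stop when the one-norm (instead of the default norm) of the gradient is small
q = gradient_descent(M, f, grad_f, [1.0, 2.0, 3.0];
    stopping_criterion=StopWhenGradientNormLess(1e-6; norm=(M, p, X) -> norm(X, 1)),
)
```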

[0.4.48] January 16, 2024

Fixed

  • fixes an imprecision in the interface of get_iterate that sometimes led to the swarm of particle_swarm being returned as the iterate.
  • refactor particle_swarm in naming and access functions to avoid this also in the future. To access the whole swarm, one now should use get_manopt_parameter(pss, :Population)

[0.4.47] January 6, 2024

Fixed

  • fixed a bug, where the retraction set in check_Hessian was not passed on to the optional inner check_gradient call, which could lead to unwanted side effects, see #342.

[0.4.46] January 1, 2024

Changed

  • An error is thrown when a line search from LineSearches.jl reports search failure.
  • Changed default stopping criterion in ALM algorithm to mitigate an issue occurring when step size is very small.
  • Default memory length in default ALM subsolver is now capped at manifold dimension.
  • Replaced CI testing on Julia 1.8 with testing on Julia 1.10.

Fixed

  • A bug in LineSearches.jl extension leading to slower convergence.
  • Fixed a bug in L-BFGS related to memory storage, which caused significantly slower convergence.

[0.4.45] December 28, 2023

Added

  • Introduce sub_kwargs and sub_stopping_criterion for trust_regions as noticed in #336

Changed

  • WolfePowellLineSearch, ArmijoLineSearch step sizes now allocate less
  • linesearch_backtrack! is now available
  • Quasi Newton Updates can work in-place of a direction vector as well.
  • Faster safe_indices in L-BFGS.

[0.4.44] December 12, 2023

Formally one could consider this version breaking, since a few functions have been moved, that in earlier versions (0.3.x) have been used in example scripts. These examples are now available again within ManoptExamples.jl, and with their “reappearance” the corresponding costs, gradients, differentials, adjoint differentials, and proximal maps have been moved there as well. This is not considered breaking, since the functions were only used in the old, removed examples. Each and every moved function is still documented. They have been partly renamed, and their documentation and testing has been extended.

Changed

[0.4.43] November 19, 2023

Added

  • vale.sh as a CI to keep track of a consistent documentation

[0.4.42] November 6, 2023

Added

  • add Manopt.JuMP_Optimizer implementing JuMP's solver interface
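A rough sketch of how the new JuMP interface can be used; the concrete variable, manifold, and objective here are illustrative only:

```julia
using JuMP, Manopt, Manifolds

# A JuMP model backed by the new Manopt optimizer
model = Model(Manopt.JuMP_Optimizer)
# A point on the 2-sphere (unit vectors in R^3) as the decision variable
@variable(model, p[1:3] in Sphere(2))
# Minimize the (Euclidean) distance to a target point
d = [1.0, 0.0, 0.0]
@objective(model, Min, sum((p[i] - d[i])^2 for i in 1:3))
optimize!(model)
value.(p)
```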

[0.4.41] November 2, 2023

Changed

  • trust_regions is now more flexible and the sub solver (Steihaug-Toint tCG by default) can now be exchanged.
  • adaptive_regularization_with_cubics is now more flexible as well, where it previously was a bit too tightly coupled to the Lanczos solver.
  • Unified documentation notation and bumped dependencies to use DocumenterCitations 1.3

[0.4.40] October 24, 2023

Added

  • add a --help argument to docs/make.jl to document all available command line arguments
  • add a --exclude-tutorials argument to docs/make.jl. This way, when quarto is not available on a computer, the docs can still be built with the tutorials not being added to the menu, such that Documenter does not expect them to exist.

Changes

  • Bump dependencies to ManifoldsBase.jl 0.15 and Manifolds.jl 0.9
  • move the ARC CG subsolver to the main package, since TangentSpace is now already available from ManifoldsBase.

[0.4.39] October 9, 2023

Changes

  • also use the pair of a retraction and the inverse retraction (see last update) to perform the relaxation within the Douglas-Rachford algorithm.

[0.4.38] October 8, 2023

Changes

  • avoid allocations when calling get_jacobian! within the Levenberg-Marquardt Algorithm.

Fixed

  • Fix a lot of typos in the documentation

[0.4.37] September 28, 2023

Changes

  • add more of the Riemannian Levenberg-Marquardt algorithm's parameters as keywords, so they can be changed on call
  • generalize the internal reflection of Douglas-Rachford, such that it also works with an arbitrary pair of a reflection and an inverse reflection.

[0.4.36] September 20, 2023

Fixed

  • Fixed a bug that caused non-matrix points and vectors to fail when working with approximate

[0.4.35] September 14, 2023

Added

  • The access to functions of the objective is now unified and encapsulated in proper get_ functions.

[0.4.34] September 02, 2023

Added

  • a ManifoldEuclideanGradientObjective to allow the cost, gradient, and Hessian, as well as other first- or second-derivative based elements, to be Euclidean and converted when needed.
  • a keyword objective_type=:Euclidean for all solvers, that specifies that an objective of the new type shall be created

[0.4.33] August 24, 2023

Added

  • ConstantStepsize and DecreasingStepsize now have an additional field type::Symbol to assess whether the step-size should be relatively (to the gradient norm) or absolutely constant.

[0.4.32] August 23, 2023

Added

  • The adaptive regularization with cubics (ARC) solver.

[0.4.31] August 14, 2023

Added

  • A :Subsolver keyword in the debug= keyword argument that activates the new DebugWhenActive to de/activate subsolver debug from the main solver's DebugEvery.

[0.4.30] August 3, 2023

Changed

  • References in the documentation are now rendered using DocumenterCitations.jl
  • Asymptote export now also accepts a size in pixels instead of its default 4cm size, and rendering can be deactivated by setting it to nothing.

[0.4.29] July 12, 2023

Fixed

  • fixed a bug, where cyclic_proximal_point did not work with decorated objectives.

[0.4.28] June 24, 2023

Changed

  • max_stepsize was specialized for FixedRankManifold to follow Matlab Manopt.

[0.4.27] June 15, 2023

Added

  • The AdaptiveWNGrad stepsize is available as a new stepsize functor.

Fixed

  • Levenberg-Marquardt now possesses its parameters initial_residual_values and initial_jacobian_f also as keyword arguments, such that their default initialisations can be adapted, if necessary

[0.4.26] June 11, 2023

Added

  • simplify usage of gradient descent as sub solver in the DoC solvers.
  • add a get_state function
  • document indicates_convergence.

[0.4.25] June 5, 2023

Fixed

  • Fixes an allocation bug in the difference of convex algorithm

[0.4.24] June 4, 2023

Added

  • another workflow that deletes old PR renderings from the docs to keep them smaller in overall size.

Changes

  • bump dependencies since the extension between Manifolds.jl and ManifoldsDiff.jl has been moved to Manifolds.jl

[0.4.23] June 4, 2023

Added

  • More details on the Count and Cache tutorial

Changed

  • loosen constraints slightly

[0.4.22] May 31, 2023

Added

  • A tutorial on how to implement a solver

[0.4.21] May 22, 2023

Added

  • A ManifoldCacheObjective as a decorator for objectives to cache results of calls, using LRU Caches as a weak dependency. For now this works with cost and gradient evaluations
  • A ManifoldCountObjective as a decorator for objectives to enable counting of calls to for example the cost and the gradient
  • adds a return_objective keyword, that switches the return of a solver to a tuple (o, s), where o is the (possibly decorated) objective, and s is the “classical” solver return (state or point). This way the counted values can be accessed and the cache can be reused.
  • change solvers on the mid level (of the form solver(M, objective, p)) to also accept decorated objectives
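Taken together, these additions might be used as in the following sketch; the small cost and gradient on the sphere are made up for illustration:

```julia
using Manopt, Manifolds

M = Sphere(2)
f(M, p) = p[1]^2
# Riemannian gradient: project the Euclidean gradient onto the tangent space
grad_f(M, p) = [2p[1], 0.0, 0.0] .- (2p[1]^2) .* p

# decorate the objective to count cost and gradient calls
obj = ManifoldCountObjective(M, ManifoldGradientObjective(f, grad_f), [:Cost, :Gradient])
# return_objective=true returns the (decorated) objective alongside the result
obj2, q = gradient_descent(M, obj, [0.0, 0.0, 1.0]; return_objective=true)
get_count(obj2, :Gradient)  # number of gradient evaluations performed
```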

Changed

  • Switch all Requires weak dependencies to actual weak dependencies starting in Julia 1.9

[0.4.20] May 11, 2023

Changed

  • the default tolerances for the numerical check_ functions were loosened a bit, such that check_vector can also be changed in its tolerances.

[0.4.19] May 7, 2023

Added

  • the sub solver for trust_regions is now customizable and can be exchanged.

Changed

  • slightly changed the definitions of the solver states for ALM and EPM to be type stable

[0.4.18] May 4, 2023

Added

  • A function check_Hessian(M, f, grad_f, Hess_f) to numerically verify the (Riemannian) Hessian of a function f

[0.4.17] April 28, 2023

Added

  • A new interface of the form alg(M, objective, p0) to allow to reuse objectives without creating AbstractManoptSolverStates and calling solve!. This especially still allows for any decoration of the objective and/or the state using debug=, or record=.

Changed

  • All solvers now have the initial point p as an optional parameter making it more accessible to first time users, gradient_descent(M, f, grad_f) is equivalent to gradient_descent(M, f, grad_f, rand(M))
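As a small illustration of the now-optional start point; cost and gradient are made up for the example:

```julia
using Manopt, Manifolds

M = Sphere(2)
f(M, p) = p[1]^2
grad_f(M, p) = [2p[1], 0.0, 0.0] .- (2p[1]^2) .* p

# these two calls are equivalent; omitting the start point uses a random one
q1 = gradient_descent(M, f, grad_f)
q2 = gradient_descent(M, f, grad_f, rand(M))
```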

Fixed

  • Unified the framework to work on manifold where points are represented by numbers for several solvers

[0.4.16] April 18, 2023

Fixed

  • the inner products used in truncated_gradient_descent now also work thoroughly on complex matrix manifolds

[0.4.15] April 13, 2023

Changed

  • trust_regions(M, f, grad_f, hess_f, p) now has the Hessian hess_f as well as the start point p as optional arguments and approximates the Hessian otherwise.
  • trust_regions!(M, f, grad_f, hess_f, p) likewise has the Hessian as an optional argument and approximates it otherwise.

Removed

  • support for ManifoldsBase.jl 0.13.x; since copy(M, p::Number) is defined in 0.14.4, that one is used instead of defining it ourselves.

[0.4.14] April 06, 2023

Changed

  • particle_swarm now uses much more in-place operations

Fixed

  • particle_swarm used quite a few deepcopy(p) commands still, which were replaced by copy(M, p)

[0.4.13] April 09, 2023

Added

  • get_message to obtain messages from sub steps of a solver
  • DebugMessages to display the new messages in debug
  • safeguards in Armijo line search and L-BFGS against numerical over- and underflow that report in messages

[0.4.12] April 4, 2023

Added

[0.4.11] March 27, 2023

Changed

  • adapt tolerances in tests to the speed/accuracy optimized distance on the sphere in Manifolds.jl (part II)

[0.4.10] March 26, 2023

Changed

  • adapt tolerances in tests to the speed/accuracy optimized distance on the sphere in Manifolds.jl

[0.4.9] March 3, 2023

Added

[0.4.8] February 21, 2023

Added

  • a status_summary that displays the main parameters within several structures of Manopt, most prominently a solver state

Changed

  • Improved storage performance by introducing separate named tuples for points and vectors
  • changed the show methods of AbstractManoptSolverStates to display their status_summary
  • Move tutorials to be rendered with Quarto into the documentation.

[0.4.7] February 14, 2023

Changed

  • Bump [compat] entry of ManifoldDiff to also include 0.3

[0.4.6] February 3, 2023

Fixed

  • Fixed a few stopping criteria that even indicated to stop before the algorithm started.

[0.4.5] January 24, 2023

Changed

  • the new default functions that include p are used where possible
  • a first step towards faster storage handling

[0.4.4] January 20, 2023

Added

  • Introduce ConjugateGradientBealeRestart to allow CG restarts using Beale's rule

Fixed

  • fix a typo in HestenesStiefelCoefficient

[0.4.3] January 17, 2023

Fixed

  • the CG coefficient β can now be complex
  • fix a bug in grad_distance

[0.4.2] January 16, 2023

Changed

  • the usage of inner in line search methods, such that they work well with complex manifolds as well

[0.4.1] January 15, 2023

Fixed

  • a max_stepsize per manifold to avoid leaving the injectivity radius, which it also defaults to

[0.4.0] January 10, 2023

Added

  • Dependency on ManifoldDiff.jl and a start of moving actual derivatives, differentials, and gradients there.
  • AbstractManifoldObjective to store the objective within the AbstractManoptProblem
  • Introduce a CostGrad structure to store a function that computes the cost and gradient within one function.
  • started a changelog.md to thoroughly keep track of changes

Changed

  • AbstractManoptProblem replaces Problem
  • the problem now contains a
  • AbstractManoptSolverState replaces Options
  • random_point(M) is replaced by rand(M) from ManifoldsBase.jl
  • random_tangent(M, p) is replaced by rand(M; vector_at=p)

Changelog

All notable Changes to the Julia package Manopt.jl will be documented in this file. The file was started with Version 0.4.

The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.

[0.5.4] - unreleased

Added

  • An automated detection whether the tutorials are present; if they are not, no quarto run is done and an automated --exclude-tutorials option is added.

[0.5.3] – October 18, 2024

Added

  • StopWhenChangeLess, StopWhenGradientChangeLess and StopWhenGradientNormLess can now use the new idea (ManifoldsBase.jl 0.15.18) of different outer norms on manifolds with components, like power and product manifolds, and all others that support this from the Manifolds.jl library, like Euclidean

Changed

  • stabilize max_stepsize to also work when injectivity_radius does not exist; it however warns new users that activate tutorial mode.
  • Start a ManoptTestSuite subpackage to store dummy types and common test helpers in.

[0.5.2] – October 5, 2024

Added

  • three new symbols to more easily state recording of the :Gradient, the :GradientNorm, and the :Stepsize.

Changed

[0.5.1] – September 4, 2024

Changed

  • slightly improves the test for the ExponentialFamilyProjection text on the about page.

Added

  • the proximal_point method.

[0.5.0] – August 29, 2024

This breaking update is mainly concerned with improving a unified experience through all solvers and some usability improvements, such that for example the different gradient update rules are easier to specify.

In general, we introduce a few factories that avoid having to pass the manifold to keyword arguments.

Added

  • A ManifoldDefaultsFactory that postpones the creation/allocation of manifold-specific fields in for example direction updates, step sizes and stopping criteria. As a rule of thumb, internal structures, like a solver state should store the final type. Any high-level interface, like the functions to start solvers, should accept such a factory in the appropriate places and call the internal _produce_type(factory, M), for example before passing something to the state.
  • a documentation_glossary.jl file containing a glossary of often used variables in fields, arguments, and keywords, to print them in a unified manner. The same for usual sections, tex, and math notation that is often used within the doc-strings.

Changed

  • Any Stepsize now has a Stepsize struct used internally, as the original structs did before. The newly exported names aim to fit stepsize=... in naming and create a ManifoldDefaultsFactory instead, so that any stepsize can be created without explicitly specifying the manifold.
    • ConstantStepsize is no longer exported, use ConstantLength instead. The length parameter is now a positional argument following the (optional) manifold. Besides that, ConstantLength works as before, just that omitting the manifold fills in the one specified in the solver now.
    • DecreasingStepsize is no longer exported, use DecreasingLength instead. DecreasingLength works as before, just that omitting the manifold fills in the one specified in the solver now.
    • ArmijoLinesearch is now called ArmijoLinesearchStepsize. ArmijoLinesearch works as before, just that omitting the manifold fills in the one specified in the solver now.
    • WolfePowellLinesearch is now called WolfePowellLinesearchStepsize, its constant c_1 is now unified with Armijo and called sufficient_decrease, c_2 was renamed to sufficient_curvature. Besides that, WolfePowellLinesearch works as before, just that omitting the manifold fills the one specified in the solver now.
    • WolfePowellBinaryLinesearch is now called WolfePowellBinaryLinesearchStepsize, its constant c_1 is now unified with Armijo and called sufficient_decrease, c_2 was renamed to sufficient_curvature. Besides that, WolfePowellBinaryLinesearch works as before, just that omitting the manifold fills the one specified in the solver now.
    • NonmonotoneLinesearch is now called NonmonotoneLinesearchStepsize. NonmonotoneLinesearch works as before, just that omitting the manifold fills the one specified in the solver now.
    • AdaptiveWNGradient is now called AdaptiveWNGradientStepsize. Its second positional argument, the gradient function was only evaluated once for the gradient_bound default, so it has been replaced by the keyword X= accepting a tangent vector. The last positional argument p has also been moved to a keyword argument. Besides that, AdaptiveWNGradient works as before, just that omitting the manifold fills the one specified in the solver now.
  • Any DirectionUpdateRule now has the Rule in its name, since the original name is used to create the ManifoldDefaultsFactory instead. The original constructor now no longer requires the manifold as a parameter, that is later done in the factory. The Rule is, however, also no longer exported.
    • AverageGradient is now called AverageGradientRule. AverageGradient works as before, but the manifold as its first parameter is no longer necessary and p is now a keyword argument.
    • The IdentityUpdateRule now accepts a manifold optionally for consistency, and you can use Gradient() for short as well as its factory. Hence direction=Gradient() is now available.
    • MomentumGradient is now called MomentumGradientRule. MomentumGradient works as before, but the manifold as its first parameter is no longer necessary and p is now a keyword argument.
    • Nesterov is now called NesterovRule. Nesterov works as before, but the manifold as its first parameter is no longer necessary and p is now a keyword argument.
    • ConjugateDescentCoefficient is now called ConjugateDescentCoefficientRule. ConjugateDescentCoefficient works as before, but can now use the factory in between
    • the ConjugateGradientBealeRestart is now called ConjugateGradientBealeRestartRule. For ConjugateGradientBealeRestart the manifold is now an optional first parameter and no longer the manifold= keyword.
    • DaiYuanCoefficient is now called DaiYuanCoefficientRule. For the DaiYuanCoefficient the manifold as its first parameter is no longer necessary and the vector transport has been unified/moved to the vector_transport_method= keyword.
    • FletcherReevesCoefficient is now called FletcherReevesCoefficientRule. FletcherReevesCoefficient works as before, but can now use the factory in between
    • HagerZhangCoefficient is now called HagerZhangCoefficientRule. For the HagerZhangCoefficient the manifold as its first parameter is no longer necessary and the vector transport has been unified/moved to the vector_transport_method= keyword.
    • HestenesStiefelCoefficient is now called HestenesStiefelCoefficientRule. For the HestenesStiefelCoefficient the manifold as its first parameter is no longer necessary and the vector transport has been unified/moved to the vector_transport_method= keyword.
    • LiuStoreyCoefficient is now called LiuStoreyCoefficientRule. For the LiuStoreyCoefficient the manifold as its first parameter is no longer necessary and the vector transport has been unified/moved to the vector_transport_method= keyword.
    • PolakRibiereCoefficient is now called PolakRibiereCoefficientRule. For the PolakRibiereCoefficient the manifold as its first parameter is no longer necessary and the vector transport has been unified/moved to the vector_transport_method= keyword.
    • the SteepestDirectionUpdateRule is now called SteepestDescentCoefficientRule. The SteepestDescentCoefficient is equivalent, but creates the new factory interims wise.
    • AbstractGradientGroupProcessor is now called AbstractGradientGroupDirectionRule
    • the StochasticGradient is now called StochasticGradientRule. The StochasticGradient is equivalent, but creates the new factory interims wise, so that the manifold is no longer necessary.
    • the AlternatingGradient is now called AlternatingGradientRule. The AlternatingGradient is equivalent, but creates the new factory interims wise, so that the manifold is no longer necessary.
  • quasi_Newton had a keyword scale_initial_operator= that was inconsistently declared (sometimes bool, sometimes real) and was unused. It is now called initial_scale=1.0 and scales the initial (diagonal, unit) matrix within the approximation of the Hessian additionally to the $\frac{1}{\lVert g_k\rVert}$ scaling with the norm of the oldest gradient for the limited memory variant. For the full matrix variant the initial identity matrix is now scaled with this parameter.
  • Unify doc strings and presentation of keyword arguments
    • general indexing, for example in a vector, uses i
    • index for inequality constraints is unified to i running from 1,...,m
    • index for equality constraints is unified to j running from 1,...,n
    • iterations are using now k
  • get_manopt_parameter has been renamed to get_parameter: since it is internal, the prefix is clear from context; accessing it from outside hence reads Manopt.get_parameter
  • set_manopt_parameter! has been renamed to set_parameter! for the same reason; accessing it from outside hence reads Manopt.set_parameter!
  • changed the stabilize::Bool= keyword in quasi_Newton to the more flexible project!= keyword, this is also more in line with the other solvers. Internally the same is done within the QuasiNewtonLimitedMemoryDirectionUpdate. To adapt,
    • the previous stabilize=true is now set with (project!)=embed_project! in general, and if the manifold is represented by points in the embedding, like the sphere, (project!)=project! suffices
    • the new default is (project!)=copyto!, so by default no projection/stabilization is performed.
  • the positional argument p (usually the last or the third to last if subsolvers existed) has been moved to a keyword argument p= in all State constructors
  • in NelderMeadState the population moved from positional to keyword argument as well,
  • the way to initialise sub solvers in the solver states has been unified. In the new variant
    • the sub_problem is always a positional argument; namely the last one
    • if the sub_state is given as an optional positional argument after the problem, it has to be a manopt solver state
    • you can provide the new ClosedFormSolverState(e::AbstractEvaluationType) for the state to indicate that the sub_problem is a closed form solution (function call) and how it has to be called
    • if you do not provide the sub_state as positional, the keyword evaluation= is used to generate the state ClosedFormSolverState.
    • when previously p and eventually X where positional arguments, they are now moved to keyword arguments of the same name for start point and tangent vector.
    • in detail
      • AdaptiveRegularizationState(M, sub_problem [, sub_state]; kwargs...) replaces the (anyways unused) variant to only provide the objective; both X and p moved to keyword arguments.
      • AugmentedLagrangianMethodState(M, objective, sub_problem; evaluation=...) was added
      • AugmentedLagrangianMethodState(M, objective, sub_problem, sub_state; evaluation=...) now has p=rand(M) as keyword argument instead of being the second positional one
      • ExactPenaltyMethodState(M, sub_problem; evaluation=...) was added and ExactPenaltyMethodState(M, sub_problem, sub_state; evaluation=...) now has p=rand(M) as keyword argument instead of being the second positional one
      • DifferenceOfConvexState(M, sub_problem; evaluation=...) was added and DifferenceOfConvexState(M, sub_problem, sub_state; evaluation=...) now has p=rand(M) as keyword argument instead of being the second positional one
      • DifferenceOfConvexProximalState(M, sub_problem; evaluation=...) was added and DifferenceOfConvexProximalState(M, sub_problem, sub_state; evaluation=...) now has p=rand(M) as keyword argument instead of being the second positional one
    • bumped Manifolds.jl to version 0.10; this mainly means that any algorithm working on a product manifold and requiring ArrayPartition now has to explicitly do using RecursiveArrayTools.
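As one migration example for the step-size changes above, a sketch; the cost and gradient are made up and the constant length 0.1 is an arbitrary example value:

```julia
using Manopt, Manifolds

M = Sphere(2)
f(M, p) = p[1]^2
grad_f(M, p) = [2p[1], 0.0, 0.0] .- (2p[1]^2) .* p

# before 0.5.0 the manifold had to be passed to the step size explicitly:
#   gradient_descent(M, f, grad_f; stepsize=ConstantStepsize(M, 0.1))
# the new factory fills in the solver's manifold automatically:
q = gradient_descent(M, f, grad_f; stepsize=ConstantLength(0.1))
```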

Fixed

  • the AverageGradientRule filled its internal vector of gradients wrongly – or mixed it up in parallel transport. This is now fixed.

Removed

  • the convex_bundle_method and its ConvexBundleMethodState no longer accept the keywords k_size, p_estimate nor ϱ, they are superseded by just providing k_max.
  • truncated_conjugate_gradient_descent(M, f, grad_f, hess_f) now has the Hessian as a mandatory argument. To use the old variant, provide ApproxHessianFiniteDifference(M, copy(M, p), grad_f) to hess_f directly.
  • all deprecated keyword arguments and a few function signatures were removed:
    • get_equality_constraints, get_equality_constraints!, get_inequality_constraints, get_inequality_constraints! are removed. Use their singular forms and set the index to : instead.
    • StopWhenChangeLess(ε) is removed, use StopWhenChangeLess(M, ε) instead to fill for example the retraction properly used to determine the change
  • In the WolfePowellLinesearch and WolfePowellBinaryLinesearch the linesearch_stopsize= keyword is replaced by stop_when_stepsize_less=
  • DebugChange and RecordChange had a manifold= and an invretr keyword that were replaced by the first positional argument M and inverse_retraction_method=, respectively
  • in the NonlinearLeastSquaresObjective and LevenbergMarquardt the jacB= keyword is now called jacobian_tangent_basis=
  • in particle_swarm the n= keyword is replaced by swarm_size=.
  • update_stopping_criterion! has been removed and unified with set_parameter!. The code adaptions are
    • to set a parameter of a stopping criterion, just replace update_stopping_criterion!(sc, :Val, v) with set_parameter!(sc, :Val, v)
    • to update a stopping criterion in a solver state, replace the old update_stopping_criterion!(state, :Val, v), that passed down to the stopping criterion, by the explicit pass down with set_parameter!(state, :StoppingCriterion, :Val, v)
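The two code adaptions above might look as follows; the :MaxIteration symbol is an assumed example value for a StopAfterIteration criterion:

```julia
using Manopt

sc = StopAfterIteration(100)
# formerly: update_stopping_criterion!(sc, :MaxIteration, 200)
Manopt.set_parameter!(sc, :MaxIteration, 200)

# within a solver state, the pass-down is now explicit, e.g.
#   set_parameter!(state, :StoppingCriterion, :MaxIteration, 200)
```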

[0.4.69] – August 3, 2024

Changed

  • Improved performance of Interior Point Newton Method.

[0.4.68] – August 2, 2024

Added

  • an Interior Point Newton Method, the interior_point_newton
  • a conjugate_residual Algorithm to solve a linear system on a tangent space.
  • ArmijoLinesearch now allows for the additional_decrease_condition and additional_increase_condition keywords to add further conditions for when to accept a decrease or increase of the stepsize.
  • add a DebugFeasibility to have a debug print about feasibility of points in constrained optimisation employing the new is_feasible function
  • add a InteriorPointCentralityCondition check that can be added for step candidates within the line search of interior_point_newton
  • Add Several new functors
    • the LagrangianCost, LagrangianGradient, LagrangianHessian, that, based on a constrained objective, allow constructing the Hessian objective of its Lagrangian
    • the CondensedKKTVectorField and its CondensedKKTVectorFieldJacobian, that are being used to solve a linear system within interior_point_newton
    • the KKTVectorField as well as its KKTVectorFieldJacobian and KKTVectorFieldAdjointJacobian
    • the KKTVectorFieldNormSq and its KKTVectorFieldNormSqGradient used within the Armijo line search of interior_point_newton
  • New stopping criteria
    • A StopWhenRelativeResidualLess for the conjugate_residual
    • A StopWhenKKTResidualLess for the interior_point_newton

[0.4.67] – July 25, 2024

Added

  • max_stepsize methods for Hyperrectangle.

Fixed

  • a few typos in the documentation
  • WolfePowellLinesearch no longer uses max_stepsize with invalid point by default.

[0.4.66] June 27, 2024

Changed

  • Remove functions estimate_sectional_curvature, ζ_1, ζ_2, close_point from convex_bundle_method
  • Remove some unused fields and arguments such as p_estimate, ϱ, α, from ConvexBundleMethodState in favor of just k_max
  • Change parameter R placement in ProximalBundleMethodState to fifth position

[0.4.65] June 13, 2024

Changed

  • refactor stopping criteria to not store a sc.reason internally, but instead only generate the reason (and hence allocate a string) when actually asked for a reason.

[0.4.64] June 4, 2024

Added

  • Remodel the constraints and their gradients into separate VectorGradientFunctions to reduce code duplication and encapsulate the inner model of these functions and their gradients
  • Introduce a ConstrainedManoptProblem to model different ranges for the gradients in the new VectorGradientFunctions beyond the default NestedPowerRepresentation
  • introduce a VectorHessianFunction to also model that one can provide the vector of Hessians to constraints
  • introduce a more flexible indexing beyond single indexing, to also include arbitrary ranges when accessing vector functions and their gradients and hence also for constraints and their gradients.

Changed

  • Remodel ConstrainedManifoldObjective to store an AbstractManifoldObjective internally instead of directly f and grad_f, allowing also Hessian objectives therein and implementing access to this Hessian
  • Fixed a bug that Lanczos produced NaNs when started exactly in a minimizer, since we divide by the gradient norm.

Deprecated

  • deprecate get_grad_equality_constraints(M, o, p), use get_grad_equality_constraint(M, o, p, :) from the more flexible indexing instead.

[0.4.63] May 11, 2024

Added

  • :reinitialize_direction_update option for quasi-Newton behavior when the direction is not a descent one. It is now the new default for QuasiNewtonState.
  • Quasi-Newton direction update rules are now initialized upon start of the solver with the new internal function initialize_update!.

Fixed

  • ALM and EPM no longer keep a part of the quasi-Newton subsolver state between runs.

Changed

  • Quasi-Newton solvers: :reinitialize_direction_update is the new default behavior in case of detection of non-descent direction instead of :step_towards_negative_gradient. :step_towards_negative_gradient is still available when explicitly set using the nondescent_direction_behavior keyword argument.

[0.4.62] May 3, 2024

Changed

  • bumped dependency of ManifoldsBase.jl to 0.15.9 and imported their numerical verify functions. This changes the throw_error keyword used internally to an error= with a symbol.

[0.4.61] April 27, 2024

Added

  • Tests use Aqua.jl to spot problems in the code
  • introduce a feature-based list of solvers and reduce the details in the alphabetical list
  • adds a PolyakStepsize
  • added a get_subgradient for AbstractManifoldGradientObjectives since their gradient is a special case of a subgradient.

Fixed

  • get_last_stepsize was defined in quite different ways that caused ambiguities. That is now internally a bit restructured and should work nicer. Internally this means that the interim dispatch on get_last_stepsize(problem, state, step, vars...) was removed. Now the only two left are get_last_stepsize(p, s, vars...) and the one directly checking get_last_stepsize(::Stepsize) for stored values.
  • the accidentally exported set_manopt_parameter! is no longer exported

Changed

  • get_manopt_parameter and set_manopt_parameter! have been revised and better documented, they now use more semantic symbols (with capital letters) instead of direct field access (lower letter symbols). Since these are not exported, this is considered an internal, hence non-breaking change.
    • semantic symbols are now all nouns in upper case letters
    • :active is changed to :Activity

[0.4.60] April 10, 2024

Added

  • RecordWhenActive to allow records to be deactivated during runtime, symbol :WhenActive
  • RecordSubsolver to record the result of a subsolver recording in the main solver, symbol :Subsolver
  • RecordStoppingReason to record the reason a solver stopped
  • made the RecordFactory more flexible and quite similar to DebugFactory, such that it is now also easy to specify recordings at the end of solver runs. This can especially be used to record final states of sub solvers.

Changed

  • being a bit more strict with internal tools and made the factories for record non-exported, so this is the same as for debug.

Fixed

  • The name :Subsolver to generate DebugWhenActive was misleading, it is now called :WhenActive referring to “print debug only when set active, that is by the parent (main) solver”.
  • the old version of specifying Symbol => RecordAction for later access was ambiguous, since it could also mean to store the action in the dictionary under that symbol. Hence the order for access was switched to RecordAction => Symbol to resolve that ambiguity.

[0.4.59] April 7, 2024

Added

  • A Riemannian variant of the CMA-ES (Covariance Matrix Adaptation Evolutionary Strategy) algorithm, cma_es.

Fixed

  • The constructor dispatch for StopWhenAny with a Vector had an incorrect element type assertion, which was fixed.

[0.4.58] March 18, 2024

Added

  • more advanced methods to add debug to the beginning of an algorithm, a step, or the end of the algorithm with DebugAction entries at :Start, :BeforeIteration, :Iteration, and :Stop, respectively.
  • Introduce a Pair-based format to add elements to these hooks, while all others are now added to :Iteration (no longer to :All)
  • (planned) add an easy possibility to also record the initial stage and not only after the first iteration.

Changed

  • Changed the symbol for the :Step dictionary to be :Iteration, to unify this with the symbols used in recording, and removed the :All symbol. On the fine granular scale, all but :Start debugs are now reset on init. Since these are merely internal entries in the debug dictionary, this is considered non-breaking.
  • introduce a StopWhenSwarmVelocityLess stopping criterion for particle_swarm, replacing the current default of the swarm change, since this is a bit more efficient to compute

Fixed

  • fixed the outdated documentation of TruncatedConjugateGradientState, which now correctly states that p is no longer stored, but the algorithm runs on TpM.
  • implemented the missing get_iterate for TruncatedConjugateGradientState.

[0.4.57] March 15, 2024

Changed

  • convex_bundle_method uses the sectional_curvature from ManifoldsBase.jl.
  • convex_bundle_method no longer has the unused k_min keyword argument.
  • ManifoldsBase.jl now is running on Documenter 1.3, Manopt.jl documentation now uses DocumenterInterLinks to refer to sections and functions from ManifoldsBase.jl

Fixed

  • fixes a typo that, when passing sub_kwargs to trust_regions, caused an error in the decoration of the sub objective.

[0.4.56] March 4, 2024

Added

  • The option :step_towards_negative_gradient for nondescent_direction_behavior in quasi-Newton solvers no longer emits a warning by default. This has been moved to a message that can be accessed/displayed with DebugMessages
  • DebugMessages now has a second positional argument, specifying whether all messages, or just the first (:Once) should be displayed.

[0.4.55] March 3, 2024

Added

  • Option nondescent_direction_behavior for quasi-Newton solvers. By default it checks for non-descent direction which may not be handled well by some stepsize selection algorithms.

Fixed

  • unified documentation, especially function signatures further.
  • fixed a few typos related to math formulae in the doc strings.

[0.4.54] February 28, 2024

Added

  • convex_bundle_method optimization algorithm for non-smooth geodesically convex functions
  • proximal_bundle_method optimization algorithm for non-smooth functions.
  • StopWhenSubgradientNormLess and StopWhenLagrangeMultiplierLess stopping criteria.

Fixed

  • Doc strings now follow a vale.sh policy. Though this is not fully working, this PR improves a lot of the doc strings concerning wording and spelling.

[0.4.53] February 13, 2024

Fixed

  • fixes two storage action defaults that accidentally still tried to initialize a :Population (as modified back to :Iterate in 0.4.49).
  • fix a few typos in the documentation and add a reference for the subgradient method.

[0.4.52] February 5, 2024

Added

  • introduce an environment persistent way of setting global values with the set_manopt_parameter! function using Preferences.jl.
  • introduce such a value named :Mode to enable a "Tutorial" mode that shall often provide more warnings and information for people getting started with optimisation on manifolds
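A minimal sketch of this mechanism, assuming Manopt.jl (≥ 0.4.52) is installed; the parameter names follow the entries above, the exact values are per the Manopt.jl documentation:

```julia
using Manopt

# Persistently set the global :Mode parameter (stored via Preferences.jl),
# switching Manopt.jl into "Tutorial" mode with more warnings and hints:
Manopt.set_manopt_parameter!(:Mode, "Tutorial")

# query the currently set mode
Manopt.get_manopt_parameter(:Mode)

# unset the mode again to return to the default (silent) behaviour
Manopt.set_manopt_parameter!(:Mode, "")
```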

[0.4.51] January 30, 2024

Added

  • A StopWhenSubgradientNormLess stopping criterion for subgradient-based optimization.
  • Allow the message= of the DebugIfEntry debug action to contain a format element to print the field in the message as well.

[0.4.50] January 26, 2024

Fixed

  • Fix Quasi Newton on complex manifolds.

[0.4.49] January 18, 2024

Added

  • A StopWhenEntryChangeLess to be able to stop on arbitrary small changes of specific fields
  • generalises StopWhenGradientNormLess to accept arbitrary norm= functions
  • refactor the default in particle_swarm to no longer “misuse” the iteration change, but actually use the new :swarm entry

[0.4.48] January 16, 2024

Fixed

  • fixes an imprecision in the interface of get_iterate that sometimes led to the swarm of particle_swarm being returned as the iterate.
  • refactor particle_swarm in naming and access functions to avoid this also in the future. To access the whole swarm, one now should use get_manopt_parameter(pss, :Population)

[0.4.47] January 6, 2024

Fixed

  • fixed a bug, where the retraction set in check_Hessian was not passed on to the optional inner check_gradient call, which could lead to unwanted side effects, see #342.

[0.4.46] January 1, 2024

Changed

  • An error is thrown when a line search from LineSearches.jl reports search failure.
  • Changed default stopping criterion in ALM algorithm to mitigate an issue occurring when step size is very small.
  • Default memory length in default ALM subsolver is now capped at manifold dimension.
  • Replaced CI testing on Julia 1.8 with testing on Julia 1.10.

Fixed

  • A bug in LineSearches.jl extension leading to slower convergence.
  • Fixed a bug in L-BFGS related to memory storage, which caused significantly slower convergence.

[0.4.45] December 28, 2023

Added

  • Introduce sub_kwargs and sub_stopping_criterion for trust_regions as noticed in #336

Changed

  • WolfePowellLineSearch, ArmijoLineSearch step sizes now allocate less
  • linesearch_backtrack! is now available
  • Quasi Newton Updates can work in-place of a direction vector as well.
  • Faster safe_indices in L-BFGS.

[0.4.44] December 12, 2023

Formally one could consider this version breaking, since a few functions have been moved, that in earlier versions (0.3.x) have been used in example scripts. These examples are now available again within ManoptExamples.jl, and with their “reappearance” the corresponding costs, gradients, differentials, adjoint differentials, and proximal maps have been moved there as well. This is not considered breaking, since the functions were only used in the old, removed examples. Each and every moved function is still documented. They have been partly renamed, and their documentation and testing has been extended.

Changed

[0.4.43] November 19, 2023

Added

  • vale.sh as a CI to keep track of a consistent documentation

[0.4.42] November 6, 2023

Added

  • add Manopt.JuMP_Optimizer implementing JuMP's solver interface

[0.4.41] November 2, 2023

Changed

  • trust_regions is now more flexible and the sub solver (Steihaug-Toint tCG by default) can now be exchanged.
  • adaptive_regularization_with_cubics is now more flexible as well, where it previously was a bit too tightly coupled to the Lanczos solver.
  • Unified documentation notation and bumped dependencies to use DocumenterCitations 1.3

[0.4.40] October 24, 2023

Added

  • add a --help argument to docs/make.jl to document all available command line arguments
  • add a --exclude-tutorials argument to docs/make.jl. This way, when quarto is not available on a computer, the docs can still be built with the tutorials not being added to the menu, such that Documenter does not expect them to exist.

Changes

  • Bump dependencies to ManifoldsBase.jl 0.15 and Manifolds.jl 0.9
  • move the ARC CG subsolver to the main package, since TangentSpace is now already available from ManifoldsBase.

[0.4.39] October 9, 2023

Changes

  • also use the pair of a retraction and the inverse retraction (see last update) to perform the relaxation within the Douglas-Rachford algorithm.

[0.4.38] October 8, 2023

Changes

  • avoid allocations when calling get_jacobian! within the Levenberg-Marquardt algorithm.

Fixed

  • Fix a lot of typos in the documentation

[0.4.37] September 28, 2023

Changes

  • add more of the Riemannian Levenberg-Marquardt algorithm's parameters as keywords, so they can be changed on call
  • generalize the internal reflection of Douglas-Rachford, such that it also works with an arbitrary pair of a reflection and an inverse reflection.

[0.4.36] September 20, 2023

Fixed

  • Fixed a bug that caused non-matrix points and vectors to fail when working with approximate

[0.4.35] September 14, 2023

Added

  • The access to functions of the objective is now unified and encapsulated in proper get_ functions.

[0.4.34] September 02, 2023

Added

  • a ManifoldEuclideanGradientObjective to allow the cost, gradient, and Hessian and other first- or second-derivative based elements to be Euclidean and converted when needed.
  • a keyword objective_type=:Euclidean for all solvers, that specifies that an Objective shall be created of the new type

[0.4.33] August 24, 2023

Added

  • ConstantStepsize and DecreasingStepsize now have an additional field type::Symbol to specify whether the step size should be relative (to the gradient norm) or absolute.

[0.4.32] August 23, 2023

Added

  • The adaptive regularization with cubics (ARC) solver.

[0.4.31] August 14, 2023

Added

  • A :Subsolver keyword in the debug= keyword argument that activates the new DebugWhenActive to de/activate subsolver debug from the main solver's DebugEvery.

[0.4.30] August 3, 2023

Changed

  • References in the documentation are now rendered using DocumenterCitations.jl
  • Asymptote export now also accepts a size in pixels instead of its default 4cm size, and render can be deactivated by setting it to nothing.

[0.4.29] July 12, 2023

Fixed

  • fixed a bug, where cyclic_proximal_point did not work with decorated objectives.

[0.4.28] June 24, 2023

Changed

  • max_stepsize was specialized for FixedRankManifold to follow Matlab Manopt.

[0.4.27] June 15, 2023

Added

  • The AdaptiveWNGrad stepsize is available as a new stepsize functor.

Fixed

  • Levenberg-Marquardt now possesses its parameters initial_residual_values and initial_jacobian_f also as keyword arguments, such that their default initialisations can be adapted, if necessary

[0.4.26] June 11, 2023

Added

  • simplify usage of gradient descent as sub solver in the DoC solvers.
  • add a get_state function
  • document indicates_convergence.

[0.4.25] June 5, 2023

Fixed

  • Fixes an allocation bug in the difference of convex algorithm

[0.4.24] June 4, 2023

Added

  • another workflow that deletes old PR renderings from the docs to keep them smaller in overall size.

Changes

  • bump dependencies since the extension between Manifolds.jl and ManifoldsDiff.jl has been moved to Manifolds.jl

[0.4.23] June 4, 2023

Added

  • More details on the Count and Cache tutorial

Changed

  • loosen constraints slightly

[0.4.22] May 31, 2023

Added

  • A tutorial on how to implement a solver

[0.4.21] May 22, 2023

Added

  • A ManifoldCacheObjective as a decorator for objectives to cache results of calls, using LRU Caches as a weak dependency. For now this works with cost and gradient evaluations
  • A ManifoldCountObjective as a decorator for objectives to enable counting of calls to for example the cost and the gradient
  • adds a return_objective keyword, that switches the return of a solver to a tuple (o, s), where o is the (possibly decorated) objective, and s is the “classical” solver return (state or point). This way the counted values can be accessed and the cache can be reused.
  • change solvers on the mid level (of the form solver(M, objective, p)) to also accept decorated objectives
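The counting decorator and the return_objective keyword can be combined as in the following sketch; this assumes Manopt.jl (≥ 0.4.21) with Manifolds.jl, and the cost/gradient here are illustrative, not from the package:

```julia
using Manopt, Manifolds

M = Sphere(2)
d = [0.0, 0.0, 1.0]
f(M, p) = distance(M, p, d)^2 / 2   # illustrative cost: squared distance to d
grad_f(M, p) = -log(M, p, d)        # its Riemannian gradient

# decorate the objective to count cost and gradient calls and
# return the decorated objective together with the usual solver result
obj, q = gradient_descent(
    M, f, grad_f, [1.0, 0.0, 0.0];
    count = [:Cost, :Gradient],
    return_objective = true,
)
get_count(obj, :Cost)  # number of cost evaluations during the run
```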

Changed

  • Switch all Requires weak dependencies to actual weak dependencies starting in Julia 1.9

[0.4.20] May 11, 2023

Changed

  • the default tolerances for the numerical check_ functions were loosened a bit, such that check_vector can also be changed in its tolerances.

[0.4.19] May 7, 2023

Added

  • the sub solver for trust_regions is now customizable and can now be exchanged.

Changed

  • slightly changed the definitions of the solver states for ALM and EPM to be type stable

[0.4.18] May 4, 2023

Added

  • A function check_Hessian(M, f, grad_f, Hess_f) to numerically verify the (Riemannian) Hessian of a function f

[0.4.17] April 28, 2023

Added

  • A new interface of the form alg(M, objective, p0) to allow to reuse objectives without creating AbstractManoptSolverStates and calling solve!. This especially still allows for any decoration of the objective and/or the state using debug=, or record=.

Changed

  • All solvers now have the initial point p as an optional parameter making it more accessible to first time users, gradient_descent(M, f, grad_f) is equivalent to gradient_descent(M, f, grad_f, rand(M))
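This equivalence can be sketched as follows, assuming Manopt.jl (≥ 0.4.17) and Manifolds.jl; the objective is illustrative:

```julia
using Manopt, Manifolds, Random

Random.seed!(42)
M = Sphere(2)
d = [0.0, 0.0, 1.0]
f(M, p) = distance(M, p, d)^2 / 2   # illustrative cost
grad_f(M, p) = -log(M, p, d)        # its Riemannian gradient

# omitting the start point draws a random one, so these two calls
# are equivalent up to the randomly chosen initial point:
q1 = gradient_descent(M, f, grad_f)
q2 = gradient_descent(M, f, grad_f, rand(M))
```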

Fixed

  • Unified the framework to work on manifold where points are represented by numbers for several solvers

[0.4.16] April 18, 2023

Fixed

  • the inner products used in truncated_gradient_descent now also work thoroughly on complex matrix manifolds

[0.4.15] April 13, 2023

Changed

  • trust_regions(M, f, grad_f, hess_f, p) now has the Hessian hess_f as well as the start point p0 as optional parameters and approximates the Hessian otherwise.
  • trust_regions!(M, f, grad_f, hess_f, p) has the Hessian as an optional parameter and approximates it otherwise.

Removed

  • support for ManifoldsBase.jl 0.13.x, since with the definition of copy(M, p::Number) in 0.14.4, that one is used instead of defining it ourselves.

[0.4.14] April 06, 2023

Changed

  • particle_swarm now uses much more in-place operations

Fixed

  • particle_swarm used quite a few deepcopy(p) commands still, which were replaced by copy(M, p)

[0.4.13] April 09, 2023

Added

  • get_message to obtain messages from sub steps of a solver
  • DebugMessages to display the new messages in debug
  • safeguards in Armijo line search and L-BFGS against numerical over- and underflow that report in messages

[0.4.12] April 4, 2023

Added

[0.4.11] March 27, 2023

Changed

  • adapt tolerances in tests to the speed/accuracy optimized distance on the sphere in Manifolds.jl (part II)

[0.4.10] March 26, 2023

Changed

  • adapt tolerances in tests to the speed/accuracy optimized distance on the sphere in Manifolds.jl

[0.4.9] March 3, 2023

Added

[0.4.8] February 21, 2023

Added

  • a status_summary that displays the main parameters within several structures of Manopt, most prominently a solver state

Changed

  • Improved storage performance by introducing separate named tuples for points and vectors
  • changed the show methods of AbstractManoptSolverStates to display their `state_summary
  • Move tutorials to be rendered with Quarto into the documentation.

[0.4.7] February 14, 2023

Changed

  • Bump [compat] entry of ManifoldDiff to also include 0.3

[0.4.6] February 3, 2023

Fixed

  • Fixed a few stopping criteria that even indicated to stop before the algorithm started.

[0.4.5] January 24, 2023

Changed

  • the new default functions that include p are used where possible
  • a first step towards faster storage handling

[0.4.4] January 20, 2023

Added

  • Introduce ConjugateGradientBealeRestart to allow CG restarts using Beale's rule

Fixed

  • fix a typo in HestenesStiefelCoefficient

[0.4.3] January 17, 2023

Fixed

  • the CG coefficient β can now be complex
  • fix a bug in grad_distance

[0.4.2] January 16, 2023

Changed

  • the usage of inner in line search methods, such that they work well with complex manifolds as well

[0.4.1] January 15, 2023

Fixed

  • a max_stepsize per manifold to avoid leaving the injectivity radius, which it also defaults to

[0.4.0] January 10, 2023

Added

  • Dependency on ManifoldDiff.jl and a start of moving actual derivatives, differentials, and gradients there.
  • AbstractManifoldObjective to store the objective within the AbstractManoptProblem
  • Introduce a CostGrad structure to store a function that computes the cost and gradient within one function.
  • started a changelog.md to thoroughly keep track of changes

Changed

  • AbstractManoptProblem replaces Problem
  • the problem now contains a
  • AbstractManoptSolverState replaces Options
  • random_point(M) is replaced by rand(M) from ManifoldsBase.jl
  • random_tangent(M, p) is replaced by rand(M; vector_at=p)

Contributing to Manopt.jl

First, thanks for taking the time to contribute. Any contribution is appreciated and welcome.

The following is a set of guidelines for contributing to Manopt.jl.

Table of contents

I just have a question

The developer can most easily be reached in the Julia Slack channel #manifolds. You can apply for the Julia Slack workspace here if you haven't joined yet. You can also ask your question on discourse.julialang.org.

How can I file an issue?

If you found a bug or want to propose a feature, please open an issue within the GitHub repository.

How can I contribute?

Add a missing method

There are still a lot of methods missing within the optimization framework of Manopt.jl, be it functions, gradients, differentials, proximal maps, step size rules, or stopping criteria. If you notice a missing method and can contribute an implementation, please do so, and the maintainers will try to help with the necessary details. Even providing a single new method is a good contribution.

Provide a new algorithm

A main contribution you can provide is another algorithm that is not yet included in the package. An algorithm is always based on a concrete type of an AbstractManoptProblem storing the main information of the task and a concrete type of an AbstractManoptSolverState storing all information that needs to be known to the solver in general. The actual algorithm is split into an initialization phase, see initialize_solver!, and the implementation of the ith step of the solver itself, see step_solver!. For these two functions, it would be great if a new algorithm uses functions from the ManifoldsBase.jl interface as generically as possible. For example, if possible use retract!(M,q,p,X) in favor of exp!(M,q,p,X) to perform a step starting in p in direction X (in place of q), since the exponential map might be too expensive to evaluate or might not be available on a certain manifold. See Retractions and inverse retractions for more details. Further, if possible, prefer the in-place retract!(M,q,p,X) over retract(M,p,X), since a computation in place of a suitable variable q reduces memory allocations.

Usually, the methods implemented in Manopt.jl also have a high-level interface, that is easier to call, creates the necessary problem and options structure and calls the solver.

The two technical functions initialize_solver! and step_solver! should be documented with technical details, while the high level interface should usually provide a general description and some literature references to the algorithm at hand.
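The split into state, initialization, and step can be sketched as follows. Everything named Example… here is hypothetical and only for illustration; only AbstractManoptSolverState, initialize_solver!, step_solver!, get_gradient!, get_manifold, and retract! are actual interface functions:

```julia
using Manopt, Manifolds, ManifoldsBase

# A hypothetical state for a plain gradient-descent-like solver.
mutable struct ExampleSolverState{P,T} <: AbstractManoptSolverState
    p::P                                        # current iterate
    X::T                                        # tangent vector working memory
    stop::StoppingCriterion                     # when to stop
    retraction_method::AbstractRetractionMethod # how to take a step
end
Manopt.get_iterate(s::ExampleSolverState) = s.p

function Manopt.initialize_solver!(amp::AbstractManoptProblem, s::ExampleSolverState)
    # fill the gradient working memory at the start point
    get_gradient!(amp, s.X, s.p)
    return s
end

function Manopt.step_solver!(amp::AbstractManoptProblem, s::ExampleSolverState, i)
    M = get_manifold(amp)
    get_gradient!(amp, s.X, s.p)
    # prefer retract! over exp! to stay generic; fixed step size 0.1 for brevity
    retract!(M, s.p, s.p, -0.1 * s.X, s.retraction_method)
    return s
end
```

A real solver would of course use a Stepsize object instead of the hard-coded 0.1 and evaluate the stopping criterion.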

Provide a new example

Example problems are available at ManoptExamples.jl, where also their reproducible Quarto-Markdown files are stored.

Code style

Try to follow the documentation guidelines from the Julia documentation as well as Blue Style. Run JuliaFormatter.jl on the repository in the way set in the .JuliaFormatter.toml file, which enforces a number of conventions consistent with Blue Style. Furthermore, vale is run on both Markdown and code files, affecting documentation and source code comments.

Please follow a few internal conventions:

  • It is preferred that the AbstractManoptProblem's struct contains information about the general structure of the problem.
  • Any implemented function should be accompanied by its mathematical formulae if a closed form exists.
  • AbstractManoptProblem and helping functions are stored within the plan/ folder and sorted by properties of the problem and/or solver at hand.
  • the solver state is usually stored with the solver itself
  • Within the source code of one algorithm, following the state, the high level interface should be next, then the initialization, then the step.
  • Otherwise an alphabetical order of functions is preferable.
  • The preceding implies that the mutating variant of a function follows the non-mutating variant.
  • There should be no dangling = signs.
  • Always add a newline between things of different types (struct/method/const).
  • Always add a newline between methods for different functions (including mutating/nonmutating variants).
  • Prefer to have no newline between methods for the same function; when reasonable, merge the documentation strings.
  • All import/using/include should be in the main module file.

Concerning documentation

  • if possible provide both mathematical formulae and literature references using DocumenterCitations.jl and BibTeX where possible
  • Always document all input variables and keyword arguments

If you implement an algorithm with a certain numerical example in mind, it would be great, if this could be added to the ManoptExamples.jl package as well.

linesearch;
    retraction_method=ExponentialRetraction(),
    vector_transport_method=ParallelTransport(),
)

Wrap linesearch (for example HagerZhang or MoreThuente). The initial step selection from LineSearches.jl is not yet supported and the value 1.0 is used.

Keyword Arguments

source

Manifolds.jl

Loading Manifolds.jl introduces the following additional functions

Manopt.max_stepsizeMethod
max_stepsize(M::FixedRankMatrices, p)

Return a reasonable guess of maximum step size on FixedRankMatrices following the choice of typical distance in Matlab Manopt, the dimension of M. See this note

source
Manopt.max_stepsizeMethod
max_stepsize(M::Hyperrectangle, p)

The default maximum stepsize for the Hyperrectangle manifold with corners is the maximum of the distances from p to each boundary.

source
Manopt.max_stepsizeMethod
max_stepsize(M::TangentBundle, p)

Tangent bundle has injectivity radius of either infinity (for flat manifolds) or 0 (for non-flat manifolds). This makes a guess of what a reasonable maximum stepsize on a tangent bundle might be.

source
ManifoldsBase.mid_pointFunction
mid_point(M, p, q, x)
-mid_point!(M, y, p, q, x)

Compute the mid point between p and q. If there is more than one mid point of (not necessarily minimizing) geodesics (for example on the sphere), the one nearest to x is returned (in place of y).

source

Internally, Manopt.jl provides the two additional functions to choose some Euclidean space when needed as

Manopt.RnFunction
Rn(args; kwargs...)
-Rn(s::Symbol=:Manifolds, args; kwargs...)

A small internal helper function to choose a Euclidean space. By default, this uses the DefaultManifold unless you load a more advanced Euclidean space like Euclidean from Manifolds.jl

source
Manopt.Rn_defaultFunction
Rn_default()

Specify a default value to dispatch Rn on. This default is set to Manifolds, indicating that, when this package is loaded, it is the preferred package to ask for a vector space.

The default within Manopt.jl is to use the DefaultManifold from ManifoldsBase.jl. If you load Manifolds.jl this switches to using Euclidean.

source

JuMP.jl

Manopt can be used via the JuMP.jl interface. The manifold is provided in the @variable macro. Note that so far only variables (points on manifolds) that are arrays are supported; structs do not yet work. The algebraic expression of the objective function is specified in the @objective macro. The descent_state_type attribute specifies the solver.

using JuMP, Manopt, Manifolds
 model = Model(Manopt.Optimizer)
 # Change the solver with this option, `GradientDescentState` is the default
 set_attribute("descent_state_type", GradientDescentState)
 @variable(model, U[1:2, 1:2] in Stiefel(2, 2), start = 1.0)
 @objective(model, Min, sum((A - U) .^ 2))
 optimize!(model)
solution_summary(model)

Interface functions

Manopt.JuMP_ArrayShapeType
struct ArrayShape{N} <: JuMP.AbstractShape

Shape of an Array{T,N} of size size.

source
Manopt.JuMP_VectorizedManifoldType
struct VectorizedManifold{M} <: MOI.AbstractVectorSet
     manifold::M
end

Representation of points of a manifold as a vector of R^n where n is MOI.dimension(VectorizedManifold(manifold)).

source
MathOptInterface.dimensionMethod
MOI.dimension(set::VectorizedManifold)

Return the representation size of points on the (vectorized in representation) manifold. As the MOI variables are real, this means if the representation_size yields (in product) n, this refers to the vectorized point / tangent vector from (a subset of $ℝ^n$).

source
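For instance, for 2×2 matrices on the Stiefel manifold the representation size is (2, 2), so the vectorized dimension is the product 2 · 2 = 4. A sketch; the direct `Manopt.JuMP_VectorizedManifold` constructor call is an assumption based on the struct above:

```julia
using Manopt, Manifolds
import MathOptInterface as MOI

# Stiefel(2, 2) stores points as 2×2 matrices, so representation_size is (2, 2)
set = Manopt.JuMP_VectorizedManifold(Stiefel(2, 2))
MOI.dimension(set)  # the product of the representation size, here 2 * 2 = 4
```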
Manopt.JuMP_OptimizerType
Manopt.JuMP_Optimizer()

Creates a new optimizer object for the MathOptInterface (MOI). An alias Manopt.JuMP_Optimizer is defined for convenience.

The minimization of a function f(X) of an array X[1:n1,1:n2,...] over a manifold M starting at X0, can be modeled as follows:

using JuMP
model = Model(Manopt.JuMP_Optimizer)
@variable(model, X[i1=1:n1,i2=1:n2,...] in M, start = X0[i1,i2,...])
@objective(model, Min, f(X))

The optimizer assumes that M has an Array shape described by ManifoldsBase.representation_size.

source
MathOptInterface.supportsMethod
MOI.supports(::Optimizer, attr::MOI.RawOptimizerAttribute)

Return a Bool indicating whether attr.name is a valid option name for Manopt.

source
MathOptInterface.getMethod
MOI.get(model::Optimizer, attr::MOI.RawOptimizerAttribute)

Return last value set by MOI.set(model, attr, value).

source
MathOptInterface.setMethod
MOI.set(model::Optimizer, attr::MOI.RawOptimizerAttribute, value)

Set the value of the option attr.name in model.options; it is passed as a keyword argument to the descent state constructor.

source
MathOptInterface.copy_toMethod
MOI.copy_to(dest::Optimizer, src::MOI.ModelLike)

Because supports_incremental_interface(dest) is true, this simply uses MOI.Utilities.default_copy_to and copies the variables with MOI.add_constrained_variables and the objective sense with MOI.set.

source
MathOptInterface.supportsMethod
MOI.supports(::Manopt.JuMP_Optimizer, attr::MOI.RawOptimizerAttribute)

Return true indicating that Manopt.JuMP_Optimizer supports starting values for the variables.

source
MathOptInterface.setMethod
function MOI.set(
     model::Optimizer,
     ::MOI.VariablePrimalStart,
     vi::MOI.VariableIndex,
     value::Union{Real,Nothing},
)

Set the starting value of the variable of index vi to value. Note that if value is nothing, it unsets any previously set starting value, and hence MOI.optimize! errors unless another starting value is set.

source
MathOptInterface.setMethod
MOI.set(model::Optimizer, ::MOI.ObjectiveSense, sense::MOI.OptimizationSense)

Modify the objective sense to either MOI.MAX_SENSE, MOI.MIN_SENSE or MOI.FEASIBILITY_SENSE.

source
MathOptInterface.setMethod
MOI.set(model::Optimizer, ::MOI.ObjectiveFunction{F}, func::F) where {F}

Set the objective function as func for model.

source
MathOptInterface.supportsMethod
MOI.supports(::Optimizer, ::Union{MOI.ObjectiveSense,MOI.ObjectiveFunction})

Return true, indicating that Optimizer supports setting the objective sense (that is, min, max, or feasibility) and the objective function.

source
JuMP.build_variableMethod
JuMP.build_variable(::Function, func, m::ManifoldsBase.AbstractManifold)

Build a JuMP.VariablesConstrainedOnCreation object containing variables and the Manopt.JuMP_VectorizedManifold in which they should belong as well as the shape that can be used to go from the vectorized MOI representation to the shape of the manifold, that is, Manopt.JuMP_ArrayShape.

source
MathOptInterface.getMethod
MOI.get(model::Optimizer, ::MOI.ResultCount)

Return 0 if optimize! hasn't been called yet and 1 otherwise indicating that one solution is available.

source
MathOptInterface.getMethod
MOI.get(::Optimizer, ::MOI.SolverName)

Return the name of the Optimizer with the value of the descent_state_type option.

source
MathOptInterface.getMethod
MOI.get(model::Optimizer, attr::MOI.ObjectiveValue)

Return the value of the objective function evaluated at the solution.

source
MathOptInterface.getMethod
MOI.get(model::Optimizer, ::MOI.PrimalStatus)

Return MOI.NO_SOLUTION if optimize! hasn't been called yet and MOI.FEASIBLE_POINT otherwise, indicating that a solution is available to query with MOI.VariablePrimal.

source
MathOptInterface.getMethod
MOI.get(::Optimizer, ::MOI.DualStatus)

Returns MOI.NO_SOLUTION indicating that there is no dual solution available.

source
MathOptInterface.getMethod
MOI.get(model::Optimizer, ::MOI.TerminationStatus)

Return MOI.OPTIMIZE_NOT_CALLED if optimize! hasn't been called yet and MOI.LOCALLY_SOLVED otherwise, indicating that the solver has solved the problem to local optimality; see the value of MOI.RawStatusString for more details on why the solver stopped.

source
MathOptInterface.getMethod
MOI.get(::Optimizer, ::MOI.SolverVersion)

Return the version of the Manopt solver; it corresponds to the version of Manopt.jl.

source
MathOptInterface.getMethod
MOI.get(model::Optimizer, ::MOI.ObjectiveSense)

Return the objective sense, defaults to MOI.FEASIBILITY_SENSE if no sense has already been set.

source
MathOptInterface.getMethod
MOI.get(model::Optimizer, attr::MOI.VariablePrimal, vi::MOI.VariableIndex)

Return the value of the solution for the variable of index vi.

source
MathOptInterface.getMethod
MOI.get(model::Optimizer, ::MOI.RawStatusString)

Return a String containing Manopt.get_reason without the ending newline character.

source
Checks · Manopt.jl

Verifying gradients and Hessians

If you have computed a gradient or differential and you are not sure whether it is correct, the following functions can help you verify it numerically.

Manopt.check_HessianFunction
check_Hessian(M, f, grad_f, Hess_f, p=rand(M), X=rand(M; vector_at=p), Y=rand(M, vector_at=p); kwargs...)

Verify numerically whether the Hessian Hess_f(M,p, X) of f(M,p) is correct.

For this either a second-order retraction or a critical point $p$ of f is required. The approximation is then

\[f(\operatorname{retr}_p(tX)) = f(p) + t⟨\operatorname{grad} f(p), X⟩ + \frac{t^2}{2}⟨\operatorname{Hess}f(p)[X], X⟩ + \mathcal O(t^3)\]

or in other words, that the error between the function $f$ and its second order Taylor behaves in error $\mathcal O(t^3)$, which indicates that the Hessian is correct, cf. also [Bou23, Section 6.8].

Note that if the errors are below the given tolerance and the method is exact, no plot is generated.

Keyword arguments

  • check_grad=true: verify that $\operatorname{grad}f(p) ∈ T_{p}\mathcal M$.
  • check_linearity=true: verify that the Hessian is linear, see is_Hessian_linear using a, b, X, and Y
  • check_symmetry=true: verify that the Hessian is symmetric, see is_Hessian_symmetric
  • check_vector=false: verify that $\operatorname{Hess} f(p)[X] ∈ T_{p}\mathcal M$ using is_vector.
  • mode=:Default: specify the mode for the verification; the default assumption is that the retraction provided is of second order. Otherwise one can also verify the Hessian if the point p is a critical point. Then set the mode to :CriticalPoint to use gradient_descent to find a critical point. Note: this requires (and evaluates) new tangent vectors X and Y
  • atol, rtol: (same defaults as isapprox) tolerances that are passed down to all checks
  • a, b two real values to verify linearity of the Hessian (if check_linearity=true)
  • N=101: number of points to verify within the log_range default range $[10^{-8},10^{0}]$
  • exactness_tol=1e-12: if all errors are below this tolerance, the verification is considered to be exact
  • io=nothing: provide an IO to print the result to
  • gradient=grad_f(M, p): instead of the gradient function you can also provide the gradient at p directly
  • Hessian=Hess_f(M, p, X): instead of the Hessian function you can provide the result of $\operatorname{Hess} f(p)[X]$ directly. Note that evaluations of the Hessian might still be necessary for checking linearity and symmetry and/or when using :CriticalPoint mode.
  • limits=(1e-8,1): specify the limits in the log_range
  • log_range=range(limits[1], limits[2]; length=N): specify the range of points (in log scale) to sample the Hessian line
  • plot=false: whether to plot the resulting verification (requires Plots.jl to be loaded). The plot is in log-log-scale. This is returned and can then also be saved.
  • retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions
  • slope_tol=0.1: tolerance for the slope (global) of the approximation
  • error=:none: how to handle errors, possible values: :error, :info, :warn
  • window=nothing: specify window sizes within the log_range that are used for the slope estimation. The default is to use all window sizes 2:N.

The kwargs... are also passed down to the check_vector and the check_gradient call, such that tolerances can easily be set.

While check_vector is also passed to the inner call to check_gradient as well as the retraction_method, this inner check_gradient is meant to be just for inner verification, so it does not throw an error nor produce a plot itself.

source
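A usage sketch, assuming Manifolds.jl is loaded: for the Rayleigh quotient f(p) = pᵀAp on the sphere, the Riemannian gradient and Hessian have well-known closed forms, so check_Hessian should report a correct slope:

```julia
using Manopt, Manifolds, LinearAlgebra

M = Sphere(2)
A = Symmetric([2.0 1.0 0.0; 1.0 3.0 1.0; 0.0 1.0 4.0])
f(M, p) = p' * A * p
# Riemannian gradient: project the Euclidean gradient 2Ap onto T_p𝕊²
grad_f(M, p) = 2 .* (A * p - (p' * A * p) .* p)
# Riemannian Hessian: projected Euclidean Hessian minus the curvature correction
function Hess_f(M, p, X)
    AX = A * X
    return 2 .* (AX - (p' * AX) .* p) - 2 .* (p' * A * p) .* X
end
check_Hessian(M, f, grad_f, Hess_f; error=:error)
```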
Manopt.check_differentialFunction
check_differential(M, F, dF, p=rand(M), X=rand(M; vector_at=p); kwargs...)

Check numerically whether the differential dF(M,p,X) of F(M,p) is correct.

This implements the method described in [Bou23, Section 4.8].

Note that if the errors are below the given tolerance and the method is exact, no plot is generated.

Keyword arguments

  • exactness_tol=1e-12: if all errors are below this tolerance, the differential is considered to be exact
  • io=nothing: provide an IO to print the result to
  • limits=(1e-8,1): specify the limits in the log_range
  • log_range=range(limits[1], limits[2]; length=N): specify the range of points (in log scale) to sample the differential line
  • N=101: number of points to verify within the log_range default range $[10^{-8},10^{0}]$
  • name="differential": name to display in the plot
  • plot=false: whether to plot the result (if Plots.jl is loaded). The plot is in log-log-scale. This is returned and can then also be saved.
  • retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions
  • slope_tol=0.1: tolerance for the slope (global) of the approximation
  • throw_error=false: throw an error message if the differential is wrong
  • window=nothing: specify window sizes within the log_range that are used for the slope estimation. The default is to use all window sizes 2:N.
source
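As an example, the linear cost F(p) = ⟨p, q⟩ on the sphere has differential dF(p)[X] = ⟨X, q⟩ for a tangent vector X; a sketch assuming Manifolds.jl is loaded:

```julia
using Manopt, Manifolds

M = Sphere(2)
q = [0.0, 0.0, 1.0]
F(M, p) = sum(p .* q)      # F(p) = ⟨p, q⟩
dF(M, p, X) = sum(X .* q)  # dF(p)[X] = ⟨X, q⟩
check_differential(M, F, dF; throw_error=true)
```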
Manopt.check_gradientFunction
check_gradient(M, f, grad_f, p=rand(M), X=rand(M; vector_at=p); kwargs...)

Verify numerically whether the gradient grad_f(M,p) of f(M,p) is correct, that is whether

\[f(\operatorname{retr}_p(tX)) = f(p) + t⟨\operatorname{grad} f(p), X⟩ + \mathcal O(t^2)\]

or in other words, that the error between the function $f$ and its first order Taylor behaves in error $\mathcal O(t^2)$, which indicates that the gradient is correct, cf. also [Bou23, Section 4.8].

Note that if the errors are below the given tolerance and the method is exact, no plot is generated.

Keyword arguments

  • check_vector=true: verify that $\operatorname{grad}f(p) ∈ T_{p}\mathcal M$ using is_vector.
  • exactness_tol=1e-12: if all errors are below this tolerance, the gradient is considered to be exact
  • io=nothing: provide an IO to print the result to
  • gradient=grad_f(M, p): instead of the gradient function you can also provide the gradient at p directly
  • limits=(1e-8,1): specify the limits in the log_range
  • log_range=range(limits[1], limits[2]; length=N): specify the range of points (in log scale) to sample the gradient line
  • N=101: number of points to verify within the log_range default range $[10^{-8},10^{0}]$
  • plot=false: whether to plot the result (if Plots.jl is loaded). The plot is in log-log-scale. This is returned and can then also be saved.
  • retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions
  • slope_tol=0.1: tolerance for the slope (global) of the approximation
  • atol, rtol: (same defaults as isapprox) tolerances that are passed down to is_vector if check_vector is set to true

  • error=:none: how to handle errors, possible values: :error, :info, :warn
  • window=nothing: specify window sizes within the log_range that are used for the slope estimation. The default is to use all window sizes 2:N.

The remaining keyword arguments are also passed down to the check_vector call, such that tolerances can easily be set.

source
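A usage sketch, assuming Manifolds.jl is loaded: for half the squared distance to a fixed point q, the gradient is -log_p q (away from the cut locus), so the check should pass:

```julia
using Manopt, Manifolds

M = Sphere(2)
q = [0.0, 0.0, 1.0]
f(M, p) = distance(M, p, q)^2 / 2
grad_f(M, p) = -log(M, p, q)  # gradient of half the squared distance
check_gradient(M, f, grad_f; error=:error)
```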
Manopt.is_Hessian_linearFunction
is_Hessian_linear(M, Hess_f, p,
     X=rand(M; vector_at=p), Y=rand(M; vector_at=p), a=randn(), b=randn();
     error=:none, io=nothing, kwargs...
 )

Verify whether the Hessian function Hess_f fulfills linearity,

\[\operatorname{Hess} f(p)[aX + bY] = a\operatorname{Hess} f(p)[X] + b\operatorname{Hess} f(p)[Y]\]

which is checked using isapprox and the keyword arguments are passed to this function.

Optional arguments

  • error=:none: how to handle errors, possible values: :error, :info, :warn
source
Manopt.is_Hessian_symmetricFunction
is_Hessian_symmetric(M, Hess_f, p=rand(M), X=rand(M; vector_at=p), Y=rand(M; vector_at=p);
     error=:none, io=nothing, atol::Real=0, rtol::Real=atol>0 ? 0 : √eps
 )

Verify whether the Hessian function Hess_f fulfills symmetry, which means that

\[⟨\operatorname{Hess} f(p)[X], Y⟩ = ⟨X, \operatorname{Hess} f(p)[Y]⟩\]

which is checked using isapprox and the kwargs... are passed to this function.

Optional arguments

  • atol, rtol with the same defaults as the usual isapprox
  • error=:none: how to handle errors, possible values: :error, :info, :warn
source
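Both properties can be verified directly, for instance for the Riemannian Hessian of the Rayleigh quotient f(p) = pᵀAp on the sphere; a sketch assuming Manifolds.jl is loaded:

```julia
using Manopt, Manifolds, LinearAlgebra

M = Sphere(2)
A = Symmetric([2.0 1.0 0.0; 1.0 3.0 1.0; 0.0 1.0 4.0])
# Riemannian Hessian of f(p) = pᵀAp on the sphere
Hess_f(M, p, X) = 2 .* (A * X - (p' * (A * X)) .* p) - 2 .* (p' * A * p) .* X
p = [1.0, 0.0, 0.0]
is_Hessian_linear(M, Hess_f, p; error=:warn)
is_Hessian_symmetric(M, Hess_f, p; error=:warn)
```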

Literature

Exports · Manopt.jl

Exports

Exports aim to provide a consistent generation of images of your results. For example, if you record the trace your algorithm walks on the sphere, you can easily export this trace to a rendered image using asymptote_export_S2_signals and render the result with Asymptote. Besides these, you can always record values during your iterations and export these, for example to csv.

Asymptote

The following functions provide exports both in graphics and/or raw data using Asymptote.

Manopt.asymptote_export_S2_dataMethod
asymptote_export_S2_data(filename)

Export given data as an array of points on the 2-sphere, which might be one-, two- or three-dimensional data with points on the Sphere $\mathbb S^2$.

Input

  • filename a file to store the Asymptote code in.

Optional arguments for the data

  • data a point representing the 1D,2D, or 3D array of points
  • elevation_color_scheme A ColorScheme for elevation
  • scale_axes=(1/3,1/3,1/3): move spheres closer to each other by a factor per direction

Optional arguments for asymptote

  • arrow_head_size=1.8: size of the arrowheads of the vectors (in mm)
  • camera_position position of the camera scene (default: atop the center of the data in the xy-plane)
  • target position the camera points at (default: center of xy-plane within data).
source
Manopt.asymptote_export_S2_signalsMethod
asymptote_export_S2_signals(filename; points, curves, tangent_vectors, colors, kwargs...)

Export given points, curves, and tangent_vectors on the sphere $\mathbb S^2$ to Asymptote.

Input

  • filename a file to store the Asymptote code in.

Keyword arguments for the data

  • colors=Dict{Symbol,Array{RGBA{Float64},1}}(): dictionary of color arrays, indexed by symbols :points, :curves and :tvector, where each entry has to provide at least as many colors as the length of the corresponding sets.
  • curves=Array{Array{Float64,1},1}(undef, 0): an Array of Arrays of points on the sphere, where each inner array is interpreted as a curve and is accompanied by an entry within colors.
  • points=Array{Array{Float64,1},1}(undef, 0): an Array of Arrays of points on the sphere where each inner array is interpreted as a set of points and is accompanied by an entry within colors.
  • tangent_vectors=Array{Array{Tuple{Float64,Float64},1},1}(undef, 0): an Array of Arrays of tuples, where the first is a points, the second a tangent vector and each set of vectors is accompanied by an entry from within colors.

Keyword arguments for asymptote

  • arrow_head_size=6.0: size of the arrowheads of the tangent vectors
  • arrow_head_sizes overrides the previous value to specify a value per tangent vector set.
  • camera_position=(1., 1., 0.): position of the camera in the Asymptote scene
  • line_width=1.0: size of the lines used to draw the curves.
  • line_widths overrides the previous value to specify a value per curve and tangent vector set.
  • dot_size=1.0: size of the dots used to draw the points.
  • dot_sizes overrides the previous value to specify a value per point set.
  • size=nothing: a tuple for the image size, otherwise a relative size 4cm is used.
  • sphere_color=RGBA{Float64}(0.85, 0.85, 0.85, 0.6): color of the sphere the data is drawn on
  • sphere_line_color=RGBA{Float64}(0.75, 0.75, 0.75, 0.6): color of the lines on the sphere
  • sphere_line_width=0.5: line width of the lines on the sphere
  • target=(0.,0.,0.): position the camera points at
source
Manopt.asymptote_export_SPDMethod
asymptote_export_SPD(filename)

Export given data as a point on a Power(SymmetricPositiveDefinite(3)) manifold, that is one-, two- or three-dimensional data with points on the manifold of symmetric positive definite matrices.

Input

  • filename a file to store the Asymptote code in.

Optional arguments for the data

  • data a point representing the 1D, 2D, or 3D array of SPD matrices
  • color_scheme a ColorScheme for Geometric Anisotropy Index
  • scale_axes=(1/3,1/3,1/3): move symmetric positive definite matrices closer to each other by a factor per direction compared to the distance estimated by the maximal eigenvalue of all involved SPD points

Optional arguments for asymptote

  • camera_position position of the camera scene (default: atop the center of the data in the xy-plane)
  • target position the camera points at (default: center of xy-plane within data).

Both values camera_position and target are scaled by scaledAxes*EW, where EW is the maximal eigenvalue in the data.

source
Manopt.render_asymptoteMethod
render_asymptote(filename; render=4, format="png", ...)

Render an exported Asymptote file specified by filename, which can also be given as a relative or full path.

Input

  • filename filename of the exported asy and rendered image

Keyword arguments

the default values are given in brackets

  • render=4: render level of asymptote passed to its -render option. This can be removed from the command by setting it to nothing.
  • format="png": final rendered format passed to the -f option
  • export_file: (the filename with format as ending) specify the export filename
source
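For example, one might export a set of points on the sphere and render the result; a minimal sketch, assuming Manopt.jl and ColorTypes.jl are installed and the asy binary of Asymptote is available on the system:

```julia
using Manopt, ColorTypes

# three points on the 2-sphere, interpreted as one point set
pts = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]

filename = joinpath(tempdir(), "trace.asy")
asymptote_export_S2_signals(filename;
    points=[pts],
    colors=Dict(:points => [RGBA(0.0, 0.0, 0.8, 1.0)]),
)

# requires Asymptote to be installed; renders trace.png next to the .asy file
render_asymptote(filename; render=4, format="png")
```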
Home · Manopt.jl

Welcome to Manopt.jl

For a function $f:\mathcal M → ℝ$ defined on a Riemannian manifold $\mathcal M$ algorithms in this package aim to solve

\[\operatorname*{argmin}_{p ∈ \mathcal M} f(p),\]

or in other words: find the point $p$ on the manifold, where $f$ reaches its minimal function value.

Manopt.jl provides a framework for optimization on manifolds as well as a library of optimization algorithms in Julia. It belongs to the “Manopt family”, which includes Manopt (Matlab) and pymanopt.org (Python).

If you want to delve right into Manopt.jl, read the 🏔️ Get started: optimize. tutorial.

Manopt.jl makes it easy to use an algorithm for your favourite manifold as well as a manifold for your favourite algorithm. It already provides many manifolds and algorithms, which can easily be enhanced, for example to record certain data or debug output throughout iterations.
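As a small illustration of this interplay, here is a sketch (assuming Manopt.jl and Manifolds.jl are installed; exact signatures may differ between versions) that minimizes a distance-based cost on the sphere:

```julia
using Manopt, Manifolds

M = Sphere(2)
q = [0.0, 0.0, 1.0]                 # the point we want to reach

f(M, p) = distance(M, p, q)^2 / 2   # cost function
grad_f(M, p) = -log(M, p, q)        # its Riemannian gradient

p0 = [1.0, 0.0, 0.0]
p_opt = gradient_descent(M, f, grad_f, p0)
# p_opt approximates q up to the solver's tolerance
```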

If you use Manopt.jl in your work, please cite the following:

@article{Bergmann2022,
     Author    = {Ronny Bergmann},
     Doi       = {10.21105/joss.03866},
     Journal   = {Journal of Open Source Software},
     TITLE     = {Manifolds.Jl: An Extensible Julia Framework for Data Analysis on Manifolds},
     VOLUME    = {49},
     YEAR      = {2023}
}

Note that both citations are in BibLaTeX format.

Main features

Optimization algorithms (solvers)

For every optimization algorithm, a solver is implemented based on an AbstractManoptProblem that describes the problem to solve, and its AbstractManoptSolverState that sets up the solver and stores values that are required between iterations or for the next iteration. Together they form a plan.
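Schematically, such a plan can also be assembled by hand; a sketch under the assumption that Manopt.jl and Manifolds.jl are available (constructor signatures may vary slightly between versions):

```julia
using Manopt, Manifolds

M = Euclidean(2)
f(M, p) = sum(abs2, p)
grad_f(M, p) = 2 .* p

objective = ManifoldGradientObjective(f, grad_f)
problem = DefaultManoptProblem(M, objective)   # what to solve
state = GradientDescentState(M; p=[1.0, 2.0])  # how to solve it

solve!(problem, state)
get_iterate(state)   # the resulting minimizer, here close to [0.0, 0.0]
```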

Manifolds

This project is built upon ManifoldsBase.jl, a generic interface to implement manifolds. Certain functions are extended for specific manifolds from Manifolds.jl, but all other manifolds from that package can be used here, too.

The notation in the documentation aims to follow the same notation from these packages.

Visualization

To visualize and interpret results, Manopt.jl aims to provide both easy plot functions as well as exports. Furthermore, it provides a system to display debug output during the iterations of an algorithm, as well as record capabilities, for example to record a specified tuple of values per iteration, most prominently RecordCost and RecordIterate. Take a look at the 🏔️ Get started: optimize. tutorial on how to easily activate this.
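For instance, recording values per iteration could look like the following sketch (assuming Manopt.jl and Manifolds.jl are installed; see the tutorial for details):

```julia
using Manopt, Manifolds

M = Sphere(2)
q = [0.0, 0.0, 1.0]
f(M, p) = distance(M, p, q)^2 / 2
grad_f(M, p) = -log(M, p, q)

# return_state=true yields the decorated state, so the record can be queried
s = gradient_descent(M, f, grad_f, [1.0, 0.0, 0.0];
    record=[:Iteration, :Cost], return_state=true,
)
get_record(s)   # the recorded (iteration, cost) values
```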

Literature

If you want to get started with manifolds, one book is [Car92], and if you want to directly dive into optimization on manifolds, good references are [AMS08] and [Bou23], which are both available online for free.

[AMS08]
P.-A. Absil, R. Mahony and R. Sepulchre. Optimization Algorithms on Matrix Manifolds (Princeton University Press, 2008), available online at press.princeton.edu/chapters/absil/.
[Bou23]
[Car92]
M. P. do Carmo. Riemannian Geometry. Mathematics: Theory & Applications (Birkhäuser Boston, Inc., Boston, MA, 1992); p. xiv+300.
Notation · Manopt.jl

Notation

In this package, the notation introduced in Manifolds.jl Notation is used, with the following additional parts.

Symbol | Description | Also used | Comment
$\operatorname{arg\,min}$ | argument of a function $f$ where a local or global minimum is attained | |
$k$ | the current iterate | $i$ | the goal is to unify this to $k$
$∇$ | the Levi-Civita connection | |
Debug Output · Manopt.jl

Debug output

Debug output can easily be added to any solver run. On the high level interfaces, like gradient_descent, you can just use the debug= keyword.
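For example, a sketch (assuming Manopt.jl and Manifolds.jl are installed) printing the iteration number and cost every 10th iteration, plus the stopping reason:

```julia
using Manopt, Manifolds

M = Sphere(2)
q = [0.0, 0.0, 1.0]
f(M, p) = distance(M, p, q)^2 / 2
grad_f(M, p) = -log(M, p, q)

gradient_descent(M, f, grad_f, [1.0, 0.0, 0.0];
    debug=[:Iteration, " | ", :Cost, "\n", 10, :Stop],
)
```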

Manopt.DebugActionType
DebugAction

A DebugAction is a small functor to print/issue debug output. The usual call is given by (p::AbstractManoptProblem, s::AbstractManoptSolverState, k) -> s, where k is the current iteration.

By convention, k=0 is interpreted as "For Initialization only": only debug info that prints initialization reacts to it, while k<0 triggers updates of variables internally but does not trigger any output.

Fields (assumed by subtypes to exist)

  • print method to perform the actual print. Can for example be set to a file export,

or to @info. The default is the print function on the default Base.stdout.

source
Manopt.DebugChangeType
DebugChange(M=DefaultManifold(); kwargs...)

debug for the amount of change of the iterate (stored in get_iterate(o) of the AbstractManoptSolverState) during the last iteration. See DebugEntryChange for the general case

Keyword parameters

the inverse retraction to be used for approximating distance.

source
Manopt.DebugCostType
DebugCost <: DebugAction

print the current cost function value, see get_cost.

Constructors

DebugCost()

Parameters

  • format="$prefix %f": format to print the output
  • io=stdout: default stream to print the debug to.
  • long=false: whether to use the long prefix current cost: instead of the short default f(x):
source
Manopt.DebugDividerType
DebugDivider <: DebugAction

print a small divider (default " | ").

Constructor

DebugDivider(div,print)
source
Manopt.DebugEntryType
DebugEntry <: DebugAction

print a certain field's entry during the iterations, where a format can be specified for how to print the entry.

Additional fields

Constructor

DebugEntry(f; prefix="$f:", format = "$prefix %s", io=stdout)
source
Manopt.DebugEntryChangeType
DebugEntryChange{T} <: DebugAction

print a certain entry's change during the iterations

Additional fields

  • print: function to print the result
  • prefix: prefix to the print out
  • format: format to print (uses the prefix by default and scientific notation)
  • field: Symbol the field can be accessed with within AbstractManoptSolverState
  • distance: function (p,o,x1,x2) to compute the change/distance between two values of the entry
  • storage: a StoreStateAction to store the previous value of :f

Constructors

DebugEntryChange(f,d)

Keyword arguments

  • io=stdout: an IOStream used for the debug
  • prefix="Change of $f": the prefix
  • storage=StoreStateAction((f,)): a StoreStateAction
  • initial_value=NaN: an initial value for the change of o.field.
  • format="$prefix %e": format to print the change
source
Manopt.DebugEveryType
DebugEvery <: DebugAction

evaluate and print debug only every $k$th iteration. Otherwise no print is performed. Whether internal variables are updated is determined by always_update.

This method does not perform any print itself but relies on its children's prints.

It also sets the subsolver's active parameter, see DebugWhenActive. Here, the activation_offset can be used to specify which iteration the debug refers to: if this call happens before the iteration, the offset should be 0; if it refers to the next iteration, that is, if this is called after an iteration, it has to be set to 1. Since debug usually happens after the iteration, 1 is the default.

Constructor

DebugEvery(d::DebugAction, every=1, always_update=true, activation_offset=1)
source
Manopt.DebugFeasibilityType
DebugFeasibility <: DebugAction

Display information about the feasibility of the current iterate

Fields

  • atol: absolute tolerance for when either equality or inequality constraints are counted as violated
  • format: a vector of symbols and string formatting the output
  • io: default stream to print the debug to.

The following symbols are filled with values

  • :Feasible display true or false depending on whether the iterate is feasible
  • :FeasibleEq display whether the equality constraints are fulfilled or not
  • :FeasibleInEq display whether the inequality constraints are fulfilled or not
  • :NumEq display the number of infeasible equality constraints
  • :NumEqNz display the number of infeasible equality constraints, only if it is nonzero
  • :NumIneq display the number of infeasible inequality constraints
  • :NumIneqNz display the number of infeasible inequality constraints, only if it is nonzero
  • :TotalEq display the sum of how much the equality constraints are violated
  • :TotalInEq display the sum of how much the inequality constraints are violated

format to print the output.

Constructor

DebugFeasibility( format=["feasible: ", :Feasible]; io::IO=stdout, atol=1e-13 )

source
Manopt.DebugGradientChangeType
DebugGradientChange()

debug for the amount of change of the gradient (stored in get_gradient(o) of the AbstractManoptSolverState o) during the last iteration. See DebugEntryChange for the general case

Keyword parameters

  • storage=StoreStateAction( (:Gradient,) ): storage of the action for previous data
  • prefix="Last Change:": prefix of the debug output (ignored if you set format)
  • io=stdout: default stream to print the debug to.
  • format="$prefix %f": format to print the output
source
Manopt.DebugGroupType
DebugGroup <: DebugAction

group a set of DebugActions into one action, where the internal prints are removed by default and the resulting strings are concatenated

Constructor

DebugGroup(g)

construct a group consisting of an Array of DebugActions g, which are evaluated en bloc; the method does not perform any print itself, but relies on the internal prints. It still concatenates the results and returns the complete string.

source
Manopt.DebugIfEntryType
DebugIfEntry <: DebugAction

Issue a warning, info, or error if a certain field does not pass the check.

The message is printed in this case. If it contains a @printf argument identifier, that one is filled with the value of the field. That way you can print the value in this case as well.

Fields

  • io: an IO stream
  • check: a function that takes the value of the field as input and returns a boolean
  • field: symbol the entry can be accessed with within AbstractManoptSolverState
  • msg: if the check fails, this message is displayed
  • type: symbol specifying the type of display, possible values :print, :warn, :info, :error, where :print prints to io.

Constructor

DebugIfEntry(field, check=(>(0)); type=:warn, message=":$f is nonnegative", io=stdout)
source
Manopt.DebugIterateType
DebugIterate <: DebugAction

debug for the current iterate (stored in get_iterate(o)).

Constructor

DebugIterate(; kwargs...)

Keyword arguments

  • io=stdout: default stream to print the debug to.
  • format="$prefix %s": format how to print the current iterate
  • long=false: whether to have a long ("current iterate:") or a short ("p:") prefix default
  • prefix: (see long for default) set a prefix to be printed before the iterate
source
Manopt.DebugIterationType
DebugIteration <: DebugAction

Constructor

DebugIteration()

Keyword parameters

  • format="# %-6d": format to print the output
  • io=stdout: default stream to print the debug to.

debug for the current iteration (prefixed with # by default)

source
Manopt.DebugMessagesType
DebugMessages <: DebugAction

An AbstractManoptSolverState or one of its sub steps, like a Stepsize, might generate warnings throughout their computations. This debug can be used to :print these messages, display them as :info or :warning, or even raise them as an :error, depending on the message type.

Constructor

DebugMessages(mode=:Info, warn=:Once; io::IO=stdout)

Initialize the messages debug to a certain mode. Available modes are

  • :Error: issue the messages as an error and hence stop at any issue occurring
  • :Info: issue the messages as an @info
  • :Print: print messages to the stream io.
  • :Warning: issue the messages as a warning

The warn level can be set to :Once to display only the first message, or to :Always to report every message. One can set it to :No to deactivate this, then this DebugAction is inactive. All other symbols are handled as if they were :Always.

source
Manopt.DebugSolverStateType
DebugSolverState <: AbstractManoptSolverState

The debug state appends debug to any state, they act as a decorator pattern. Internally a dictionary is kept that stores a DebugAction for several occasions using a Symbol as reference.

The original options can still be accessed using the get_state function.

Fields

  • options: the options that are extended by debug information
  • debugDictionary: a Dict{Symbol,DebugAction} to keep track of Debug for different actions

Constructors

DebugSolverState(o,dA)

construct debug decorated options, where dA can be

  • a DebugAction, then it is stored within the dictionary at :Iteration
  • an Array of DebugActions.
  • a Dict{Symbol,DebugAction}.
  • an Array of Symbols, String and an Int for the DebugFactory
source
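As a sketch of this decorator pattern (assuming Manopt.jl and Manifolds.jl are available; constructor details may differ between versions):

```julia
using Manopt, Manifolds

M = Euclidean(2)
state = GradientDescentState(M; p=[1.0, 2.0])

# decorate the state: print iteration number, a divider, and the cost
dss = DebugSolverState(state,
    [DebugIteration(), DebugDivider(" | "), DebugCost()])

# the original, undecorated state is still accessible
get_state(dss)
```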
Manopt.DebugStoppingCriterionType
DebugStoppingCriterion <: DebugAction

print the Reason provided by the stopping criterion. Usually this should be empty, unless the algorithm stops.

Fields

  • prefix="": format to print the output
  • io=stdout: default stream to print the debug to.

Constructor

DebugStoppingCriterion(prefix = ""; io::IO=stdout)

source
Manopt.DebugTimeType
DebugTime()

Measure time and print the intervals. Using start=true you can start the timer on construction, for example to measure the overall runtime of an algorithm.

The measured time is rounded using the given time_accuracy and printed after canonicalization.

Keyword parameters

  • io=stdout: default stream to print the debug to.
  • format="$prefix %s": format to print the output, where %s is the canonicalized time.
  • mode=:cumulative: whether to display the total time or reset on every call using :iterative.
  • prefix="Last Change:": prefix of the debug output (ignored if you set format)
  • start=false: indicate whether to start the timer on creation or not. Otherwise it might only be started on first call.
  • time_accuracy=Millisecond(1): round the time to this period before printing the canonicalized time
source
Manopt.DebugWarnIfCostIncreasesType
DebugWarnIfCostIncreases <: DebugAction

print a warning if the cost increases.

Note that this provides an additional warning for gradient descent with its default constant step size.

Constructor

DebugWarnIfCostIncreases(warn=:Once; tol=1e-13)

Initialize the warning to warning level (:Once) and introduce a tolerance for the test of 1e-13.

The warn level can be set to :Once to only warn the first time the cost increases, to :Always to report an increase every time it happens, and it can be set to :No to deactivate the warning, then this DebugAction is inactive. All other symbols are handled as if they were :Always:

source
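A usage sketch (assuming Manopt.jl and Manifolds.jl are installed), attaching this warning to a solver run via the debug= keyword:

```julia
using Manopt, Manifolds

M = Sphere(2)
q = [0.0, 0.0, 1.0]
f(M, p) = distance(M, p, q)^2 / 2
grad_f(M, p) = -log(M, p, q)

# warn (at most once) should the cost ever increase between iterations
gradient_descent(M, f, grad_f, [1.0, 0.0, 0.0];
    debug=[DebugWarnIfCostIncreases(:Once)],
)
```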
Manopt.DebugWarnIfCostNotFiniteType
DebugWarnIfCostNotFinite <: DebugAction

A debug to see when a field (a value or array) within the AbstractManoptSolverState is or contains values that are not finite, for example Inf or NaN.

Constructor

DebugWarnIfCostNotFinite(field::Symbol, warn=:Once)

Initialize the warning to warn :Once.

This can be set to :Once to only warn the first time the cost is NaN. It can also be set to :No to deactivate the warning, but this makes this Action also useless. All other symbols are handled as if they were :Always.

source
Manopt.DebugWarnIfFieldNotFiniteType
DebugWarnIfFieldNotFinite <: DebugAction

A debug to see when a field from the options is not finite, for example Inf or NaN

Constructor

DebugWarnIfFieldNotFinite(field::Symbol, warn=:Once)

Initialize the warning to warn :Once.

This can be set to :Once to only warn the first time the field is not finite. It can also be set to :No to deactivate the warning, but this makes this Action also useless. All other symbols are handled as if they were :Always.

Example

DebugWarnIfFieldNotFinite(:Gradient)

Creates a DebugAction to track whether the gradient contains NaN or Inf values.

source
Manopt.DebugWarnIfGradientNormTooLargeType
DebugWarnIfGradientNormTooLarge{T} <: DebugAction

A debug to warn when an evaluated gradient at the current iterate is larger than (a factor times) the maximal (recommended) stepsize at the current iterate.

Constructor

DebugWarnIfGradientNormTooLarge(factor::T=1.0, warn=:Once)

Initialize the warning to warn :Once.

This can be set to :Once to only warn the first time the gradient norm is too large. It can also be set to :No to deactivate the warning, but this makes this Action also useless. All other symbols are handled as if they were :Always.

Example

DebugWarnIfGradientNormTooLarge(1.0)

Creates a DebugAction to warn whenever the norm of the gradient exceeds (the factor times) the maximal recommended stepsize at the current iterate.

source
Manopt.DebugWhenActiveType
DebugWhenActive <: DebugAction

evaluate and print debug only if the active boolean is set. This can be set from outside and is for example triggered by DebugEvery on debugs on the subsolver.

This method does not perform any print itself but relies on its children's prints.

For now, the main interaction is with DebugEvery which might activate or deactivate this debug

Fields

  • active: a boolean that can (de-)activated from outside to turn on/off debug
  • always_update: whether or not to call the inner debugs with iteration <= 0 even in inactive state

Constructor

DebugWhenActive(d::DebugAction, active=true, always_update=true)
source
Manopt.DebugActionFactoryMethod
DebugActionFactory(s)

create a DebugAction where

  • a String yields the corresponding divider
  • a DebugAction is passed through
  • a Symbol creates a DebugEntry of that symbol, with the exceptions of :Change, :Iterate, :Iteration, and :Cost.
  • a Tuple{Symbol,String} creates a DebugEntry of that symbol where the String specifies the format.
source
Manopt.DebugActionFactoryMethod
DebugActionFactory(s::Symbol)

Convert certain Symbols in the debug=[...] vector to DebugActions. Currently the following ones are done. Note that the Shortcut symbols should all start with a capital letter.

any other symbol creates a DebugEntry(s) to print the entry (o.:s) from the options.

source
Manopt.DebugActionFactoryMethod
DebugActionFactory(t::Tuple{Symbol,String})

Convert certain Symbols in the debug=[...] vector to DebugActions. Currently the following ones are done, where the string in t[2] is passed as the format of the corresponding debug. Note that the Shortcut symbols t[1] should all start with a capital letter.

any other symbol creates a DebugEntry(s) to print the entry (o.:s) from the options.

source
Manopt.DebugFactoryMethod
DebugFactory(a::Vector)

Generate a dictionary of DebugActions.

First, all Symbols, Strings, DebugActions and numbers are collected, excluding :Stop and :WhenActive. This collected vector is added to the :Iteration => [...] pair. :Stop is added as :StoppingCriterion to the :Stop => [...] pair. If necessary, these pairs are created.

For each Pair of a Symbol and a Vector, the DebugGroupFactory is called for the Vector and the result is added to the debug dictionary's entry with said symbol. This is wrapped into the DebugWhenActive, when the :WhenActive symbol is present

Return value

A dictionary for the different entry points where debug can happen, each containing a DebugAction to call.

Note that upon the initialisation all dictionaries but the :StartAlgorithm one are called with an i=0 for reset.

Examples

  1. Providing a simple vector of symbols, numbers and strings like

    [:Iterate, " | ", :Cost, :Stop, 10]

    Adds a group to :Iteration of three actions (DebugIteration, DebugDivider(" | "), and DebugCost) as a DebugGroup inside a DebugEvery to only be executed every 10th iteration. It also adds the DebugStoppingCriterion to the :EndAlgorithm entry of the dictionary.

  2. The same can also be written a bit more precise as

    DebugFactory([:Iteration => [:Iterate, " | ", :Cost, 10], :Stop])
  3. One can even make the stopping criterion concrete and pass Actions directly; making the stop more concrete, this reads

    DebugFactory([:Iteration => [:Iterate, " | ", DebugCost(), 10], :Stop => [:Stop]])
source
Manopt.DebugGroupFactoryMethod
DebugGroupFactory(a::Vector)

Generate a DebugGroup of DebugActions. The following rules are used

  1. Any Symbol is passed to DebugActionFactory
  2. Any (Symbol, String) generates similar actions as in 1., but the string is used for format=, see DebugActionFactory
  3. Any String is passed to DebugActionFactory
  4. Any DebugAction is included as is.

If this results in more than one DebugAction, a DebugGroup of these is built.

If any integers are present, the last of these is used to wrap the group in a DebugEvery(k).

If :WhenActive is present, the resulting Action is wrapped in DebugWhenActive, making it deactivatable by its parent solver.

source
Manopt.set_parameter!Method
set_parameter!(ams::DebugSolverState, ::Val{:Debug}, args...)

Set certain values specified by args... into the elements of the debugDictionary

source

Technical details

The decorator to print debug during the iterations can be activated by decorating the state of a solver and implementing your own DebugActions. For example printing a gradient from the GradientDescentState is automatically available, as explained in the gradient_descent solver.

Manopt.initialize_solver!Method
initialize_solver!(amp::AbstractManoptProblem, dss::DebugSolverState)

Extend the initialization of the solver by a hook to run the DebugAction that was added to the :Start entry of the debug lists. All others are called with iteration number 0 to trigger possible resets.

source
Manopt.step_solver!Method
step_solver!(amp::AbstractManoptProblem, dss::DebugSolverState, k)

Extend the kth step of the solver by a hook to run debug prints that were added to the :BeforeIteration and :Iteration entries of the debug lists.

source
Manopt.stop_solver!Method
stop_solver!(amp::AbstractManoptProblem, dss::DebugSolverState, k)

Extend the stop_solver!, whether to stop the solver by a hook to run debug, that were added to the :Stop entry of the debug lists.

source
+Debug Output · Manopt.jl

Debug output

Debug output can easily be added to any solver run. On the high level interfaces, like gradient_descent, you can just use the debug= keyword.

Manopt.DebugActionType
DebugAction

A DebugAction is a small functor to print/issue debug output. The usual call is given by (p::AbstractManoptProblem, s::AbstractManoptSolverState, k) -> s, where i is the current iterate.

By convention i=0 is interpreted as "For Initialization only," only debug info that prints initialization reacts, i<0 triggers updates of variables internally but does not trigger any output.

Fields (assumed by subtypes to exist)

  • print method to perform the actual print. Can for example be set to a file export,

or to @info. The default is the print function on the default Base.stdout.

source
Manopt.DebugChangeType
DebugChange(M=DefaultManifold(); kwargs...)

debug for the amount of change of the iterate (stored in get_iterate(o) of the AbstractManoptSolverState) during the last iteration. See DebugEntryChange for the general case

Keyword parameters

the inverse retraction to be used for approximating distance.

source
Manopt.DebugCostType
DebugCost <: DebugAction

print the current cost function value, see get_cost.

Constructors

DebugCost()

Parameters

  • format="$prefix %f": format to print the output
  • io=stdout: default stream to print the debug to.
  • long=false: short form to set the format to f(x): (default) or current cost: and the cost
source
Manopt.DebugDividerType
DebugDivider <: DebugAction

print a small divider (default " | ").

Constructor

DebugDivider(div,print)
source
Manopt.DebugEntryType
DebugEntry <: DebugAction

print a certain fields entry during the iterates, where a format can be specified how to print the entry.

Additional fields

Constructor

DebugEntry(f; prefix="$f:", format = "$prefix %s", io=stdout)
source
Manopt.DebugEntryChangeType
DebugEntryChange{T} <: DebugAction

print the change of a certain entry during the iterations

Additional fields

  • print: function to print the result
  • prefix: prefix to the print out
  • format: format to print (uses the prefix by default and scientific notation)
  • field: Symbol the field can be accessed with within AbstractManoptSolverState
  • distance: function (p,o,x1,x2) to compute the change/distance between two values of the entry
  • storage: a StoreStateAction to store the previous value of :f

Constructors

DebugEntryChange(f,d)

Keyword arguments

  • io=stdout: an IOStream used for the debug
  • prefix="Change of $f": the prefix
  • storage=StoreStateAction((f,)): a StoreStateAction
  • initial_value=NaN: an initial value for the change of o.field.
  • format="$prefix %e": format to print the change
source
Manopt.DebugEveryType
DebugEvery <: DebugAction

evaluate and print debug only every $k$th iteration. Otherwise no print is performed. Whether internal variables are updated is determined by always_update.

This method does not perform any print itself but relies on its children's prints.

It also sets the subsolver's active parameter, see DebugWhenActive. Here, the activation_offset can be used to specify which iteration the check refers to: if this is called before the iteration, the offset should be 0; if it is called after an iteration, it has to be set to 1. Since debug usually happens after the iteration, 1 is the default.

Constructor

DebugEvery(d::DebugAction, every=1, always_update=true, activation_offset=1)
source
Manopt.DebugFeasibilityType
DebugFeasibility <: DebugAction

Display information about the feasibility of the current iterate

Fields

  • atol: absolute tolerance for when either equality or inequality constraints are counted as violated
  • format: a vector of symbols and string formatting the output
  • io: default stream to print the debug to.

The following symbols are filled with values

  • :Feasible display true or false depending on whether the iterate is feasible
  • :FeasibleEq display whether the equality constraints are fulfilled
  • :FeasibleInEq display whether the inequality constraints are fulfilled
  • :NumEq display the number of infeasible equality constraints
  • :NumEqNz display the number of infeasible equality constraints, if nonzero
  • :NumIneq display the number of infeasible inequality constraints
  • :NumIneqNz display the number of infeasible inequality constraints, if nonzero
  • :TotalEq display the sum of how much the equality constraints are violated
  • :TotalInEq display the sum of how much the inequality constraints are violated


Constructor

DebugFeasibility(format=["feasible: ", :Feasible]; io::IO=stdout, atol=1e-13)

source
Manopt.DebugGradientChangeType
DebugGradientChange()

debug for the amount of change of the gradient (stored in get_gradient(o) of the AbstractManoptSolverState o) during the last iteration. See DebugEntryChange for the general case

Keyword parameters

  • storage=StoreStateAction( (:Gradient,) ): storage of the action for previous data
  • prefix="Last Change:": prefix of the debug output (ignored if you set format)
  • io=stdout: default stream to print the debug to.
  • format="$prefix %f": format to print the output
source
Manopt.DebugGroupType
DebugGroup <: DebugAction

group a set of DebugActions into one action, where the internal prints are removed by default and the resulting strings are concatenated

Constructor

DebugGroup(g)

construct a group consisting of an Array of DebugActions g that are evaluated en bloc; the method does not perform any print itself, but relies on the internal prints. It still concatenates the result and returns the complete string

source
Manopt.DebugIfEntryType
DebugIfEntry <: DebugAction

Issue a warning, info, or error if a certain field does not pass the check.

The message is printed in this case. If it contains a @printf argument identifier, that one is filled with the value of the field. That way you can print the value in this case as well.

Fields

  • io: an IO stream
  • check: a function that takes the value of the field as input and returns a boolean
  • field: symbol the entry can be accessed with within AbstractManoptSolverState
  • msg: if the check fails, this message is displayed
  • type: symbol specifying the type of display, possible values :print, :warn, :info, :error, where :print prints to io.

Constructor

DebugIfEntry(field, check=(>(0)); type=:warn, message=":$f is nonnegative", io=stdout)
source
Manopt.DebugIterateType
DebugIterate <: DebugAction

debug for the current iterate (stored in get_iterate(o)).

Constructor

DebugIterate(; kwargs...)

Keyword arguments

  • io=stdout: default stream to print the debug to.
  • format="$prefix %s": format how to print the current iterate
  • long=false: whether to have a long ("current iterate:") or a short ("p:") prefix default
  • prefix: (see long for default) set a prefix to be printed before the iterate
source
Manopt.DebugIterationType
DebugIteration <: DebugAction

Constructor

DebugIteration()

Keyword parameters

  • format="# %-6d": format to print the output
  • io=stdout: default stream to print the debug to.

debug for the current iteration (prefixed with # by default)

source
Manopt.DebugMessagesType
DebugMessages <: DebugAction

An AbstractManoptSolverState or one of its sub steps, like a Stepsize, might generate warnings throughout their computations. This debug can be used to :print them, display them as :info or :warning, or even raise them as an :error, depending on the message type.

Constructor

DebugMessages(mode=:Info, warn=:Once; io::IO=stdout)

Initialize the messages debug to a certain mode. Available modes are

  • :Error: issue the messages as an error and hence stop at any issue occurring
  • :Info: issue the messages as an @info
  • :Print: print messages to the stream io.
  • :Warning: issue the messages as a warning

The warn level can be set to :Once to display only the first message, or to :Always to report every message. One can set it to :No to deactivate this, then this DebugAction is inactive. All other symbols are handled as if they were :Always.

source
Manopt.DebugSolverStateType
DebugSolverState <: AbstractManoptSolverState

The debug state appends debug to any state; it acts in the decorator pattern. Internally a dictionary is kept that stores a DebugAction for several occasions using a Symbol as reference.

The original options can still be accessed using the get_state function.

Fields

  • options: the options that are extended by debug information
  • debugDictionary: a Dict{Symbol,DebugAction} to keep track of Debug for different actions

Constructors

DebugSolverState(o,dA)

construct debug decorated options, where dA can be

  • a DebugAction, then it is stored within the dictionary at :Iteration
  • an Array of DebugActions.
  • a Dict{Symbol,DebugAction}.
  • an Array of Symbols, String and an Int for the DebugFactory
source
Manopt.DebugStoppingCriterionType
DebugStoppingCriterion <: DebugAction

print the Reason provided by the stopping criterion. Usually this should be empty, unless the algorithm stops.

Fields

  • prefix="": format to print the output
  • io=stdout: default stream to print the debug to.

Constructor

DebugStoppingCriterion(prefix = ""; io::IO=stdout)

source
Manopt.DebugTimeType
DebugTime()

Measure time and print the intervals. Using start=true you can start the timer on construction, for example to measure the overall runtime of an algorithm.

The measured time is rounded using the given time_accuracy and printed after canonicalization.

Keyword parameters

  • io=stdout: default stream to print the debug to.
  • format="$prefix %s": format to print the output, where %s is the canonicalized time.
  • mode=:cumulative: whether to display the total time or reset on every call using :iterative.
  • prefix="Last Change:": prefix of the debug output (ignored if you set format)
  • start=false: indicate whether to start the timer on creation or not. Otherwise it might only be started on first call.
  • time_accuracy=Millisecond(1): round the time to this period before printing the canonicalized time
source
Manopt.DebugWarnIfCostIncreasesType
DebugWarnIfCostIncreases <: DebugAction

print a warning if the cost increases.

Note that this provides an additional warning for gradient descent with its default constant step size.

Constructor

DebugWarnIfCostIncreases(warn=:Once; tol=1e-13)

Initialize the warning to warning level (:Once) and introduce a tolerance for the test of 1e-13.

The warn level can be set to :Once to only warn the first time the cost increases, to :Always to report an increase every time it happens, and to :No to deactivate the warning, then this DebugAction is inactive. All other symbols are handled as if they were :Always.

source
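Debug actions such as this one can also be passed as instances inside the debug= vector of a high-level solver call, mixed with symbols and strings. A minimal sketch, where the cost and gradient are illustrative assumptions, not part of this page:

```julia
using Manopt, Manifolds

M = Euclidean(2)
f(M, p) = sum(p .^ 2)        # illustrative cost
grad_f(M, p) = 2 .* p        # its (Euclidean) gradient

# warn on every cost increase, and print iteration number and cost every 25th iteration
p_res = gradient_descent(M, f, grad_f, [2.0, 1.0];
    debug=[DebugWarnIfCostIncreases(:Always; tol=1e-13), :Iteration, " | ", :Cost, "\n", 25],
)
```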
Manopt.DebugWarnIfCostNotFiniteType
DebugWarnIfCostNotFinite <: DebugAction

A debug to warn when the cost (a value or array within the AbstractManoptSolverState) is or contains values that are not finite, for example Inf or NaN.

Constructor

DebugWarnIfCostNotFinite(field::Symbol, warn=:Once)

Initialize the warning to warn :Once.

This can be set to :Once to only warn the first time the cost is not finite. It can also be set to :No to deactivate the warning, but this makes this Action also useless. All other symbols are handled as if they were :Always.

source
Manopt.DebugWarnIfFieldNotFiniteType
DebugWarnIfFieldNotFinite <: DebugAction

A debug to see when a field from the options is not finite, for example Inf or NaN.

Constructor

DebugWarnIfFieldNotFinite(field::Symbol, warn=:Once)

Initialize the warning to warn :Once.

This can be set to :Once to only warn the first time the field is not finite. It can also be set to :No to deactivate the warning, but this makes this Action also useless. All other symbols are handled as if they were :Always.

Example

DebugWarnIfFieldNotFinite(:Gradient)

Creates a DebugAction to track whether the gradient contains NaN or Inf values.

source
Manopt.DebugWarnIfGradientNormTooLargeType
DebugWarnIfGradientNormTooLarge{T} <: DebugAction

A debug to warn when an evaluated gradient at the current iterate is larger than (a factor times) the maximal (recommended) stepsize at the current iterate.

Constructor

DebugWarnIfGradientNormTooLarge(factor::T=1.0, warn=:Once)

Initialize the warning to warn :Once.

This can be set to :Once to only warn the first time the norm is too large. It can also be set to :No to deactivate the warning, but this makes this Action also useless. All other symbols are handled as if they were :Always.

Example

DebugWarnIfGradientNormTooLarge(1.0)

Creates a DebugAction to warn whenever the norm of the gradient at the current iterate exceeds the maximal recommended stepsize.

source
Manopt.DebugWhenActiveType
DebugWhenActive <: DebugAction

evaluate and print debug only if the active boolean is set. This can be set from outside and is for example triggered by DebugEvery on debug actions of a subsolver.

This method does not perform any print itself but relies on its children's prints.

For now, the main interaction is with DebugEvery which might activate or deactivate this debug

Fields

  • active: a boolean that can be (de)activated from outside to turn debug on/off
  • always_update: whether to call the inner debugs with nonpositive iteration numbers even in an inactive state

Constructor

DebugWhenActive(d::DebugAction, active=true, always_update=true)
source
Manopt.DebugActionFactoryMethod
DebugActionFactory(s)

create a DebugAction where

  • a String yields the corresponding divider
  • a DebugAction is passed through
  • a Symbol creates a DebugEntry of that symbol, with the exceptions of :Change, :Iterate, :Iteration, and :Cost.
  • a Tuple{Symbol,String} creates a DebugEntry of that symbol where the String specifies the format.
source
Manopt.DebugActionFactoryMethod
DebugActionFactory(s::Symbol)

Convert certain Symbols in the debug=[...] vector to DebugActions. Currently the following ones are covered. Note that the shortcut symbols should all start with a capital letter.

Any other symbol creates a DebugEntry(s) to print the entry (o.:s) from the options.

source
Manopt.DebugActionFactoryMethod
DebugActionFactory(t::Tuple{Symbol,String})

Convert certain Symbols in the debug=[...] vector to DebugActions. Currently the following ones are covered, where the string in t[2] is passed as the format to the corresponding debug. Note that the shortcut symbols t[1] should all start with a capital letter.

Any other symbol creates a DebugEntry(s) to print the entry (o.:s) from the options.

source
Manopt.DebugFactoryMethod
DebugFactory(a::Vector)

Generate a dictionary of DebugActions.

First, all Symbols, Strings, DebugActions and numbers are collected, excluding :Stop and :WhenActive. This collected vector is added to the :Iteration => [...] pair. :Stop is added as :StoppingCriterion to the :Stop => [...] pair. If necessary, these pairs are created.

For each pair of a Symbol and a Vector, the DebugGroupFactory is called for the Vector and the result is added to the debug dictionary's entry with said symbol. This is wrapped into a DebugWhenActive when the :WhenActive symbol is present.

Return value

A dictionary for the different entry points where debug can happen, each containing a DebugAction to call.

Note that upon initialisation all dictionary entries but the :StartAlgorithm one are called with iteration number 0 for reset.

Examples

  1. Providing a simple vector of symbols, numbers and strings like

    [:Iterate, " | ", :Cost, :Stop, 10]

    Adds a group to :Iteration of three actions (DebugIteration, DebugDivider(" | "), and DebugCost) as a DebugGroup inside a DebugEvery, to only be executed every 10th iteration. It also adds the DebugStoppingCriterion to the :Stop entry of the dictionary.

  2. The same can also be written a bit more precise as

    DebugFactory([:Iteration => [:Iterate, " | ", :Cost, 10], :Stop])
  3. We can even make the stopping criterion concrete and pass actions directly, for example

    DebugFactory([:Iteration => [:Iterate, " | ", DebugCost(), 10], :Stop => [:Stop]])
source
Manopt.DebugGroupFactoryMethod
DebugGroupFactory(a::Vector)

Generate a DebugGroup of DebugActions. The following rules are used

  1. Any Symbol is passed to DebugActionFactory
  2. Any (Symbol, String) generates similar actions as in 1., but the string is used for format=, see DebugActionFactory
  3. Any String is passed to DebugActionFactory
  4. Any DebugAction is included as is.

If this results in more than one DebugAction, a DebugGroup of these is built.

If any integers are present, the last of these is used to wrap the group in a DebugEvery(k).

If :WhenActive is present, the resulting Action is wrapped in DebugWhenActive, making it deactivatable by its parent solver.

source
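Following these rules, a small sketch of what the factory produces; the exact wrapper types are as described above:

```julia
using Manopt

# a cost entry, a formatted iterate, a divider, to be run every 5th iteration:
group = DebugGroupFactory([:Cost, (:Iterate, "p: %s"), " | ", 5])
# per the rules above, the three actions are collected into a DebugGroup,
# which the trailing 5 wraps into a DebugEvery(…, 5)
```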
Manopt.set_parameter!Method
set_parameter!(ams::DebugSolverState, ::Val{:Debug}, args...)

Set certain values specified by args... into the elements of the debugDictionary

source

Technical details

The decorator to print debug during the iterations can be activated by decorating the state of a solver and implementing your own DebugActions. For example, printing a gradient from the GradientDescentState is automatically available, as explained in the gradient_descent solver.

Manopt.initialize_solver!Method
initialize_solver!(amp::AbstractManoptProblem, dss::DebugSolverState)

Extend the initialization of the solver by a hook to run the DebugAction that was added to the :Start entry of the debug lists. All others are called with iteration number 0 to allow possible resets.

source
Manopt.step_solver!Method
step_solver!(amp::AbstractManoptProblem, dss::DebugSolverState, k)

Extend the kth step of the solver by a hook to run debug prints that were added to the :BeforeIteration and :Iteration entries of the debug lists.

source
Manopt.stop_solver!Method
stop_solver!(amp::AbstractManoptProblem, dss::DebugSolverState, k)

Extend the stop_solver! check, deciding whether to stop the solver, by a hook to run the debug actions that were added to the :Stop entry of the debug lists.

source

Plans for solvers

For any optimisation performed in Manopt.jl information is required about both the optimisation task or “problem” at hand as well as the solver and all its parameters. This together is called a plan in Manopt.jl and it consists of two data structures:

  • The Manopt Problem describes all static data of a task, most prominently the manifold and the objective.
  • The Solver State describes all varying data and parameters for the solver that is used. This also means that each solver has its own data structure for the state.

By splitting these two parts, one problem can be defined and then be solved using different solvers.

Still there might be the need to set certain parameters within any of these structures. For that there is

Manopt.set_parameter!Function
set_parameter!(f, element::Symbol , args...)

For any f and a Symbol e, dispatch on its value by default, to set some args... in f or one of its sub elements.

source
set_parameter!(element::Symbol, value::Union{String,Bool,<:Number})

Set global Manopt parameters addressed by a symbol element. This first dispatches on the value of element.

The parameters are stored to the global settings using Preferences.jl.

Passing a value of "" deletes the corresponding entry from the preferences. Whenever the LocalPreferences.toml is modified, this is also issued as an @info.

source
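A small usage sketch of the global variant, using the :Mode setting documented on this page:

```julia
using Manopt

set_parameter!(:Mode, "Tutorial")  # activate tutorial-mode hints, stored via Preferences.jl
set_parameter!(:Mode, "")          # passing "" deletes the entry from the preferences again
```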
set_parameter!(amo::AbstractManifoldObjective, element::Symbol, args...)

Set certain args... from the AbstractManifoldObjective amo to value. This function should dispatch on Val(element).

Currently supported

source
set_parameter!(ams::AbstractManoptProblem, element::Symbol, field::Symbol , value)

Set a certain field/element from the AbstractManoptProblem ams to value. This function usually dispatches on Val(element). Instead of a single field, also a chain of elements can be provided, allowing to access encapsulated parts of the problem.

Main values for element are :Manifold and :Objective.

source
set_parameter!(ams::DebugSolverState, ::Val{:Debug}, args...)

Set certain values specified by args... into the elements of the debugDictionary

source
set_parameter!(ams::RecordSolverState, ::Val{:Record}, args...)

Set certain values specified by args... into the elements of the recordDictionary

source
set_parameter!(c::StopAfter, :MaxTime, v::Period)

Update the time period after which an algorithm shall stop.

source
set_parameter!(c::StopAfterIteration, :MaxIteration, v::Int)

Update the number of iterations after which the algorithm should stop.

source
set_parameter!(c::StopWhenChangeLess, :MinIterateChange, v::Int)

Update the minimal change below which an algorithm shall stop.

source
set_parameter!(c::StopWhenCostLess, :MinCost, v)

Update the minimal cost below which the algorithm shall stop

source
set_parameter!(c::StopWhenEntryChangeLess, :Threshold, v)

Update the threshold below which the algorithm shall stop

source
set_parameter!(c::StopWhenGradientChangeLess, :MinGradientChange, v)

Update the minimal change below which an algorithm shall stop.

source
set_parameter!(c::StopWhenGradientNormLess, :MinGradNorm, v::Float64)

Update the minimal gradient norm when an algorithm shall stop

source
set_parameter!(c::StopWhenStepsizeLess, :MinStepsize, v)

Update the minimal step size below which the algorithm shall stop

source
set_parameter!(c::StopWhenSubgradientNormLess, :MinSubgradNorm, v::Float64)

Update the minimal subgradient norm when an algorithm shall stop

source
set_parameter!(ams::AbstractManoptSolverState, element::Symbol, args...)

Set a certain field or semantic element from the AbstractManoptSolverState ams to value. This function passes to Val(element) and specific setters should dispatch on Val{element}.

By default, this function just does nothing.

source
set_parameter!(ams::DebugSolverState, ::Val{:SubProblem}, args...)

Set certain values specified by args... to the sub problem.

source
set_parameter!(ams::DebugSolverState, ::Val{:SubState}, args...)

Set certain values specified by args... to the sub state.

source
set_parameter!(c::StopWhenResidualIsReducedByFactorOrPower, :ResidualPower, v)

Update the residual Power θ to v.

source
set_parameter!(c::StopWhenResidualIsReducedByFactorOrPower, :ResidualFactor, v)

Update the residual Factor κ to v.

source
Manopt.get_parameterFunction
get_parameter(f, element::Symbol, args...)

Access arbitrary parameters from f addressed by a symbol element.

For any f and a Symbol e dispatch on its value by default, to get some element from f potentially further qualified by args....

This function returns nothing if f does not have the property element.

source
get_parameter(element::Symbol; default=nothing)

Access global Manopt parameters addressed by a symbol element. This first dispatches on the value of element.

If the value is not set, default is returned.

The parameters are queried from the global settings using Preferences.jl, so they are persistent within your activated Environment.

Currently used settings

:Mode the mode can be set to "Tutorial" to get several hints especially in scenarios, where the optimisation on manifolds is different from the usual “experience” in (classical, Euclidean) optimization. Any other value has the same effect as not setting it.

source
Manopt.status_summaryFunction
status_summary(e)

Return a string reporting about the current status of e, where e is a type from Manopt.

This method is similar to show but just returns a string. It might also be more verbose in explaining, or hide internal information.

source

The following symbols are used.

  • :Activity (DebugWhenActive): activity of the debug action stored within
  • :Basepoint (TangentSpace): the point the tangent space is at
  • :Cost (generic): the cost function (within an objective, passed down)
  • :Debug (DebugSolverState): the stored debugDictionary
  • :Gradient (generic): the gradient function (within an objective, passed down)
  • :Iterate (generic): the (current) iterate, similar to set_iterate!, within a state
  • :Manifold (generic): the manifold (within a problem, passed down)
  • :Objective (generic): the objective (within a problem, passed down)
  • :SubProblem (generic): the sub problem (within a state, passed down)
  • :SubState (generic): the sub state (within a state, passed down)
  • (ProximalDCCost, ProximalDCGrad): set the proximal parameter within the proximal sub objective elements
  • :Population (ParticleSwarmState): a certain population of points, for example particle_swarm's swarm
  • :Record (RecordSolverState): the stored recordDictionary
  • :TrustRegionRadius (TrustRegionsState): the trust region radius
  • :u (ExactPenaltyCost, ExactPenaltyGrad): parameters within the exact penalty objective
  • (AugmentedLagrangianCost, AugmentedLagrangianGrad): parameters of the Lagrangian function
  • :p, :X (LinearizedDCCost, LinearizedDCGrad): parameters within the linearized functional used for the sub problem of the difference of convex algorithm

Any other lower case name or letter, as well as single upper case letters, accesses fields of the corresponding first argument. For example :p could be used to access the field s.p of a state. This is often where the iterate is stored, so the recommended way is to use :Iterate from before.

Since the iterate is often stored in the state's field s.p, one could often also access the iterate with :p and similarly the gradient with :X. This is discouraged both for readability and to stay more generic; it is recommended to use :Iterate and :Gradient instead in generic settings.
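In code, the generic access looks like the following sketch; the cost, gradient, and use of return_state=true are illustrative assumptions, and get_iterate is the documented accessor:

```julia
using Manopt, Manifolds

M = Euclidean(2)
f(M, p) = sum(p .^ 2)      # illustrative cost
grad_f(M, p) = 2 .* p      # its gradient
s = gradient_descent(M, f, grad_f, [1.0, 2.0]; return_state=true)

p = get_iterate(s)                       # generic accessor for the iterate
set_parameter!(s, :Iterate, [0.0, 0.0])  # generic setter, instead of touching s.p
```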

You can further activate a “Tutorial” mode by set_parameter!(:Mode, "Tutorial"). Internally, the following convenience function is available.

Manopt.is_tutorial_modeFunction
is_tutorial_mode()

A small internal helper to indicate whether tutorial mode is active.

You can set the mode by calling set_parameter!(:Mode, "Tutorial") or deactivate it by set_parameter!(:Mode, "").

source

A factory for providing manifold defaults

In several cases a manifold might not yet be known at the time a (keyword) argument should be provided. Therefore, any type with a manifold default can be wrapped into a factory.

Manopt.ManifoldDefaultsFactoryType
ManifoldDefaultsFactory{M,T,A,K}

A generic factory to postpone the instantiation of certain types from within Manopt.jl, in order to be able to adapt them to defaults from different manifolds and/or postpone the decision on which manifold to use to a later point.

For now this is established for

This factory stores necessary and optional parameters as well as keyword arguments provided by the user to later produce the type this factory is for.

Besides a manifold as a fallback, the factory can also be used for the (maybe simpler) types from the list of types that do not require the manifold.

Fields

  • M::Union{Nothing,AbstractManifold}: provide a manifold for defaults
  • args::A: arguments (args...) that are passed to the type constructor
  • kwargs::K: keyword arguments (kwargs...) that are passed to the type constructor
  • constructor_requires_manifold::Bool: indicate whether the type constructor requires the manifold or not

Constructor

ManifoldDefaultsFactory(T, args...; kwargs...)
ManifoldDefaultsFactory(T, M, args...; kwargs...)

Input

  • T a subtype of types listed above that this factory is to produce
  • M (optional) a manifold used for the defaults in case no manifold is provided.
  • args... arguments to pass to the constructor of T
  • kwargs... keyword arguments to pass (overwrite) when constructing T.

Keyword arguments

  • requires_manifold=true: indicate whether the type constructor this factory wraps requires the manifold as first argument or not.

All other keyword arguments are internally stored to be used in the type constructor, as well as arguments and keyword arguments for the update rule.

see also

_produce_type

source
Manopt._produce_typeFunction
_produce_type(t::T, M::AbstractManifold)
_produce_type(t::ManifoldDefaultsFactory{T}, M::AbstractManifold)

Use the ManifoldDefaultsFactory{T} to produce an instance of type T. This acts transparently in the sense that if you already provide an instance t::T, it is just returned.

source
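A minimal sketch of the factory mechanism with a hypothetical type; MyStepsize is an illustrative assumption, not a Manopt type:

```julia
using Manopt, Manifolds

# hypothetical type whose constructor needs the manifold as first argument
struct MyStepsize
    length::Float64
end
MyStepsize(M::AbstractManifold; length=1.0) = MyStepsize(length)

# store arguments now, decide on the manifold later
fact = Manopt.ManifoldDefaultsFactory(MyStepsize; length=0.5)

s = Manopt._produce_type(fact, Sphere(2))  # now the manifold is known
# providing an instance instead of a factory is transparent:
s2 = Manopt._produce_type(s, Sphere(2))    # just returns s
```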
+Specify a Solver · Manopt.jl

Plans for solvers

For any optimisation performed in Manopt.jl information is required about both the optimisation task or “problem” at hand as well as the solver and all its parameters. This together is called a plan in Manopt.jl and it consists of two data structures:

  • The Manopt Problem describes all static data of a task, most prominently the manifold and the objective.
  • The Solver State describes all varying data and parameters for the solver that is used. This also means that each solver has its own data structure for the state.

By splitting these two parts, one problem can be define an then be solved using different solvers.

Still there might be the need to set certain parameters within any of these structures. For that there is

Manopt.set_parameter!Function
set_parameter!(f, element::Symbol , args...)

For any f and a Symbol e, dispatch on its value so by default, to set some args... in f or one of uts sub elements.

source
set_parameter!(element::Symbol, value::Union{String,Bool,<:Number})

Set global Manopt parameters addressed by a symbol element. W This first dispatches on the value of element.

The parameters are stored to the global settings using Preferences.jl.

Passing a value of "" deletes the corresponding entry from the preferences. Whenever the LocalPreferences.toml is modified, this is also issued as an @info.

source
set_parameter!(amo::AbstractManifoldObjective, element::Symbol, args...)

Set a certain args... from the AbstractManifoldObjective amo to value. This function should dispatch onVal(element)`.

Currently supported

source
set_parameter!(ams::AbstractManoptProblem, element::Symbol, field::Symbol , value)

Set a certain field/element from the AbstractManoptProblem ams to value. This function usually dispatches on Val(element). Instead of a single field, also a chain of elements can be provided, allowing to access encapsulated parts of the problem.

Main values for element are :Manifold and :Objective.

source
set_parameter!(ams::DebugSolverState, ::Val{:Debug}, args...)

Set certain values specified by args... into the elements of the debugDictionary

source
set_parameter!(ams::RecordSolverState, ::Val{:Record}, args...)

Set certain values specified by args... into the elements of the recordDictionary

source
set_parameter!(c::StopAfter, :MaxTime, v::Period)

Update the time period after which an algorithm shall stop.

source
set_parameter!(c::StopAfterIteration, :;MaxIteration, v::Int)

Update the number of iterations after which the algorithm should stop.

source
set_parameter!(c::StopWhenChangeLess, :MinIterateChange, v::Int)

Update the minimal change below which an algorithm shall stop.

source
set_parameter!(c::StopWhenCostLess, :MinCost, v)

Update the minimal cost below which the algorithm shall stop

source
set_parameter!(c::StopWhenEntryChangeLess, :Threshold, v)

Update the minimal cost below which the algorithm shall stop

source
set_parameter!(c::StopWhenGradientChangeLess, :MinGradientChange, v)

Update the minimal change below which an algorithm shall stop.

source
set_parameter!(c::StopWhenGradientNormLess, :MinGradNorm, v::Float64)

Update the minimal gradient norm when an algorithm shall stop

source
set_parameter!(c::StopWhenStepsizeLess, :MinStepsize, v)

Update the minimal step size below which the algorithm shall stop

source
set_parameter!(c::StopWhenSubgradientNormLess, :MinSubgradNorm, v::Float64)

Update the minimal subgradient norm when an algorithm shall stop

source
set_parameter!(ams::AbstractManoptSolverState, element::Symbol, args...)

Set a certain field or semantic element from the AbstractManoptSolverState ams to value. This function passes to Val(element) and specific setters should dispatch on Val{element}.

By default, this function just does nothing.

source
set_parameter!(ams::DebugSolverState, ::Val{:SubProblem}, args...)

Set certain values specified by args... to the sub problem.

source
set_parameter!(ams::DebugSolverState, ::Val{:SubState}, args...)

Set certain values specified by args... to the sub state.

source
set_parameter!(c::StopWhenResidualIsReducedByFactorOrPower, :ResidualPower, v)

Update the residual Power θ to v.

source
set_parameter!(c::StopWhenResidualIsReducedByFactorOrPower, :ResidualFactor, v)

Update the residual Factor κ to v.

source
Manopt.get_parameterFunction
get_parameter(f, element::Symbol, args...)

Access arbitrary parameters from f addressed by a symbol element.

For any f and a Symbol e dispatch on its value by default, to get some element from f potentially further qualified by args....

This functions returns nothing if f does not have the property element

source
get_parameter(element::Symbol; default=nothing)

Access global Manopt parameters addressed by a symbol element. This first dispatches on the value of element.

If the value is not set, default is returned.

The parameters are queried from the global settings using Preferences.jl, so they are persistent within your activated Environment.

Currently used settings

:Mode the mode can be set to "Tutorial" to get several hints especially in scenarios, where the optimisation on manifolds is different from the usual “experience” in (classical, Euclidean) optimization. Any other value has the same effect as not setting it.

source
Manopt.status_summaryFunction
status_summary(e)

Return a string reporting about the current status of e, where e is a type from Manopt.

This method is similar to show but just returns a string. It might also be more verbose in explaining, or hide internal information.

source

The following symbols are used.

Symbol | Used in | Description
:Activity | DebugWhenActive | activity of the debug action stored within
:Basepoint | TangentSpace | the point the tangent space is at
:Cost | generic | the cost function (within an objective, as pass down)
:Debug | DebugSolverState | the stored debugDictionary
:Gradient | generic | the gradient function (within an objective, as pass down)
:Iterate | generic | the (current) iterate, similar to set_iterate!, within a state
:Manifold | generic | the manifold (within a problem, as pass down)
:Objective | generic | the objective (within a problem, as pass down)
:SubProblem | generic | the sub problem (within a state, as pass down)
:SubState | generic | the sub state (within a state, as pass down)
:λ | ProximalDCCost, ProximalDCGrad | set the proximal parameter within the proximal sub objective elements
:Population | ParticleSwarmState | a certain population of points, for example particle_swarm's swarm
:Record | RecordSolverState |
:TrustRegionRadius | TrustRegionsState | the trust region radius
:ρ, :u | ExactPenaltyCost, ExactPenaltyGrad | parameters within the exact penalty objective
:ρ, :μ, :λ | AugmentedLagrangianCost, AugmentedLagrangianGrad | parameters of the Lagrangian function
:p, :X | LinearizedDCCost, LinearizedDCGrad | parameters within the linearized functional used for the sub problem of the difference of convex algorithm

Any other lower case name or letter, as well as single upper case letters, accesses fields of the corresponding first argument. For example, :p could be used to access the field s.p of a state. This is often where the iterate is stored, so the recommended way is to use :Iterate from before.

Since the iterate is often stored in the state's field s.p, one could also access the iterate with :p and, similarly, the gradient with :X. This is discouraged, both for readability and to stay more generic; it is recommended to use :Iterate and :Gradient instead in generic settings.

You can further activate a “Tutorial” mode by set_parameter!(:Mode, "Tutorial"). Internally, the following convenience function is available.

Manopt.is_tutorial_mode — Function
is_tutorial_mode()

A small internal helper to indicate whether tutorial mode is active.

You can set the mode by calling set_parameter!(:Mode, "Tutorial") or deactivate it by set_parameter!(:Mode, "").

source

A factory for providing manifold defaults

In several cases a manifold might not yet be known at the time a (keyword) argument should be provided. Therefore, any type with a manifold default can be wrapped into a factory.

Manopt.ManifoldDefaultsFactory — Type
ManifoldDefaultsFactory{M,T,A,K}

A generic factory to postpone the instantiation of certain types from within Manopt.jl, in order to adapt them to defaults from different manifolds and/or postpone the decision of which manifold to use to a later point.

For now this is established for

This factory stores necessary and optional parameters as well as keyword arguments provided by the user to later produce the type this factory is for.

Besides a manifold as a fallback, the factory can also be used for the (maybe simpler) types from the list of types that do not require the manifold.

Fields

  • M::Union{Nothing,AbstractManifold}: provide a manifold for defaults
  • args::A: arguments (args...) that are passed to the type constructor
  • kwargs::K: keyword arguments (kwargs...) that are passed to the type constructor
  • constructor_requires_manifold::Bool: indicate whether the type constructor requires the manifold or not

Constructor

ManifoldDefaultsFactory(T, args...; kwargs...)
ManifoldDefaultsFactory(T, M, args...; kwargs...)

Input

  • T a subtype of types listed above that this factory is to produce
  • M (optional) a manifold used for the defaults in case no manifold is provided.
  • args... arguments to pass to the constructor of T
  • kwargs... keyword arguments to pass (overwrite) when constructing T.

Keyword arguments

  • requires_manifold=true: indicate whether the type constructor this factory wraps requires the manifold as first argument or not.

All other keyword arguments are internally stored to be used in the type constructor, as well as arguments and keyword arguments for the update rule.

see also

_produce_type

source
Manopt._produce_type — Function
_produce_type(t::T, M::AbstractManifold)
_produce_type(t::ManifoldDefaultsFactory{T}, M::AbstractManifold)

Use the ManifoldDefaultsFactory{T} to produce an instance of type T. This acts transparently in the sense that if you already provide an instance t::T, it is just returned.

source
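The factory idea can be sketched in plain Julia (not the Manopt implementation; `StepRule` and `DefaultsFactory` are hypothetical stand-ins): arguments are stored now, and the instance is only produced once the manifold is known, while existing instances pass through unchanged.

```julia
# A hypothetical type whose constructor takes the manifold first.
struct StepRule
    manifold::String
    length::Float64
end

# Store constructor arguments and keyword arguments for later.
struct DefaultsFactory{T}
    args::Tuple
    kwargs::NamedTuple
end
DefaultsFactory(T::Type, args...; kwargs...) = DefaultsFactory{T}(args, (; kwargs...))

# Produce the type, passing the manifold first.
_produce_type(f::DefaultsFactory{T}, M) where {T} = T(M, f.args...; f.kwargs...)
# Transparent for already-constructed instances: just return them.
_produce_type(t, M) = t

f = DefaultsFactory(StepRule, 0.5)
r = _produce_type(f, "Sphere(2)")                        # now the manifold is known
r2 = _produce_type(StepRule("Eucl", 1.0), "Sphere(2)")   # passes through unchanged
```

The two `_produce_type` methods together are what lets a solver accept either a ready-made object or a factory for one in the same keyword argument.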

A manifold objective

The Objective describes the actual cost function and all its properties.

Manopt.AbstractManifoldObjective — Type
AbstractManifoldObjective{E<:AbstractEvaluationType}

Describe the collection of the optimization function $f: \mathcal M → ℝ$ (or even a vectorial range) and its corresponding elements, which might for example be a gradient or (one or more) proximal maps.

All these elements should usually be implemented as functions (M, p) -> ..., or (M, X, p) -> ... that is

  • the first argument of these functions should be the manifold M they are defined on
  • the argument X is present, if the computation is performed in-place of X (see InplaceEvaluation)
  • the argument p is the place the function ($f$ or one of its elements) is evaluated at.

The type E indicates the global AbstractEvaluationType.

source

The objective has two main possibilities for its containing functions concerning the evaluation mode: not necessarily for the cost, but for example for the gradient in an AbstractManifoldGradientObjective.

Decorators for objectives

An objective can be decorated using the following trait and function to initialize

Manopt.dispatch_objective_decorator — Function
dispatch_objective_decorator(o::AbstractManoptSolverState)

Indicate internally whether an AbstractManifoldObjective o is of decorating type, that is, whether it stores (encapsulates) an object in itself, by default in the field o.objective.

Decorators indicate this by returning Val{true} for further dispatch.

The default is Val{false}, so by default an objective is not decorated.

source
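The decorator trait can be sketched in plain Julia (not the Manopt implementation; the types here are hypothetical stand-ins): a Val-based trait marks decorators, and a small helper unwraps down to the base objective.

```julia
abstract type Obj end
struct BaseObj <: Obj end
struct Decorated{O<:Obj} <: Obj
    objective::O        # the wrapped (decorated) objective
end

dispatch_decorator(::Obj) = Val(false)        # default: not decorated
dispatch_decorator(::Decorated) = Val(true)   # decorators opt in

# Unwrap recursively, steered by the trait value.
undecorate(o::Obj) = _undecorate(o, dispatch_decorator(o))
_undecorate(o, ::Val{false}) = o
_undecorate(o, ::Val{true}) = undecorate(o.objective)

o = Decorated(Decorated(BaseObj()))
base = undecorate(o)   # strips both decorator layers
```

Because the trait returns a Val, the unwrapping is resolved by dispatch rather than by runtime `if` checks, which is the pattern the docstring above refers to.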
Manopt.decorate_objective! — Function
decorate_objective!(M, o::AbstractManifoldObjective)

decorate the AbstractManifoldObjective o with specific decorators.

Optional arguments

optional arguments provide necessary details on the decorators. A specific one is used to activate certain decorators.

  • cache=missing: specify a cache. Currently :Simple is supported, and :LRU if you load LRUCache.jl. For the latter, a tuple specifying what to cache and how many values to keep can be provided. For example, (:LRU, [:Cost, :Gradient], 10) states that the last 10 used cost function and gradient evaluations should be stored. See objective_cache_factory for details.
  • count=missing: specify calls to the objective to be counted; see ManifoldCountObjective for the full list
  • objective_type=:Riemannian: specify that an objective is :Riemannian or :Euclidean. The :Euclidean symbol is equivalent to specifying it as :Embedded, since in the end both refer to converting an objective from the embedding (whether it is Euclidean or not) to the Riemannian one.

See also

objective_cache_factory

source

Embedded objectives

Manopt.EmbeddedManifoldObjective — Type
EmbeddedManifoldObjective{P, T, E, O2, O1<:AbstractManifoldObjective{E}} <:
    AbstractDecoratedManifoldObjective{E,O2}

Declare an objective to be defined in the embedding. This also declares the gradient to be defined in the embedding, and especially to be the Riesz representer with respect to the metric in the embedding. The types can be used to still dispatch on the undecorated objective type O2.

Fields

  • objective: the objective that is defined in the embedding
  • p=nothing: a point in the embedding.
  • X=nothing: a tangent vector in the embedding

When a point in the embedding p is provided, embed! is used in place of this point to reduce memory allocations. Similarly, X is used when embedding tangent vectors.

source

Cache objective

Since single function calls, for example to the cost or the gradient, might be expensive, a simple cache objective exists as a decorator, that caches one cost value or gradient.

It can be activated/used with the cache= keyword argument available for every solver.

Manopt.reset_counters! — Function
reset_counters!(co::ManifoldCountObjective, value::Integer=0)

Reset all values in the count objective to value.

source
Manopt.objective_cache_factory — Function
objective_cache_factory(M::AbstractManifold, o::AbstractManifoldObjective, cache::Symbol)

Generate a cached variant of the AbstractManifoldObjective o on the AbstractManifold M based on the symbol cache.

The following caches are available

  • :Simple generates a SimpleManifoldCachedObjective
  • :LRU generates a ManifoldCachedObjective, where you should use the form (:LRU, [:Cost, :Gradient]) to specify what should be cached, or (:LRU, [:Cost, :Gradient], 100) to specify the cache size. The plain :LRU defaults to (:LRU, [:Cost, :Gradient], 100), caching up to 100 cost and gradient values.
source
objective_cache_factory(M::AbstractManifold, o::AbstractManifoldObjective, cache::Tuple{Symbol, Array, Array})
objective_cache_factory(M::AbstractManifold, o::AbstractManifoldObjective, cache::Tuple{Symbol, Array})

Generate a cached variant of the AbstractManifoldObjective o on the AbstractManifold M based on the symbol cache[1], where the second element cache[2] contains further arguments for the cache and the optional third element is passed down as keyword arguments.

For all available caches see the simpler variant with symbols.

source

A simple cache

A first generic cache is always available, but it only caches one gradient and one cost function evaluation (for the same point).

Manopt.SimpleManifoldCachedObjective — Type
 SimpleManifoldCachedObjective{O<:AbstractManifoldGradientObjective{E,TC,TG}, P, T,C} <: AbstractManifoldGradientObjective{E,TC,TG}

Provide a simple cache for an AbstractManifoldGradientObjective. For a given point p, this cache stores the point p, the gradient $\operatorname{grad} f(p)$ in X, and the cost value $f(p)$ in c.

Both X and c are accompanied by booleans to keep track of their validity.

Constructor

SimpleManifoldCachedObjective(M::AbstractManifold, obj::AbstractManifoldGradientObjective; kwargs...)

Keyword arguments

  • p=rand(M): a point on the manifold to initialize the cache with
  • X=get_gradient(M, obj, p) or zero_vector(M, p): a tangent vector to store the gradient in, see also initialized=
  • c=get_cost(M, obj, p) or 0.0: a value to store the cost function in, see also initialized=
  • initialized=true: whether to initialize the cached X and c or not.
source
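The one-point caching strategy can be sketched in plain Julia (not the Manopt implementation; here a number stands in for a manifold point, and `SimpleCache` is a hypothetical stand-in): a cost and a gradient slot, each valid only for the last point seen, with a call counter to show the cache hit.

```julia
mutable struct SimpleCache{F,G}
    f::F; grad::G
    p::Float64          # cached point
    c::Float64          # cached cost value
    X::Float64          # cached gradient value
    c_valid::Bool
    X_valid::Bool
    calls::Int          # count actual evaluations of f
end
SimpleCache(f, grad) = SimpleCache(f, grad, NaN, NaN, NaN, false, false, 0)

function get_cost(sc::SimpleCache, p)
    (sc.c_valid && p == sc.p) && return sc.c   # cache hit: same point, valid value
    sc.calls += 1
    sc.p, sc.c = p, sc.f(p)
    sc.c_valid, sc.X_valid = true, false       # the cached gradient is now stale
    return sc.c
end

sc = SimpleCache(x -> x^2, x -> 2x)
get_cost(sc, 3.0)
get_cost(sc, 3.0)   # second call at the same point is served from the cache
```

Note how moving to a new point invalidates the gradient slot too: both cached values are only meaningful for the single stored point.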

A generic cache

For the more advanced cache, you need to implement some type of cache yourself that provides a get!, and implement init_caches for it. This is, for example, provided if you load LRUCache.jl. Then you obtain

Manopt.ManifoldCachedObjective — Type
ManifoldCachedObjective{E,P,O<:AbstractManifoldObjective{<:E},C<:NamedTuple{}} <: AbstractDecoratedManifoldObjective{E,P}

Create a cache for an objective, based on a NamedTuple that stores some kind of cache.

Constructor

ManifoldCachedObjective(M, o::AbstractManifoldObjective, caches::Vector{Symbol}; kwargs...)

Create a cache for the AbstractManifoldObjective, where the Symbols in caches indicate which function evaluations to cache.

Supported symbols

Symbol | Caches calls to (incl. ! variants) | Comment
:Cost | get_cost |
:EqualityConstraint | get_equality_constraint(M, p, i) |
:EqualityConstraints | get_equality_constraint(M, p, :) |
:GradEqualityConstraint | get_grad_equality_constraint | tangent vector per (p,i)
:GradInequalityConstraint | get_grad_inequality_constraint | tangent vector per (p,i)
:Gradient | get_gradient(M,p) | tangent vectors
:Hessian | get_hessian | tangent vectors
:InequalityConstraint | get_inequality_constraint(M, p, j) |
:InequalityConstraints | get_inequality_constraint(M, p, :) |
:Preconditioner | get_preconditioner | tangent vectors
:ProximalMap | get_proximal_map | point per (p,λ,i)
:StochasticGradients | get_gradients | vector of tangent vectors
:StochasticGradient | get_gradient(M, p, i) | tangent vector per (p,i)
:SubGradient | get_subgradient | tangent vectors
:SubtrahendGradient | get_subtrahend_gradient | tangent vectors

Keyword arguments

  • p=rand(M): the type of the keys to be used in the caches. Defaults to the default representation on M.
  • value=get_cost(M, objective, p): the type of values for numeric values in the cache
  • X=zero_vector(M,p): the type of values to be cached for gradient and Hessian calls.
  • cache=[:Cost]: a vector of symbols indicating which function calls should be cached.
  • cache_size=10: number of (least recently used) calls to cache
  • cache_sizes=Dict{Symbol,Int}(): a named tuple or dictionary specifying the sizes individually for each cache.
source
Manopt.init_caches — Function
init_caches(caches, T::Type{LRU}; kwargs...)

Given a vector of symbols caches, this function sets up the NamedTuple of caches, where T is the type of cache to use.

Keyword arguments

  • p=rand(M): a point on a manifold, to both infer its type for keys and initialize caches
  • value=0.0: a value for both typing and initialising number caches; the default is for (Float) values like the cost.
  • X=zero_vector(M, p): a tangent vector at p to both type and initialize tangent vector caches
  • cache_size=10: a default cache size to use
  • cache_sizes=Dict{Symbol,Int}(): a dictionary of sizes for the caches to specify different (non-default) sizes
source
init_caches(M::AbstractManifold, caches, T; kwargs...)

Given a vector of symbols caches, this function sets up the NamedTuple of caches for points/vectors on M, where T is the type of cache to use.

source
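The setup above can be sketched without LRUCache.jl (all names here are hypothetical stand-ins, not the Manopt or LRUCache API): a minimal least-recently-used cache supporting get!, and an init_caches-like helper building a NamedTuple with one cache per requested symbol.

```julia
mutable struct MiniLRU{K,V}
    data::Dict{K,V}
    order::Vector{K}     # front = least recently used
    maxsize::Int
end
MiniLRU{K,V}(; maxsize::Int=10) where {K,V} = MiniLRU{K,V}(Dict{K,V}(), K[], maxsize)

function Base.get!(default::Function, c::MiniLRU{K,V}, k::K) where {K,V}
    if haskey(c.data, k)
        deleteat!(c.order, findfirst(==(k), c.order))   # refresh recency
    else
        # evict the least recently used entry when full, then compute
        length(c.order) >= c.maxsize && delete!(c.data, popfirst!(c.order))
        c.data[k] = default()
    end
    push!(c.order, k)
    return c.data[k]
end

# Build a NamedTuple holding one cache per requested symbol.
init_caches(symbols; maxsize=10) =
    NamedTuple{Tuple(symbols)}(ntuple(_ -> MiniLRU{Float64,Float64}(; maxsize), length(symbols)))

caches = init_caches([:Cost, :Gradient]; maxsize=2)
v = get!(() -> 2.0^2, caches[:Cost], 2.0)   # computed once, then served from cache
```

Because the objective only ever calls get! with the point as key, swapping the cache type (here MiniLRU, in practice LRU from LRUCache.jl) requires no change to the objective wrapper itself.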

Count objective

Manopt.ManifoldCountObjective — Type
ManifoldCountObjective{E,P,O<:AbstractManifoldObjective,I<:Integer} <: AbstractDecoratedManifoldObjective{E,P}

A wrapper for any AbstractManifoldObjective of type O to count different calls to parts of the objective.

Fields

  • counts a dictionary of symbols mapping to integers keeping the counted values
  • objective the wrapped objective

Supported symbols

Symbol | Counts calls to (incl. ! variants) | Comment
:Cost | get_cost |
:EqualityConstraint | get_equality_constraint | requires a vector of counters
:EqualityConstraints | get_equality_constraint | when evaluating all of them with :
:GradEqualityConstraint | get_grad_equality_constraint | requires a vector of counters
:GradEqualityConstraints | get_grad_equality_constraint | when evaluating all of them with :
:GradInequalityConstraint | get_grad_inequality_constraint | requires a vector of counters
:GradInequalityConstraints | get_grad_inequality_constraint | when evaluating all of them with :
:Gradient | get_gradient(M,p) |
:Hessian | get_hessian |
:InequalityConstraint | get_inequality_constraint | requires a vector of counters
:InequalityConstraints | get_inequality_constraint | when evaluating all of them with :
:Preconditioner | get_preconditioner |
:ProximalMap | get_proximal_map |
:StochasticGradients | get_gradients |
:StochasticGradient | get_gradient(M, p, i) |
:SubGradient | get_subgradient |
:SubtrahendGradient | get_subtrahend_gradient |

Constructors

ManifoldCountObjective(objective::AbstractManifoldObjective, counts::Dict{Symbol, <:Integer})

Initialise the ManifoldCountObjective to wrap objective initializing the set of counts

ManifoldCountObjective(M::AbstractManifold, objective::AbstractManifoldObjective, count::AbstractVector{Symbol}, init=0)

Count function calls on objective using the symbols in count initialising all entries to init.

source
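The counting wrapper can be sketched in plain Julia (not the Manopt implementation; `CountObjective` and its accessors are hypothetical stand-ins): an objective wrapper that increments a Dict{Symbol,Int} entry on every call.

```julia
struct CountObjective{F,G}
    f::F; grad::G
    counts::Dict{Symbol,Int}
end
CountObjective(f, grad; syms=[:Cost, :Gradient]) =
    CountObjective(f, grad, Dict(s => 0 for s in syms))

function get_cost(co::CountObjective, p)
    co.counts[:Cost] += 1     # count the call, then delegate
    return co.f(p)
end
function get_gradient(co::CountObjective, p)
    co.counts[:Gradient] += 1
    return co.grad(p)
end

co = CountObjective(x -> x^2, x -> 2x)
get_cost(co, 1.0); get_cost(co, 2.0)
get_gradient(co, 1.0)
```

Since the wrapper only intercepts the access functions, it composes with other decorators: placing it outside a cache counts cache hits too, placing it inside counts only actual evaluations.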

Internal decorators

Manopt.ReturnManifoldObjective — Type
ReturnManifoldObjective{E,O2,O1<:AbstractManifoldObjective{E}} <:
    AbstractDecoratedManifoldObjective{E,O2}

A wrapper to indicate that get_solver_result should return the inner objective.

The types are such that one can still dispatch on the undecorated type O2 of the original objective as well.

source

Specific objective types and their access functions

Cost objective

Manopt.ManifoldCostObjective — Type
ManifoldCostObjective{T, TC} <: AbstractManifoldCostObjective{T, TC}

Specify an AbstractManifoldObjective that only has information about the cost function $f: \mathcal M → ℝ$, implemented as a function (M, p) -> c to compute the cost value c at p on the manifold M.

  • cost: a function $f: \mathcal M → ℝ$ to minimize

Constructors

ManifoldCostObjective(f)

Generate a problem. While this Problem does not have any allocating functions, the type T can be set for consistency reasons with other problems.

Used with

NelderMead, particle_swarm

source

Access functions

Manopt.get_cost — Function
get_cost(amp::AbstractManoptProblem, p)

evaluate the cost function f stored within the AbstractManifoldObjective of an AbstractManoptProblem amp at the point p.

source
get_cost(M::AbstractManifold, obj::AbstractManifoldObjective, p)

evaluate the cost function f defined on M stored within the AbstractManifoldObjective at the point p.

source
get_cost(M::AbstractManifold, mco::AbstractManifoldCostObjective, p)

Evaluate the cost function from within the AbstractManifoldCostObjective on M at p.

By default this implementation assumes that the cost is stored within mco.cost.

source
get_cost(TpM, trmo::TrustRegionModelObjective, X)

Evaluate the tangent space TrustRegionModelObjective

\[m(X) = f(p) + ⟨\operatorname{grad} f(p), X ⟩_p + \frac{1}{2} ⟨\operatorname{Hess} f(p)[X], X⟩_p.\]

source
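As a worked Euclidean example (an assumption for illustration only, not the Manopt API: tangent vectors are plain vectors here), the quadratic model $m(X) = f(p) + ⟨\operatorname{grad} f(p), X⟩ + \frac{1}{2}⟨\operatorname{Hess} f(p)[X], X⟩$ can be evaluated directly for $f(x) = x_1^2 + 2x_2^2$ at $p = (1, 1)$:

```julia
using LinearAlgebra

f(p) = p[1]^2 + 2p[2]^2
grad_f(p) = [2p[1], 4p[2]]
hess_f(p, X) = [2X[1], 4X[2]]      # the Hessian applied to X

# The quadratic model from the docstring above, in the Euclidean inner product.
model_cost(p, X) = f(p) + dot(grad_f(p), X) + 0.5 * dot(hess_f(p, X), X)

p = [1.0, 1.0]
m0 = model_cost(p, [0.0, 0.0])   # at X = 0 the model equals f(p) = 3
m1 = model_cost(p, [1.0, 0.0])   # 3 + 2 + 1 = 6
```

On a manifold the same three terms appear, but X is a tangent vector at p and the inner products are taken in the Riemannian metric at p.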
get_cost(TpM, trmo::AdaptiveRagularizationWithCubicsModelObjective, X)

Evaluate the tangent space AdaptiveRagularizationWithCubicsModelObjective

\[m(X) = f(p) + ⟨\operatorname{grad} f(p), X ⟩_p + \frac{1}{2} ⟨\operatorname{Hess} f(p)[X], X⟩_p + \frac{σ}{3} \lVert X \rVert^3,\]

at X, cf. Eq. (33) in [ABBC20].

source
get_cost(TpM::TangentSpace, slso::SymmetricLinearSystemObjective, X)

evaluate the cost

\[f(X) = \frac{1}{2} \lVert \mathcal A[X] + b \rVert_{p}^2,\qquad X ∈ T_{p}\mathcal M,\]

at X.

source
get_cost(M::AbstractManifold, sgo::ManifoldStochasticGradientObjective, p, i)

Evaluate the ith summand of the cost.

If you use a single function for the stochastic cost, then only the index i=1 is available to evaluate the whole cost.

source
get_cost(M::AbstractManifold,emo::EmbeddedManifoldObjective, p)

Evaluate the cost function of an objective defined in the embedding by first embedding p before calling the cost function stored in the EmbeddedManifoldObjective.

source

and internally

Manopt.get_cost_function — Function
get_cost_function(amco::AbstractManifoldCostObjective)

return the function to evaluate (just) the cost $f(p)=c$ as a function (M,p) -> c.

source

Gradient objectives

Manopt.ManifoldGradientObjective — Type
ManifoldGradientObjective{T<:AbstractEvaluationType} <: AbstractManifoldGradientObjective{T}

specify an objective containing a cost and its gradient

Fields

  • cost: a function $f: \mathcal M → ℝ$
  • gradient!!: the gradient $\operatorname{grad}f: \mathcal M → \mathcal T\mathcal M$ of the cost function $f$.

Depending on the AbstractEvaluationType T, the gradient can have two forms.

Constructors

ManifoldGradientObjective(cost, gradient; evaluation=AllocatingEvaluation())

Used with

gradient_descent, conjugate_gradient_descent, quasi_Newton

source
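The two evaluation forms mentioned above can be sketched in plain Julia (a Euclidean sketch, not the Manopt API; the manifold argument is a placeholder here): an allocating (M, p) -> X variant and an in-place (M, X, p) -> X variant.

```julia
# Allocating form: returns a new tangent vector (AllocatingEvaluation).
grad_alloc(M, p) = 2 .* p

# In-place form: writes into X and returns it (InplaceEvaluation).
function grad_inplace!(M, X, p)
    X .= 2 .* p
    return X
end

M = nothing            # manifold placeholder; unused in this Euclidean sketch
p = [1.0, 2.0]
X = zeros(2)
Y = grad_alloc(M, p)   # allocates a fresh vector
grad_inplace!(M, X, p) # reuses X, avoiding the allocation
```

Solvers that run many iterations prefer the in-place form, since reusing one tangent vector per iteration avoids repeated allocations in the hot loop.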
Manopt.ManifoldAlternatingGradientObjective — Type
ManifoldAlternatingGradientObjective{E<:AbstractEvaluationType,TCost,TGradient} <: AbstractManifoldGradientObjective{E}

An alternating gradient objective consists of

  • a cost function $F(x)$
  • a gradient $\operatorname{grad}F$ that is either
    • given as one function $\operatorname{grad}F$ returning a tangent vector X on M or
    • an array of gradient functions $\operatorname{grad}F_i$, i=1,…,n, each returning a component of the gradient,
    which might be allocating or mutating variants, but not a mix of both.
Note

This objective is usually defined using the ProductManifold from Manifolds.jl, so Manifolds.jl needs to be loaded.

Constructors

ManifoldAlternatingGradientObjective(F, gradF::Function;
+Objective · Manopt.jl

A manifold objective

The Objective describes that actual cost function and all its properties.

Manopt.AbstractManifoldObjectiveType
AbstractManifoldObjective{E<:AbstractEvaluationType}

Describe the collection of the optimization function $f: \mathcal M → ℝ$ (or even a vectorial range) and its corresponding elements, which might for example be a gradient or (one or more) proximal maps.

All these elements should usually be implemented as functions (M, p) -> ..., or (M, X, p) -> ... that is

  • the first argument of these functions should be the manifold M they are defined on
  • the argument X is present, if the computation is performed in-place of X (see InplaceEvaluation)
  • the argument p is the place the function ($f$ or one of its elements) is evaluated at.

the type T indicates the global AbstractEvaluationType.

source

Which has two main different possibilities for its containing functions concerning the evaluation mode, not necessarily the cost, but for example gradient in an AbstractManifoldGradientObjective.

Decorators for objectives

An objective can be decorated using the following trait and function to initialize

Manopt.dispatch_objective_decoratorFunction
dispatch_objective_decorator(o::AbstractManoptSolverState)

Indicate internally, whether an AbstractManifoldObjective o to be of decorating type, it stores (encapsulates) an object in itself, by default in the field o.objective.

Decorators indicate this by returning Val{true} for further dispatch.

The default is Val{false}, so by default an state is not decorated.

source
Manopt.decorate_objective!Function
decorate_objective!(M, o::AbstractManifoldObjective)

decorate the AbstractManifoldObjectiveo with specific decorators.

Optional arguments

optional arguments provide necessary details on the decorators. A specific one is used to activate certain decorators.

  • cache=missing: specify a cache. Currently :Simple is supported and :LRU if you load LRUCache.jl. For this case a tuple specifying what to cache and how many can be provided, has to be specified. For example (:LRU, [:Cost, :Gradient], 10) states that the last 10 used cost function evaluations and gradient evaluations should be stored. See objective_cache_factory for details.
  • count=missing: specify calls to the objective to be called, see ManifoldCountObjective for the full list
  • objective_type=:Riemannian: specify that an objective is :Riemannian or :Euclidean. The :Euclidean symbol is equivalent to specifying it as :Embedded, since in the end, both refer to converting an objective from the embedding (whether its Euclidean or not) to the Riemannian one.

See also

objective_cache_factory

source

Embedded objectives

Manopt.EmbeddedManifoldObjectiveType
EmbeddedManifoldObjective{P, T, E, O2, O1<:AbstractManifoldObjective{E}} <:
+   AbstractDecoratedManifoldObjective{E,O2}

Declare an objective to be defined in the embedding. This also declares the gradient to be defined in the embedding, and especially being the Riesz representer with respect to the metric in the embedding. The types can be used to still dispatch on also the undecorated objective type O2.

Fields

  • objective: the objective that is defined in the embedding
  • p=nothing: a point in the embedding.
  • X=nothing: a tangent vector in the embedding

When a point in the embedding p is provided, embed! is used in place of this point to reduce memory allocations. Similarly X is used when embedding tangent vectors

source

Cache objective

Since single function calls, for example to the cost or the gradient, might be expensive, a simple cache objective exists as a decorator, that caches one cost value or gradient.

It can be activated/used with the cache= keyword argument available for every solver.

Manopt.reset_counters!Function
reset_counters(co::ManifoldCountObjective, value::Integer=0)

Reset all values in the count objective to value.

source
Manopt.objective_cache_factoryFunction
objective_cache_factory(M::AbstractManifold, o::AbstractManifoldObjective, cache::Symbol)

Generate a cached variant of the AbstractManifoldObjective o on the AbstractManifold M based on the symbol cache.

The following caches are available

  • :Simple generates a SimpleManifoldCachedObjective
  • :LRU generates a ManifoldCachedObjective where you should use the form (:LRU, [:Cost, :Gradient]) to specify what should be cached or (:LRU, [:Cost, :Gradient], 100) to specify the cache size. Here this variant defaults to (:LRU, [:Cost, :Gradient], 100), caching up to 100 cost and gradient values.[1]
source
objective_cache_factory(M::AbstractManifold, o::AbstractManifoldObjective, cache::Tuple{Symbol, Array, Array})
+objective_cache_factory(M::AbstractManifold, o::AbstractManifoldObjective, cache::Tuple{Symbol, Array})

Generate a cached variant of the AbstractManifoldObjective o on the AbstractManifold M based on the symbol cache[1], where the second element cache[2] are further arguments to the cache and the optional third is passed down as keyword arguments.

For all available caches see the simpler variant with symbols.

source

A simple cache

A first generic cache is always available, but it only caches one gradient and one cost function evaluation (for the same point).

Manopt.SimpleManifoldCachedObjectiveType
 SimpleManifoldCachedObjective{O<:AbstractManifoldGradientObjective{E,TC,TG}, P, T,C} <: AbstractManifoldGradientObjective{E,TC,TG}

Provide a simple cache for an AbstractManifoldGradientObjective that is for a given point p this cache stores a point p and a gradient $\operatorname{grad} f(p)$ in X as well as a cost value $f(p)$ in c.

Both X and c are accompanied by booleans to keep track of their validity.

Constructor

SimpleManifoldCachedObjective(M::AbstractManifold, obj::AbstractManifoldGradientObjective; kwargs...)

Keyword arguments

  • p=rand(M): a point on the manifold to initialize the cache with
  • X=get_gradient(M, obj, p) or zero_vector(M,p): a tangent vector to store the gradient in, see also initialize=
  • c=[get_cost](@ref)(M, obj, p)or0.0: a value to store the cost function ininitialize`
  • initialized=true: whether to initialize the cached X and c or not.
source

A generic cache

For the more advanced cache, you need to implement some type of cache yourself, that provides a get! and implement init_caches. This is for example provided if you load LRUCache.jl. Then you obtain

Manopt.ManifoldCachedObjectiveType
ManifoldCachedObjective{E,P,O<:AbstractManifoldObjective{<:E},C<:NamedTuple{}} <: AbstractDecoratedManifoldObjective{E,P}

Create a cache for an objective, based on a NamedTuple that stores some kind of cache.

Constructor

ManifoldCachedObjective(M, o::AbstractManifoldObjective, caches::Vector{Symbol}; kwargs...)

Create a cache for the AbstractManifoldObjective where the Symbols in caches indicate, which function evaluations to cache.

Supported symbols

SymbolCaches calls to (incl. ! variants)Comment
:Costget_cost
:EqualityConstraintget_equality_constraint(M, p, i)
:EqualityConstraintsget_equality_constraint(M, p, :)
:GradEqualityConstraintget_grad_equality_constrainttangent vector per (p,i)
:GradInequalityConstraintget_inequality_constrainttangent vector per (p,i)
:Gradientget_gradient(M,p)tangent vectors
:Hessianget_hessiantangent vectors
:InequalityConstraintget_inequality_constraint(M, p, j)
:InequalityConstraintsget_inequality_constraint(M, p, :)
:Preconditionerget_preconditionertangent vectors
:ProximalMapget_proximal_mappoint per (p,λ,i)
:StochasticGradientsget_gradientsvector of tangent vectors
:StochasticGradientget_gradient(M, p, i)tangent vector per (p,i)
:SubGradientget_subgradienttangent vectors
:SubtrahendGradientget_subtrahend_gradienttangent vectors

Keyword arguments

  • p=rand(M): the type of the keys to be used in the caches. Defaults to the default representation on M.
  • value=get_cost(M, objective, p): the type of values for numeric values in the cache
  • X=zero_vector(M,p): the type of values to be cached for gradient and Hessian calls.
  • cache=[:Cost]: a vector of symbols indicating which function calls should be cached.
  • cache_size=10: number of (least recently used) calls to cache
  • cache_sizes=Dict{Symbol,Int}(): a named tuple or dictionary specifying the sizes individually for each cache.
source
Manopt.init_cachesFunction
init_caches(caches, T::Type{LRU}; kwargs...)

Given a vector of symbols caches, this function sets up the NamedTuple of caches, where T is the type of cache to use.

Keyword arguments

  • p=rand(M): a point on a manifold, to both infer its type for keys and initialize caches
  • value=0.0: a value both typing and initialising number-caches, the default is for (Float) values like the cost.
  • X=zero_vector(M, p): a tangent vector at p to both type and initialize tangent vector caches
  • cache_size=10: a default cache size to use
  • cache_sizes=Dict{Symbol,Int}(): a dictionary of sizes for the caches to specify different (non-default) sizes
source
init_caches(M::AbstractManifold, caches, T; kwargs...)

Given a vector of symbols caches, this function sets up the NamedTuple of caches for points/vectors on M, where T is the type of cache to use.

source

Count objective

Manopt.ManifoldCountObjectiveType
ManifoldCountObjective{E,P,O<:AbstractManifoldObjective,I<:Integer} <: AbstractDecoratedManifoldObjective{E,P}

A wrapper for any AbstractManifoldObjective of type O to count different calls to parts of the objective.

Fields

  • counts a dictionary of symbols mapping to integers keeping the counted values
  • objective the wrapped objective

Supported symbols

SymbolCounts calls to (incl. ! variants)Comment
:Costget_cost
:EqualityConstraintget_equality_constraintrequires vector of counters
:EqualityConstraintsget_equality_constraintwhen evaluating all of them with :
:GradEqualityConstraintget_grad_equality_constraintrequires vector of counters
:GradEqualityConstraintsget_grad_equality_constraintwhen evaluating all of them with :
:GradInequalityConstraintget_inequality_constraintrequires vector of counters
:GradInequalityConstraintsget_inequality_constraintwhen evaluating all of them with :
:Gradientget_gradient(M,p)
:Hessianget_hessian
:InequalityConstraintget_inequality_constraintrequires vector of counters
:InequalityConstraintsget_inequality_constraintwhen evaluating all of them with :
:Preconditionerget_preconditioner
:ProximalMapget_proximal_map
:StochasticGradientsget_gradients
:StochasticGradientget_gradient(M, p, i)
:SubGradientget_subgradient
:SubtrahendGradientget_subtrahend_gradient

Constructors

ManifoldCountObjective(objective::AbstractManifoldObjective, counts::Dict{Symbol, <:Integer})

Initialise the ManifoldCountObjective to wrap objective initializing the set of counts

ManifoldCountObjective(M::AbstractManifold, objective::AbstractManifoldObjective, count::AbstractVecor{Symbol}, init=0)

Count function calls on objective using the symbols in count initialising all entries to init.

source

Internal decorators

Manopt.ReturnManifoldObjectiveType
ReturnManifoldObjective{E,O2,O1<:AbstractManifoldObjective{E}} <:
+   AbstractDecoratedManifoldObjective{E,O2}

A wrapper to indicate that get_solver_result should return the inner objective.

The types are such that one can still dispatch on the undecorated type O2 of the original objective as well.

source

Specific Objective typed and their access functions

Cost objective

Manopt.ManifoldCostObjectiveType
ManifoldCostObjective{T, TC} <: AbstractManifoldCostObjective{T, TC}

specify an AbstractManifoldObjective that only has information about the cost function $f: \mathcal M → ℝ$, implemented as a function (M, p) -> c computing the cost value c at p on the manifold M.

  • cost: a function $f: \mathcal M → ℝ$ to minimize

Constructors

ManifoldCostObjective(f)

Generate a problem. While this Problem does not have any allocating functions, the type T can be set for consistency reasons with other problems.

Used with

NelderMead, particle_swarm

source

Access functions

Manopt.get_costFunction
get_cost(amp::AbstractManoptProblem, p)

evaluate the cost function f stored within the AbstractManifoldObjective of an AbstractManoptProblem amp at the point p.

source
get_cost(M::AbstractManifold, obj::AbstractManifoldObjective, p)

evaluate the cost function f defined on M stored within the AbstractManifoldObjective at the point p.

source
get_cost(M::AbstractManifold, mco::AbstractManifoldCostObjective, p)

Evaluate the cost function from within the AbstractManifoldCostObjective on M at p.

By default this implementation assumes that the cost is stored within mco.cost.

source
get_cost(TpM, trmo::TrustRegionModelObjective, X)

Evaluate the tangent space TrustRegionModelObjective

\[m(X) = f(p) + ⟨\operatorname{grad} f(p), X ⟩_p + \frac{1}{2} ⟨\operatorname{Hess} f(p)[X], X⟩_p.\]

source
get_cost(TpM, trmo::AdaptiveRagularizationWithCubicsModelObjective, X)

Evaluate the tangent space AdaptiveRagularizationWithCubicsModelObjective

\[m(X) = f(p) + ⟨\operatorname{grad} f(p), X ⟩_p + \frac{1}{2} ⟨\operatorname{Hess} f(p)[X], X⟩_p + \frac{σ}{3} \lVert X \rVert^3,\]

at X, cf. Eq. (33) in [ABBC20].

source
get_cost(TpM::TangentSpace, slso::SymmetricLinearSystemObjective, X)

evaluate the cost

\[f(X) = \frac{1}{2} \lVert \mathcal A[X] + b \rVert_{p}^2,\qquad X ∈ T_{p}\mathcal M,\]

at X.

source
get_cost(M::AbstractManifold, sgo::ManifoldStochasticGradientObjective, p, i)

Evaluate the ith summand of the cost.

If you use a single function for the stochastic cost, then only the index i=1 is available to evaluate the whole cost.

source
get_cost(M::AbstractManifold, emo::EmbeddedManifoldObjective, p)

Evaluate the cost function of an objective defined in the embedding by first embedding p before calling the cost function stored in the EmbeddedManifoldObjective.

source

and internally

Manopt.get_cost_functionFunction
get_cost_function(amco::AbstractManifoldCostObjective)

return the function to evaluate (just) the cost $f(p)=c$ as a function (M,p) -> c.

source

Gradient objectives

Manopt.ManifoldGradientObjectiveType
ManifoldGradientObjective{T<:AbstractEvaluationType} <: AbstractManifoldGradientObjective{T}

specify an objective containing a cost and its gradient

Fields

  • cost: a function $f: \mathcal M → ℝ$
  • gradient!!: the gradient $\operatorname{grad}f: \mathcal M → \mathcal T\mathcal M$ of the cost function $f$.

Depending on the AbstractEvaluationType T the gradient can have two forms

Constructors

ManifoldGradientObjective(cost, gradient; evaluation=AllocatingEvaluation())

Used with

gradient_descent, conjugate_gradient_descent, quasi_Newton

source
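A minimal sketch of the two evaluation forms, assuming Manopt.jl and Manifolds.jl are loaded; the cost is illustrative:

```julia
using Manopt, Manifolds

M = Euclidean(3)
f(M, p) = sum(p .^ 2)

# AllocatingEvaluation (default): (M, p) -> X returns a new tangent vector
grad_f(M, p) = 2 .* p
obj_alloc = ManifoldGradientObjective(f, grad_f)

# InplaceEvaluation: (M, X, p) -> X works in place of X
grad_f!(M, X, p) = (X .= 2 .* p; X)
obj_inplace = ManifoldGradientObjective(f, grad_f!; evaluation=InplaceEvaluation())

p = [1.0, 2.0, 3.0]
X = zero(p)
get_gradient!(M, X, obj_inplace, p)  # X is now [2.0, 4.0, 6.0]
```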
Manopt.ManifoldAlternatingGradientObjectiveType
ManifoldAlternatingGradientObjective{E<:AbstractEvaluationType,TCost,TGradient} <: AbstractManifoldGradientObjective{E}

An alternating gradient objective consists of

  • a cost function $F(x)$
  • a gradient $\operatorname{grad}F$ that is either
    • given as one function $\operatorname{grad}F$ returning a tangent vector X on M or
    • an array of gradient functions $\operatorname{grad}F_i$, $i=1,…,n$, each returning a component of the gradient
    which might be allocating or mutating variants, but not a mix of both.
Note

This objective is usually defined using the ProductManifold from Manifolds.jl, so Manifolds.jl needs to be loaded.

Constructors

ManifoldAlternatingGradientObjective(F, gradF::Function;
    evaluation=AllocatingEvaluation()
)
ManifoldAlternatingGradientObjective(F, gradF::AbstractVector{<:Function};
    evaluation=AllocatingEvaluation()
)

Create an alternating gradient problem with an optional cost and the gradient either as one function (returning an array) or a vector of functions.

source
Manopt.ManifoldStochasticGradientObjectiveType
ManifoldStochasticGradientObjective{T<:AbstractEvaluationType} <: AbstractManifoldGradientObjective{T}

A stochastic gradient objective consists of

  • a(n optional) cost function $f(p) = \displaystyle\sum_{i=1}^n f_i(p)$
  • an array of gradients, $\operatorname{grad}f_i(p), i=1,\ldots,n$ which can be given in two forms
    • as one single function $(\mathcal M, p) ↦ (X_1,…,X_n) ∈ (T_p\mathcal M)^n$
    • as a vector of functions $\bigl( (\mathcal M, p) ↦ X_1, …, (\mathcal M, p) ↦ X_n\bigr)$.

Where both variants can also be provided as InplaceEvaluation functions (M, X, p) -> X, where X is the vector of X1,...,Xn and (M, X1, p) -> X1, ..., (M, Xn, p) -> Xn, respectively.

Constructors

ManifoldStochasticGradientObjective(
    grad_f::Function;
    cost=Missing(),
    evaluation=AllocatingEvaluation()
)
ManifoldStochasticGradientObjective(
    grad_f::AbstractVector{<:Function};
    cost=Missing(), evaluation=AllocatingEvaluation()
)

Create a stochastic gradient problem with the gradient either as one function (returning an array of tangent vectors) or a vector of functions (each returning one tangent vector).

The optional cost can also be given as either a single function (returning a number) or a vector of functions, each returning a value.

Used with

stochastic_gradient_descent

Note that this can also be used with gradient_descent, since the (complete) gradient is just the sum of the single gradients.

source
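A sketch with the gradient given as a vector of functions, assuming Manopt.jl and Manifolds.jl are loaded; the summands are illustrative:

```julia
using Manopt, Manifolds

M = Euclidean(2)
# f(p) = f_1(p) + f_2(p) with f_i(p) = p[i]^2
f1(M, p) = p[1]^2
f2(M, p) = p[2]^2
grad_f1(M, p) = [2p[1], 0.0]
grad_f2(M, p) = [0.0, 2p[2]]

sgo = ManifoldStochasticGradientObjective([grad_f1, grad_f2]; cost=[f1, f2])

get_gradient(M, sgo, [1.0, 1.0], 2)  # gradient of the second summand
get_gradient(M, sgo, [1.0, 1.0])     # complete gradient, the sum of both
```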
Manopt.NonlinearLeastSquaresObjectiveType
NonlinearLeastSquaresObjective{T<:AbstractEvaluationType} <: AbstractManifoldObjective{T}

A type for nonlinear least squares problems. T is an AbstractEvaluationType for the F and Jacobian functions.

Specify a nonlinear least squares problem

Fields

  • f a function $f: \mathcal M → ℝ^d$ to minimize
  • jacobian!! Jacobian of the function $f$
  • jacobian_tangent_basis the basis of tangent space used for computing the Jacobian.
  • num_components number of values returned by f (equal to d).

Depending on the AbstractEvaluationType T the function $F$ has to be provided:

  • as a function (M::AbstractManifold, p) -> v that allocates memory for v itself for an AllocatingEvaluation,
  • as a function (M::AbstractManifold, v, p) -> v that works in place of v for an InplaceEvaluation.

Also the Jacobian $jacF!!$ is required:

  • as a function (M::AbstractManifold, p; basis_domain::AbstractBasis) -> v that allocates memory for v itself for an AllocatingEvaluation,
  • as a function (M::AbstractManifold, v, p; basis_domain::AbstractBasis) -> v that works in place of v for an InplaceEvaluation.

Constructors

NonlinearLeastSquaresProblem(M, F, jacF, num_components; evaluation=AllocatingEvaluation(), jacobian_tangent_basis=DefaultOrthonormalBasis())

See also

LevenbergMarquardt, LevenbergMarquardtState

source

There is also a second variant, if just one function is responsible for computing the cost and the gradient

Manopt.ManifoldCostGradientObjectiveType
ManifoldCostGradientObjective{T} <: AbstractManifoldObjective{T}

specify an objective containing one function to perform a combined computation of cost and its gradient

Fields

  • costgrad!!: a function that computes both the cost $f: \mathcal M → ℝ$ and its gradient $\operatorname{grad}f: \mathcal M → \mathcal T\mathcal M$

Depending on the AbstractEvaluationType T the gradient can have two forms

Constructors

ManifoldCostGradientObjective(costgrad; evaluation=AllocatingEvaluation())

Used with

gradient_descent, conjugate_gradient_descent, quasi_Newton

source
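A sketch of the combined form, returning the tuple (cost, gradient), assuming Manopt.jl and Manifolds.jl are loaded:

```julia
using Manopt, Manifolds

M = Euclidean(2)
# one function computing cost and gradient together, here for f(p) = ‖p‖²
costgrad(M, p) = (sum(p .^ 2), 2 .* p)

obj = ManifoldCostGradientObjective(costgrad)
get_cost(M, obj, [1.0, 2.0])      # 5.0
get_gradient(M, obj, [1.0, 2.0])  # [2.0, 4.0]
```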

Access functions

Manopt.get_gradientFunction
get_gradient(s::AbstractManoptSolverState)

return the (last stored) gradient within an AbstractManoptSolverState. By default this also undecorates the state beforehand.

source
get_gradient(amp::AbstractManoptProblem, p)
get_gradient!(amp::AbstractManoptProblem, X, p)

evaluate the gradient of an AbstractManoptProblem amp at the point p.

The evaluation is done in place of X for the !-variant.

source
get_gradient(M::AbstractManifold, mgo::AbstractManifoldGradientObjective{T}, p)
get_gradient!(M::AbstractManifold, X, mgo::AbstractManifoldGradientObjective{T}, p)

evaluate the gradient of an AbstractManifoldGradientObjective{T} mgo at p.

The evaluation is done in place of X for the !-variant. The T=AllocatingEvaluation problem might still allocate memory within. When the non-mutating variant is called with a T=InplaceEvaluation memory for the result is allocated.

Note that the order of parameters follows the philosophy of Manifolds.jl, namely that even for the mutating variant, the manifold is the first parameter and the (in-place) tangent vector X comes second.

source
get_gradient(agst::AbstractGradientSolverState)

return the gradient stored within gradient options. The default returns agst.X.

source
get_gradient(M::AbstractManifold, vgf::VectorGradientFunction, p, i)
get_gradient(M::AbstractManifold, vgf::VectorGradientFunction, p, i, range)
get_gradient!(M::AbstractManifold, X, vgf::VectorGradientFunction, p, i)
get_gradient!(M::AbstractManifold, X, vgf::VectorGradientFunction, p, i, range)

Evaluate the gradients of the vector function vgf on the manifold M at p and the values given in range, specifying the representation of the gradients.

Since i is assumed to be a linear index, you can provide

  • a single integer
  • a UnitRange to specify a range to be returned like 1:3
  • a BitVector specifying a selection
  • an AbstractVector{<:Integer} to specify indices
  • : to return the vector of all gradients
source
get_gradient(TpM, trmo::TrustRegionModelObjective, X)

Evaluate the gradient of the TrustRegionModelObjective

\[\operatorname{grad} m(X) = \operatorname{grad} f(p) + \operatorname{Hess} f(p)[X].\]

source
get_gradient(TpM, trmo::AdaptiveRagularizationWithCubicsModelObjective, X)

Evaluate the gradient of the AdaptiveRagularizationWithCubicsModelObjective

\[\operatorname{grad} m(X) = \operatorname{grad} f(p) + \operatorname{Hess} f(p)[X] + σ\lVert X \rVert X,\]

at X, cf. Eq. (37) in [ABBC20].

source
get_gradient(TpM::TangentSpace, slso::SymmetricLinearSystemObjective, X)
get_gradient!(TpM::TangentSpace, Y, slso::SymmetricLinearSystemObjective, X)

evaluate the gradient of

\[f(X) = \frac{1}{2} \lVert \mathcal A[X] + b \rVert_{p}^2,\qquad X ∈ T_{p}\mathcal M,\]

which is $\operatorname{grad} f(X) = \mathcal A[X]+b$. This can be computed in-place of Y.

source
get_gradient(M::AbstractManifold, sgo::ManifoldStochasticGradientObjective, p, k)
get_gradient!(M::AbstractManifold, sgo::ManifoldStochasticGradientObjective, Y, p, k)

Evaluate one of the summand gradients $\operatorname{grad}f_k$, $k∈\{1,…,n\}$, at p (in place of Y).

If you use a single function for the stochastic gradient, that works in-place, then get_gradient is not available, since the length (or number of elements of the gradient required for allocation) can not be determined.

source
get_gradient(M::AbstractManifold, sgo::ManifoldStochasticGradientObjective, p)
get_gradient!(M::AbstractManifold, sgo::ManifoldStochasticGradientObjective, X, p)

Evaluate the complete gradient $\operatorname{grad} f = \displaystyle\sum_{i=1}^n \operatorname{grad} f_i(p)$ at p (in place of X).

If you use a single function for the stochastic gradient, that works in-place, then get_gradient is not available, since the length (or number of elements of the gradient required for allocation) can not be determined.

source
get_gradient(M::AbstractManifold, emo::EmbeddedManifoldObjective, p)
get_gradient!(M::AbstractManifold, X, emo::EmbeddedManifoldObjective, p)

Evaluate the gradient function of an objective defined in the embedding, that is embed p before calling the gradient function stored in the EmbeddedManifoldObjective.

The returned gradient is then converted to a Riemannian gradient calling riemannian_gradient.

source
Manopt.get_gradientsFunction
get_gradients(M::AbstractManifold, sgo::ManifoldStochasticGradientObjective, p)
get_gradients!(M::AbstractManifold, X, sgo::ManifoldStochasticGradientObjective, p)

Evaluate all summand gradients $\{\operatorname{grad}f_i\}_{i=1}^n$ at p (in place of X).

If you use a single function for the stochastic gradient, that works in-place, then get_gradient is not available, since the length (or number of elements of the gradient) can not be determined.

source

and internally

Manopt.get_gradient_functionFunction
get_gradient_function(amgo::AbstractManifoldGradientObjective, recursive=false)

return the function to evaluate (just) the gradient $\operatorname{grad} f(p)$, where either the gradient function using the decorator or without the decorator is used.

By default recursive is set to false, since when just passing the gradient function somewhere, one usually still wants, for example, the cached one or the one that counts calls.

Depending on the AbstractEvaluationType E this is a function (M, p) -> X for an AllocatingEvaluation or (M, X, p) -> X for an InplaceEvaluation.

source

Internal helpers

Manopt.get_gradient_from_Jacobian!Function
get_gradient_from_Jacobian!(
    M::AbstractManifold,
    X,
    nlso::NonlinearLeastSquaresObjective{InplaceEvaluation},
    p,
    Jval=zeros(nlso.num_components, manifold_dimension(M)),
)

Compute the gradient of the NonlinearLeastSquaresObjective nlso at point p in place of X, with the temporary Jacobian stored in the optional argument Jval.

source

Subgradient objective

Manopt.ManifoldSubgradientObjectiveType
ManifoldSubgradientObjective{T<:AbstractEvaluationType,C,S} <: AbstractManifoldCostObjective{T, C}

A structure to store information about an objective for a subgradient based optimization problem

Fields

  • cost: the function $f$ to be minimized
  • subgradient: a function returning a subgradient $∂f$ of $f$

Constructor

ManifoldSubgradientObjective(f, ∂f)

Generate the ManifoldSubgradientObjective for a subgradient objective, consisting of a (cost) function f(M, p) and a function ∂f(M, p) that returns a not necessarily deterministic element from the subdifferential at p on a manifold M.

source
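A sketch for the nonsmooth cost f(p) = |p₁|, assuming Manopt.jl and Manifolds.jl are loaded, where ∂f returns one subdifferential element:

```julia
using Manopt, Manifolds

M = Euclidean(1)
f(M, p) = abs(p[1])
# one element of the subdifferential; at 0 any value in [-1, 1] would be valid
∂f(M, p) = p[1] == 0 ? [0.0] : [sign(p[1])]

sgo = ManifoldSubgradientObjective(f, ∂f)
get_subgradient(M, sgo, [-2.0])  # [-1.0]
```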

Access functions

Manopt.get_subgradientFunction
X = get_subgradient(M::AbstractManifold, sgo::AbstractManifoldGradientObjective, p)
get_subgradient!(M::AbstractManifold, X, sgo::AbstractManifoldGradientObjective, p)

Evaluate the subgradient, which for the case of an objective having a gradient means evaluating the gradient itself.

While in general, the result might not be deterministic, for this case it is.

source
get_subgradient(amp::AbstractManoptProblem, p)
get_subgradient!(amp::AbstractManoptProblem, X, p)

evaluate the subgradient of an AbstractManoptProblem amp at point p.

The evaluation is done in place of X for the !-variant. The result might not be deterministic, one element of the subdifferential is returned.

source
X = get_subgradient(M::AbstractManifold, sgo::ManifoldSubgradientObjective, p)
get_subgradient!(M::AbstractManifold, X, sgo::ManifoldSubgradientObjective, p)

Evaluate the (sub)gradient of a ManifoldSubgradientObjective sgo at the point p.

The evaluation is done in place of X for the !-variant. The result might not be deterministic, one element of the subdifferential is returned.

source

Proximal map objective

Manopt.ManifoldProximalMapObjectiveType
ManifoldProximalMapObjective{E<:AbstractEvaluationType, TC, TP, V <: Vector{<:Integer}} <: AbstractManifoldCostObjective{E, TC}

specify a problem for solvers based on the evaluation of proximal maps, which represents proximal maps $\operatorname{prox}_{λf_i}$ for summands $f = f_1 + f_2+ … + f_N$ of the cost function $f$.

Fields

  • cost: a function $f:\mathcal M→ℝ$ to minimize
  • proxes: proximal maps $\operatorname{prox}_{λf_i}:\mathcal M → \mathcal M$ as functions (M, λ, p) -> q or in-place (M, q, λ, p).
  • number_of_proxes: number of proximal maps per function; if one of the maps is a combined one, such that its proximal map function returns more than one entry per function, adapt this value. If not specified, it is set to one prox per function.

Constructor

ManifoldProximalMapObjective(f, proxes_f::Union{Tuple,AbstractVector}, number_of_proxes=ones(length(proxes));
    evaluation=AllocatingEvaluation())

Generate a proximal problem with a tuple or vector of functions, where by default every function computes a single prox of one component of $f$.

ManifoldProximalMapObjective(f, prox_f; evaluation=AllocatingEvaluation())

Generate a proximal objective for $f$ and its proximal map $\operatorname{prox}_{λf}$.

See also

cyclic_proximal_point, get_cost, get_proximal_map

source
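A sketch with two component proxes (soft thresholding per coordinate), assuming Manopt.jl and Manifolds.jl are loaded; the splitting is illustrative:

```julia
using Manopt, Manifolds

M = Euclidean(2)
f(M, p) = sum(abs.(p))  # f = f_1 + f_2 with f_i(p) = |p[i]|
# prox of λ|⋅| in the first and second coordinate, respectively
prox1(M, λ, p) = [sign(p[1]) * max(abs(p[1]) - λ, 0), p[2]]
prox2(M, λ, p) = [p[1], sign(p[2]) * max(abs(p[2]) - λ, 0)]

mpo = ManifoldProximalMapObjective(f, (prox1, prox2))
get_proximal_map(M, mpo, 0.5, [1.0, -2.0], 1)  # [0.5, -2.0]
```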

Access functions

Manopt.get_proximal_mapFunction
q = get_proximal_map(M::AbstractManifold, mpo::ManifoldProximalMapObjective, λ, p)
get_proximal_map!(M::AbstractManifold, q, mpo::ManifoldProximalMapObjective, λ, p)
q = get_proximal_map(M::AbstractManifold, mpo::ManifoldProximalMapObjective, λ, p, i)
get_proximal_map!(M::AbstractManifold, q, mpo::ManifoldProximalMapObjective, λ, p, i)

evaluate the (ith) proximal map of the ManifoldProximalMapObjective mpo at the point p on M with parameter $λ>0$.

source

Hessian objective

Manopt.ManifoldHessianObjectiveType
ManifoldHessianObjective{T<:AbstractEvaluationType,C,G,H,Pre} <: AbstractManifoldHessianObjective{T,C,G,H}

specify a problem for Hessian based algorithms.

Fields

  • cost: a function $f:\mathcal M→ℝ$ to minimize
  • gradient: the gradient $\operatorname{grad}f:\mathcal M → \mathcal T\mathcal M$ of the cost function $f$
  • hessian: the Hessian $\operatorname{Hess}f(x)[⋅]: \mathcal T_{x} \mathcal M → \mathcal T_{x} \mathcal M$ of the cost function $f$
  • preconditioner: the symmetric, positive definite preconditioner as an approximation of the inverse of the Hessian of $f$, a map with the same input variables as the hessian to numerically stabilize iterations when the Hessian is ill-conditioned

Depending on the AbstractEvaluationType T the gradient and Hessian can have two forms

Constructor

ManifoldHessianObjective(f, grad_f, Hess_f, preconditioner = (M, p, X) -> X;
    evaluation=AllocatingEvaluation())

See also

truncated_conjugate_gradient_descent, trust_regions

source
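A sketch for f(p) = ½‖p‖², whose Hessian is the identity, assuming Manopt.jl and Manifolds.jl are loaded:

```julia
using Manopt, Manifolds

M = Euclidean(2)
f(M, p) = sum(p .^ 2) / 2
grad_f(M, p) = copy(p)
Hess_f(M, p, X) = copy(X)  # Hess f(p)[X] = X for this cost

mho = ManifoldHessianObjective(f, grad_f, Hess_f)
get_hessian(M, mho, [1.0, 2.0], [0.5, 0.5])  # [0.5, 0.5]
```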

Access functions

Manopt.get_hessianFunction
Y = get_hessian(amp::AbstractManoptProblem{T}, p, X)
get_hessian!(amp::AbstractManoptProblem{T}, Y, p, X)

evaluate the Hessian of an AbstractManoptProblem amp at p applied to a tangent vector X, computing $\operatorname{Hess}f(q)[X]$, which can also happen in-place of Y.

source
get_hessian(M::AbstractManifold, vgf::VectorHessianFunction, p, X, i)
+get_proximal_map!(M::AbstractManifold, q, mpo::ManifoldProximalMapObjective, λ, p, i)

evaluate the (ith) proximal map of ManifoldProximalMapObjective p at the point p of p.M with parameter $λ>0$.

source

Hessian objective

Manopt.ManifoldHessianObjectiveType
ManifoldHessianObjective{T<:AbstractEvaluationType,C,G,H,Pre} <: AbstractManifoldHessianObjective{T,C,G,H}

specify a problem for Hessian based algorithms.

Fields

  • cost: a function $f:\mathcal M→ℝ$ to minimize
  • gradient: the gradient $\operatorname{grad}f:\mathcal M → \mathcal T\mathcal M$ of the cost function $f$
  • hessian: the Hessian $\operatorname{Hess}f(x)[⋅]: \mathcal T_{x} \mathcal M → \mathcal T_{x} \mathcal M$ of the cost function $f$
  • preconditioner: the symmetric, positive definite preconditioner as an approximation of the inverse of the Hessian of $f$, a map with the same input variables as the hessian to numerically stabilize iterations when the Hessian is ill-conditioned

Depending on the AbstractEvaluationType T the gradient and can have to forms

Constructor

ManifoldHessianObjective(f, grad_f, Hess_f, preconditioner = (M, p, X) -> X;
+    evaluation=AllocatingEvaluation())

See also

truncated_conjugate_gradient_descent, trust_regions

source
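As a minimal sketch of setting up such an objective (assuming Manopt.jl and Manifolds.jl are available; the quadratic cost and all names below are illustrative, and calling get_hessian on the objective directly is one of several access patterns):

```julia
using Manopt, Manifolds

M = Euclidean(3)
f(M, p) = sum(p .^ 2) / 2           # cost
grad_f(M, p) = p                    # gradient of f
Hess_f(M, p, X) = X                 # Hessian of f applied to X (identity here)

mho = ManifoldHessianObjective(f, grad_f, Hess_f)

p = [1.0, 2.0, 3.0]
X = [0.5, 0.0, -0.5]
get_hessian(M, mho, p, X)           # returns X for this quadratic cost
```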

Access functions

Manopt.get_hessianFunction
Y = get_hessian(amp::AbstractManoptProblem{T}, p, X)
get_hessian!(amp::AbstractManoptProblem{T}, Y, p, X)

evaluate the Hessian of an AbstractManoptProblem amp at p applied to a tangent vector X, computing $\operatorname{Hess}f(q)[X]$, which can also happen in-place of Y.

source
get_hessian(M::AbstractManifold, vgf::VectorHessianFunction, p, X, i)
 get_hessian(M::AbstractManifold, vgf::VectorHessianFunction, p, X, i, range)
 get_hessian!(M::AbstractManifold, X, vgf::VectorHessianFunction, p, X, i)
get_hessian!(M::AbstractManifold, X, vgf::VectorHessianFunction, p, X, i, range)

Evaluate the Hessians of the vector function vgf on the manifold M at p in direction X at the values given in i, where range specifies the representation of the resulting tangent vectors.

Since i is assumed to be a linear index, you can provide

  • a single integer
  • a UnitRange to specify a range to be returned, like 1:3
  • a BitVector specifying a selection
  • an AbstractVector{<:Integer} to specify indices
  • : to return the vector of all Hessian evaluations
source
get_hessian(TpM, trmo::TrustRegionModelObjective, X)

Evaluate the Hessian of the TrustRegionModelObjective

\[\operatorname{Hess} m(X)[Y] = \operatorname{Hess} f(p)[Y].\]

source
get_hessian(TpM::TangentSpace, slso::SymmetricLinearSystemObjective, X, V)
get_hessian!(TpM::TangentSpace, W, slso::SymmetricLinearSystemObjective, X, V)

evaluate the Hessian of

\[f(X) = \frac{1}{2} \lVert \mathcal A[X] + b \rVert_{p}^2,\qquad X ∈ T_{p}\mathcal M,\]

which is $\operatorname{Hess} f(X)[V] = \mathcal A[V]$. This can be computed in-place of W.

source
get_hessian(M::AbstractManifold, emo::EmbeddedManifoldObjective, p, X)
get_hessian!(M::AbstractManifold, Y, emo::EmbeddedManifoldObjective, p, X)

Evaluate the Hessian of an objective defined in the embedding, that is embed p and X before calling the Hessian function stored in the EmbeddedManifoldObjective.

The returned Hessian is then converted to a Riemannian Hessian calling riemannian_Hessian.

source
Manopt.get_preconditionerFunction
get_preconditioner(amp::AbstractManoptProblem, p, X)

evaluate the symmetric, positive definite preconditioner (approximation of the inverse of the Hessian of the cost function f) of an AbstractManoptProblem amp's objective at the point p applied to a tangent vector X.

source
get_preconditioner(M::AbstractManifold, mho::ManifoldHessianObjective, p, X)

evaluate the symmetric, positive definite preconditioner (approximation of the inverse of the Hessian of the cost function F) of a ManifoldHessianObjective mho at the point p applied to a tangent vector X.

source
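For intuition, a plain-Julia sketch (all names illustrative): for the quadratic cost $f(p) = \frac{1}{2}p^{\mathrm{T}}Ap$ on a Euclidean space the Hessian is $\operatorname{Hess} f(p)[X] = AX$, so an exact preconditioner applies the inverse $A^{-1}$, following the same (M, p, X) signature convention:

```julia
A = [4.0 0.0; 0.0 1.0]              # symmetric positive definite
Hess_f(M, p, X) = A * X             # Hessian applied to a tangent vector X
precond(M, p, X) = A \ X            # preconditioner: (approximate) inverse Hessian

p = [1.0, 1.0]
X = [2.0, 3.0]
precond(nothing, p, Hess_f(nothing, p, X))  # recovers X: [2.0, 3.0]
```

In practice the preconditioner is only an approximation of the inverse Hessian; applying the exact inverse as above is the idealized case.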


and internally

Primal-dual based objectives

Manopt.AbstractPrimalDualManifoldObjectiveType
AbstractPrimalDualManifoldObjective{E<:AbstractEvaluationType,C,P} <: AbstractManifoldCostObjective{E,C}

A common abstract super type for objectives that consider primal-dual problems.

source
Manopt.PrimalDualManifoldObjectiveType
PrimalDualManifoldObjective{T<:AbstractEvaluationType} <: AbstractPrimalDualManifoldObjective{T}

Describes an objective for the linearized or exact Chambolle-Pock algorithm, cf. [BHS+21], [CP11].

Fields

All fields with !! can either be in-place or allocating functions, which should be set depending on the evaluation= keyword in the constructor and stored in T <: AbstractEvaluationType.

  • cost: $F + G(Λ(⋅))$ to evaluate interim cost function values
  • linearized_forward_operator!!: linearized operator for the forward operation in the algorithm $DΛ$
  • linearized_adjoint_operator!!: the adjoint differential $(DΛ)^* : \mathcal N → T\mathcal M$
  • prox_f!!: the proximal map belonging to $f$
  • prox_G_dual!!: the proximal map belonging to $g_n^*$
  • Λ!!: the forward operator (if given) $Λ: \mathcal M → \mathcal N$

Usually, either the linearized operator $DΛ$ or $Λ$ is required.

Constructor

PrimalDualManifoldObjective(cost, prox_f, prox_G_dual, adjoint_linearized_operator;
     linearized_forward_operator::Union{Function,Missing}=missing,
     Λ::Union{Function,Missing}=missing,
     evaluation::AbstractEvaluationType=AllocatingEvaluation()
)

The last optional argument can be used to provide the four or five functions as allocating or mutating (in-place) variants. Note that the first argument is always the manifold under consideration; for mutating functions, the mutated argument is the second.

source
Manopt.PrimalDualManifoldSemismoothNewtonObjectiveType
PrimalDualManifoldSemismoothNewtonObjective{E<:AbstractEvaluationType, TC, LO, ALO, PF, DPF, PG, DPG, L} <: AbstractPrimalDualManifoldObjective{E, TC, PF}

Describes a problem for the primal-dual Riemannian semismooth Newton algorithm [DL21].

Fields

  • cost: $F + G(Λ(⋅))$ to evaluate interim cost function values
  • linearized_operator: the linearization $DΛ(⋅)[⋅]$ of the operator $Λ(⋅)$.
  • linearized_adjoint_operator: the adjoint differential $(DΛ)^* : \mathcal N → T\mathcal M$
  • prox_F: the proximal map belonging to $F$
  • diff_prox_F: the (Clarke Generalized) differential of the proximal maps of $F$
  • prox_G_dual: the proximal map belonging to $G^\ast_n$
  • diff_prox_dual_G: the (Clarke Generalized) differential of the proximal maps of $G^\ast_n$
  • Λ: the exact forward operator. This operator is required if Λ(m)=n does not hold.

Constructor

PrimalDualManifoldSemismoothNewtonObjective(cost, prox_F, prox_G_dual, forward_operator, adjoint_linearized_operator, Λ)
source

Access functions

Manopt.adjoint_linearized_operatorFunction
X = adjoint_linearized_operator(N::AbstractManifold, apdmo::AbstractPrimalDualManifoldObjective, m, n, Y)
adjoint_linearized_operator!(N::AbstractManifold, X, apdmo::AbstractPrimalDualManifoldObjective, m, n, Y)

Evaluate the adjoint of the linearized forward operator of $(DΛ(m))^*[Y]$ stored within the AbstractPrimalDualManifoldObjective (in place of X). Since $Y∈T_n\mathcal N$, both $m$ and $n=Λ(m)$ are necessary arguments, mainly because the forward operator $Λ$ might be missing in p.

source
Manopt.forward_operatorFunction
q = forward_operator(M::AbstractManifold, N::AbstractManifold, apdmo::AbstractPrimalDualManifoldObjective, p)
forward_operator!(M::AbstractManifold, N::AbstractManifold, q, apdmo::AbstractPrimalDualManifoldObjective, p)

Evaluate the forward operator of $Λ(x)$ stored within the TwoManifoldProblem (in place of q).

source
Manopt.get_differential_dual_proxFunction
η = get_differential_dual_prox(N::AbstractManifold, pdsno::PrimalDualManifoldSemismoothNewtonObjective, n, τ, X, ξ)
get_differential_dual_prox!(N::AbstractManifold, pdsno::PrimalDualManifoldSemismoothNewtonObjective, η, n, τ, X, ξ)

Evaluate the differential proximal map of $G_n^*$ stored within PrimalDualManifoldSemismoothNewtonObjective

\[D\operatorname{prox}_{τG_n^*}(X)[ξ]\]

which can also be computed in place of η.

source
Manopt.get_differential_primal_proxFunction
y = get_differential_primal_prox(M::AbstractManifold, pdsno::PrimalDualManifoldSemismoothNewtonObjective, σ, x)
get_differential_primal_prox!(p::TwoManifoldProblem, y, σ, x)

Evaluate the differential proximal map of $F$ stored within AbstractPrimalDualManifoldObjective

\[D\operatorname{prox}_{σF}(x)[X]\]

which can also be computed in place of y.

source
Manopt.get_dual_proxFunction
Y = get_dual_prox(N::AbstractManifold, apdmo::AbstractPrimalDualManifoldObjective, n, τ, X)
get_dual_prox!(N::AbstractManifold, apdmo::AbstractPrimalDualManifoldObjective, Y, n, τ, X)

Evaluate the proximal map of $g_n^*$ stored within AbstractPrimalDualManifoldObjective

\[Y = \operatorname{prox}_{τG_n^*}(X)\]

which can also be computed in place of Y.

source
Manopt.get_primal_proxFunction
q = get_primal_prox(M::AbstractManifold, apdmo::AbstractPrimalDualManifoldObjective, σ, p)
get_primal_prox!(M::AbstractManifold, apdmo::AbstractPrimalDualManifoldObjective, q, σ, p)

Evaluate the proximal map of $F$ stored within AbstractPrimalDualManifoldObjective

\[\operatorname{prox}_{σF}(p)\]

which can also be computed in place of q.

source
Manopt.linearized_forward_operatorFunction
Y = linearized_forward_operator(M::AbstractManifold, N::AbstractManifold, apdmo::AbstractPrimalDualManifoldObjective, m, X, n)
linearized_forward_operator!(M::AbstractManifold, N::AbstractManifold, Y, apdmo::AbstractPrimalDualManifoldObjective, m, X, n)

Evaluate the linearized operator (differential) $DΛ(m)[X]$ stored within the AbstractPrimalDualManifoldObjective (in place of Y), where n = Λ(m).

source

Constrained objective

Manopt.ConstrainedManifoldObjectiveType
ConstrainedManifoldObjective{T<:AbstractEvaluationType, C<:ConstraintType} <: AbstractManifoldObjective{T}

Describes the constrained objective

\[\begin{aligned}
\operatorname*{arg\,min}_{p ∈\mathcal{M}} & f(p)\\
\text{subject to } &g_i(p)\leq0 \quad \text{ for all } i=1,…,m,\\
\quad &h_j(p)=0 \quad \text{ for all } j=1,…,n.
\end{aligned}\]

Generate the constrained objective based on all involved single functions f, grad_f, g, grad_g, h, grad_h, and optionally a Hessian for each of these. With equality_constraints and inequality_constraints you have to provide the dimension of the ranges of h and g, respectively. You can also provide a manifold M and a point p to use one evaluation of the constraints to automatically try to determine these sizes.

ConstrainedManifoldObjective(M::AbstractManifold, mho::AbstractManifoldObjective;
     equality_constraints = nothing,
     inequality_constraints = nothing
)


Generate the constrained objective either with explicit constraints $g$ and $h$, and their gradients, or in the form where these are already encapsulated in VectorGradientFunctions.

Both variants require that at least one of the constraints (and its gradient) is provided. If any of the three parts provides a Hessian, the corresponding object, that is a ManifoldHessianObjective for f or a VectorHessianFunction for g or h, respectively, is created.

source

It might be beneficial to use the adapted problem to specify different ranges for the gradients of the constraints

Manopt.ConstrainedManoptProblemType
ConstrainedProblem{
     TM <: AbstractManifold,
    O <: AbstractManifoldObjective,
     HR<:Union{AbstractPowerRepresentation,Nothing},
     gradient_inequality_range=range
     hessian_equality_range=range,
     hessian_inequality_range=range
)

Creates a constrained Manopt problem, specifying an AbstractPowerRepresentation for the gradient_equality_range and the gradient_inequality_range, respectively.

source

as well as the helper functions

Manopt.AbstractConstrainedFunctorType
AbstractConstrainedFunctor{T}

A common supertype for functors that model constraint functions.

This supertype provides access to the fields $λ$ and $μ$, the dual variables of the constraints, which are of type T.

source
Manopt.AbstractConstrainedSlackFunctorType
AbstractConstrainedSlackFunctor{T,R}

A common supertype for functors that model constraint functions with slack.

This supertype additionally provides access to the fields

  • μ::T the dual for the inequality constraints
  • s::T the slack parameter, and
  • β::R the barrier parameter

where μ and s share the type T.

source
Manopt.LagrangianCostType
LagrangianCost{CO,T} <: AbstractConstrainedFunctor{T}

Implement the Lagrangian of a ConstrainedManifoldObjective co.

\[\mathcal L(p; μ, λ) = f(p) + \sum_{i=1}^m μ_ig_i(p) + \sum_{j=1}^n λ_jh_j(p)\]

Fields

  • co::CO, μ::T, λ::T as mentioned, where T represents a vector type.

Constructor

LagrangianCost(co, μ, λ)

Create a functor for the Lagrangian with fixed dual variables.

Example

When you directly want to evaluate the Lagrangian $\mathcal L$ you can also call

LagrangianCost(co, μ, λ)(M,p)
source
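The Lagrangian formula above can also be sketched in plain Julia, without any Manopt types (all names below are illustrative):

```julia
f(p) = sum(p .^ 2)                 # cost
g = [p -> p[1] - 1.0]              # inequality constraints g_i(p) ≤ 0
h = [p -> p[2]]                    # equality constraints h_j(p) = 0
μ, λ = [0.5], [2.0]                # fixed dual variables

# L(p; μ, λ) = f(p) + Σ μ_i g_i(p) + Σ λ_j h_j(p)
L(p) = f(p) + sum(μ[i] * g[i](p) for i in eachindex(g)) +
              sum(λ[j] * h[j](p) for j in eachindex(h))

L([2.0, 3.0])                      # 13 + 0.5·1 + 2·3 = 19.5
```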
Manopt.LagrangianGradientType
LagrangianGradient{CO,T}

The gradient of the Lagrangian of a ConstrainedManifoldObjective co with respect to the variable $p$. The formula reads

\[\operatorname{grad}_p \mathcal L(p; μ, λ) = \operatorname{grad} f(p) + \sum_{i=1}^m μ_i \operatorname{grad} g_i(p) + \sum_{j=1}^n λ_j \operatorname{grad} h_j(p)\]

Fields

  • co::CO, μ::T, λ::T as mentioned, where T represents a vector type.

Constructor

LagrangianGradient(co, μ, λ)

Create a functor for the Lagrangian with fixed dual variables.

Example

When you directly want to evaluate the gradient of the Lagrangian $\operatorname{grad}_p \mathcal L$ you can also call LagrangianGradient(co, μ, λ)(M,p) or LagrangianGradient(co, μ, λ)(M,X,p) for the in-place variant.

source
Manopt.LagrangianHessianType
LagrangianHessian{CO, V, T}

The Hessian of the Lagrangian of a ConstrainedManifoldObjective co with respect to the variable $p$. The formula reads

\[\operatorname{Hess}_p \mathcal L(p; μ, λ)[X] = \operatorname{Hess} f(p)[X] + \sum_{i=1}^m μ_i \operatorname{Hess} g_i(p)[X] + \sum_{j=1}^n λ_j \operatorname{Hess} h_j(p)[X]\]

Fields

  • co::CO, μ::T, λ::T as mentioned, where T represents a vector type.

Constructor

LagrangianHessian(co, μ, λ)

Create a functor for the Lagrangian with fixed dual variables.

Example

When you directly want to evaluate the Hessian of the Lagrangian $\operatorname{Hess}_p \mathcal L$ you can also call LagrangianHessian(co, μ, λ)(M, p, X) or LagrangianHessian(co, μ, λ)(M, Y, p, X) for the in-place variant.

source

Access functions

Manopt.get_equality_constraintFunction
get_equality_constraint(amp::AbstractManoptProblem, p, j=:)
get_equality_constraint(M::AbstractManifold, objective, p, j=:)

Evaluate equality constraints of a ConstrainedManifoldObjective objective at point p and indices j (by default : which corresponds to all indices).

source
Manopt.get_inequality_constraintFunction
get_inequality_constraint(amp::AbstractManoptProblem, p, j=:)
get_inequality_constraint(M::AbstractManifold, co::ConstrainedManifoldObjective, p, j=:, range=NestedPowerRepresentation())

Evaluate inequality constraints of a ConstrainedManifoldObjective objective at point p and indices j (by default : which corresponds to all indices).

source

Manopt.get_grad_equality_constraintFunction
get_grad_equality_constraint(amp::AbstractManoptProblem, p, j)
 get_grad_equality_constraint(M::AbstractManifold, co::ConstrainedManifoldObjective, p, j, range=NestedPowerRepresentation())
 get_grad_equality_constraint!(amp::AbstractManoptProblem, X, p, j)
get_grad_equality_constraint!(M::AbstractManifold, X, co::ConstrainedManifoldObjective, p, j, range=NestedPowerRepresentation())

Evaluate the gradient or gradients of the equality constraint $(\operatorname{grad} h(p))_j$ or $\operatorname{grad} h_j(p)$.

See also the ConstrainedManoptProblem to specify the range of the gradient.

source
Manopt.get_grad_inequality_constraintFunction
get_grad_inequality_constraint(amp::AbstractManoptProblem, p, j=:)
 get_grad_inequality_constraint(M::AbstractManifold, co::ConstrainedManifoldObjective, p, j=:, range=NestedPowerRepresentation())
 get_grad_inequality_constraint!(amp::AbstractManoptProblem, X, p, j=:)
get_grad_inequality_constraint!(M::AbstractManifold, X, co::ConstrainedManifoldObjective, p, j=:, range=NestedPowerRepresentation())

Evaluate the gradient or gradients of the inequality constraint $(\operatorname{grad} g(p))_j$ or $\operatorname{grad} g_j(p)$.

See also the ConstrainedManoptProblem to specify the range of the gradient.

source
Manopt.get_hess_equality_constraintFunction
get_hess_equality_constraint(amp::AbstractManoptProblem, p, j=:)
 get_hess_equality_constraint(M::AbstractManifold, co::ConstrainedManifoldObjective, p, j, range=NestedPowerRepresentation())
 get_hess_equality_constraint!(amp::AbstractManoptProblem, X, p, j=:)
get_hess_equality_constraint!(M::AbstractManifold, X, co::ConstrainedManifoldObjective, p, j, range=NestedPowerRepresentation())

Evaluate the Hessian or Hessians of the equality constraint $(\operatorname{Hess} h(p))_j$ or $\operatorname{Hess} h_j(p)$.

See also the ConstrainedManoptProblem to specify the range of the Hessian.

source
Manopt.get_hess_inequality_constraintFunction
get_hess_inequality_constraint(amp::AbstractManoptProblem, p, X, j=:)
 get_hess_inequality_constraint(M::AbstractManifold, co::ConstrainedManifoldObjective, p, j=:, range=NestedPowerRepresentation())
 get_hess_inequality_constraint!(amp::AbstractManoptProblem, Y, p, j=:)
get_hess_inequality_constraint!(M::AbstractManifold, Y, co::ConstrainedManifoldObjective, p, X, j=:, range=NestedPowerRepresentation())

Evaluate the Hessian or Hessians of the inequality constraint $(\operatorname{Hess} g(p)[X])_j$ or $\operatorname{Hess} g_j(p)[X]$.

See also the ConstrainedManoptProblem to specify the range of the Hessian.

source
Manopt.is_feasibleFunction
is_feasible(M::AbstractManifold, cmo::ConstrainedManifoldObjective, p; kwargs...)

Evaluate whether a point p on M is feasible with respect to the ConstrainedManifoldObjective cmo. That is, for the provided inequality constraints $g: \mathcal M → ℝ^m$ and equality constraints $h: \mathcal M → ℝ^n$ from within cmo, the point $p ∈ \mathcal M$ is feasible if

\[g_i(p) ≤ 0, \text{ for all } i=1,…,m\quad\text{ and }\quad h_j(p) = 0, \text{ for all } j=1,…,n.\]

Keyword arguments

  • check_point::Bool=true: whether to also verify that $p∈\mathcal M$ holds, using is_point
  • error::Symbol=:none: if the point is not feasible, this symbol determines how to report the error.
    • :error: throws an error
    • :info: displays the error message as an @info
    • :none: (default) the function just returns true/false
    • :warn: displays the error message as a @warning.

The keyword error= and all other kwargs... are passed on to is_point if the point is verified (see check_point).

source
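The feasibility condition above can be mirrored in a few lines of plain Julia, without Manopt types (all names illustrative):

```julia
g = [p -> p[1] - 1.0]              # inequality constraints, g_i(p) ≤ 0
h = [p -> p[2] - p[3]]             # equality constraints, h_j(p) = 0

# p is feasible iff all inequality constraints are nonpositive
# and all equality constraints vanish
feasible(p) = all(gi(p) <= 0 for gi in g) && all(iszero(hj(p)) for hj in h)

feasible([0.5, 1.0, 1.0])          # true
feasible([2.0, 1.0, 1.0])          # false, since g_1(p) = 1 > 0
```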


Internal functions

Manopt.get_feasibility_statusFunction
get_feasibility_status(
     M::AbstractManifold,
     cmo::ConstrainedManifoldObjective,
     g = get_inequality_constraints(M, cmo, p),
     h = get_equality_constraints(M, cmo, p),
)

Generate a message about the feasibility of p with respect to the ConstrainedManifoldObjective. You can also provide the evaluated vectors for the values of g and h as keyword arguments, in case you had them evaluated before.

source


Vectorial objectives

Manopt.AbstractVectorFunctionType
AbstractVectorFunction{E, FT} <: Function

Represent an abstract vectorial function $f:\mathcal M → ℝ^n$ with an AbstractEvaluationType E and an AbstractVectorialType to specify the format $f$ is implemented as.

Representations of $f$

There are three different representations of $f$, which might be beneficial in one or the other situation:

For the ComponentVectorialType imagine that $f$ could also be written using its component functions,

\[f(p) = \bigl( f_1(p), f_2(p), \ldots, f_n(p) \bigr)^{\mathrm{T}}\]

In this representation f is given as a vector [f1(M,p), f2(M,p), ..., fn(M,p)] of its component functions. An advantage is that the single components can be evaluated individually, and from this representation one can even directly read off the number n. A disadvantage might be that one has to implement a lot of individual (component) functions.

For the FunctionVectorialType, $f$ is implemented as a single function f(M, p) that returns an AbstractArray. An advantage here is that this is a single function. A disadvantage might be that, even when only a single component is needed, all of $f$ has to be evaluated.

For the ComponentVectorialType of f, each of the component functions is a (classical) objective.

source
Manopt.VectorGradientFunctionType
VectorGradientFunction{E, FT, JT, F, J, I} <: AbstractVectorGradientFunction{E, FT, JT}

Represent a function $f:\mathcal M → ℝ^n$ including its first derivative, either as a vector of gradients or as a Jacobian.

Each component $f_i$ hence has a gradient $\operatorname{grad} f_i(p) ∈ T_p\mathcal M$. Putting these gradients into a vector the same way as the functions yields the ComponentVectorialType

\[\operatorname{grad} f(p) = \Bigl( \operatorname{grad} f_1(p), \operatorname{grad} f_2(p), …, \operatorname{grad} f_n(p) \Bigr)^{\mathrm{T}} ∈ (T_p\mathcal M)^n\]

An advantage here is that, again, the single components can be evaluated individually.

Fields

  • value!!: the cost function $f$, which can take different formats
  • cost_type: indicating / storing data for the type of f
  • jacobian!!: the Jacobian of $f$
  • jacobian_type: indicating / storing data for the type of $J_f$
  • parameters: the number n, that is the size of the vector $f$ returns.

Constructor

VectorGradientFunction(f, Jf, range_dimension;
     evaluation::AbstractEvaluationType=AllocatingEvaluation(),
     function_type::AbstractVectorialType=FunctionVectorialType(),
     jacobian_type::AbstractVectorialType=FunctionVectorialType(),
)

Create a VectorGradientFunction of f and its Jacobian (vector of gradients) Jf, where f maps into the Euclidean space of dimension range_dimension. Their types are specified by the function_type, and jacobian_type, respectively. The Jacobian can further be given as an allocating variant or an in-place variant, specified by the evaluation= keyword.

source
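To illustrate the constructor and the two main representations, here is a sketch (the use of Sphere(2) and the particular component functions are assumptions for illustration):

```julia
using Manopt, Manifolds

M = Sphere(2)
p = [0.0, 0.0, 1.0]

# FunctionVectorialType: one function for all values, one for all gradients
F(M, q) = [q[1], q[2]]
JF(M, q) = [project(M, q, [1.0, 0.0, 0.0]), project(M, q, [0.0, 1.0, 0.0])]
vgf_f = VectorGradientFunction(F, JF, 2)  # FunctionVectorialType is the default

# ComponentVectorialType: one function (and one gradient) per component
Fs = [(M, q) -> q[1], (M, q) -> q[2]]
grad_Fs = [
    (M, q) -> project(M, q, [1.0, 0.0, 0.0]),
    (M, q) -> project(M, q, [0.0, 1.0, 0.0]),
]
vgf_c = VectorGradientFunction(Fs, grad_Fs, 2;
    function_type=ComponentVectorialType(),
    jacobian_type=ComponentVectorialType(),
)

get_value(M, vgf_f, p) == get_value(M, vgf_c, p)  # both represent the same f
```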
Manopt.VectorHessianFunctionType
VectorHessianFunction{E, FT, JT, HT, F, J, H, I} <: AbstractVectorGradientFunction{E, FT, JT}

Represent a function $f:\mathcal M → ℝ^n$ including its first derivative, either as a vector of gradients or as a Jacobian, and its Hessian, as a vector of Hessians of the component functions.

Both the Jacobian and the Hessian can map into either a sequence of tangent spaces or a single tangent space of the power manifold of length n.

Fields

  • value!!: the cost function $f$, which can take different formats
  • cost_type: indicating / storing data for the type of f
  • jacobian!!: the Jacobian of $f$
  • jacobian_type: indicating / storing data for the type of $J_f$
  • hessians!!: the Hessians of $f$ (in a component-wise sense)
  • hessian_type: indicating / storing data for the type of $H_f$
  • parameters: the number n, that is the size of the vector $f$ returns.

Constructor

VectorHessianFunction(f, Jf, Hess_f, range_dimension;
     evaluation::AbstractEvaluationType=AllocatingEvaluation(),
     function_type::AbstractVectorialType=FunctionVectorialType(),
     jacobian_type::AbstractVectorialType=FunctionVectorialType(),
     hessian_type::AbstractVectorialType=FunctionVectorialType(),
)

Create a VectorHessianFunction of f, its Jacobian (vector of gradients) Jf, and the (vector of) Hessians, where f maps into the Euclidean space of dimension range_dimension. Their types are specified by function_type, jacobian_type, and hessian_type, respectively. The Jacobian and Hessian can further be given as an allocating variant or an in-place variant, specified by the evaluation= keyword.

source
Manopt.AbstractVectorialTypeType
AbstractVectorialType

An abstract type for different representations of a vectorial function $f: \mathcal M → \mathbb R^m$ and its (component-wise) gradient/Jacobian

source
Manopt.CoordinateVectorialTypeType
CoordinateVectorialType{B<:AbstractBasis} <: AbstractVectorialType

A type to indicate that the gradient of the constraints is implemented as a Jacobian matrix with respect to a certain basis, that is, if the constraints are given as $g: \mathcal M → ℝ^m$ with respect to a basis $\mathcal B$ of $T_p\mathcal M$ at $p ∈ \mathcal M$, this can be written as $J_g(p) = (c_1^{\mathrm{T}},…,c_m^{\mathrm{T}})^{\mathrm{T}} \in ℝ^{m,d}$, that is, every row $c_i$ of this matrix is a set of coefficients such that get_coefficients(M, p, c, B) is the tangent vector $\operatorname{grad} g_i(p)$.


Fields

source
Manopt.ComponentVectorialTypeType
ComponentVectorialType <: AbstractVectorialType

A type to indicate that constraints are implemented as component functions, for example $g_i(p) ∈ ℝ^m$ or $\operatorname{grad} g_i(p) ∈ T_p\mathcal M$, $i=1,…,m$.

source
Manopt.FunctionVectorialTypeType
FunctionVectorialType <: AbstractVectorialType

A type to indicate that constraints are implemented as one whole function, for example $g(p) ∈ ℝ^m$ or $\operatorname{grad} g(p) ∈ (T_p\mathcal M)^m$.

source

Access functions

Manopt.get_valueFunction
get_value(M::AbstractManifold, vgf::AbstractVectorFunction, p[, i=:])

Evaluate the vector function VectorGradientFunction vgf at p. The range can be used to specify a potential range, but is currently only present for consistency.

The index i can be a linear index; you can provide

  • a single integer
  • a UnitRange to specify a range to be returned, like 1:3
  • a BitVector specifying a selection
  • an AbstractVector{<:Integer} to specify indices
  • : to return the vector of all values, which is also the default
source
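A short indexing sketch (the quadratic component functions and the use of Euclidean(3) are assumptions for illustration):

```julia
using Manopt, Manifolds

M = Euclidean(3)
f(M, q) = [q[1]^2, q[2]^2, q[3]^2]
# gradients of the components, stacked as a vector of tangent vectors
Jf(M, q) = [[2q[1], 0.0, 0.0], [0.0, 2q[2], 0.0], [0.0, 0.0, 2q[3]]]
vgf = VectorGradientFunction(f, Jf, 3)

q = [1.0, 2.0, 3.0]
get_value(M, vgf, q)        # the full vector
get_value(M, vgf, q, 2)     # a single component
get_value(M, vgf, q, 1:2)   # a range of components
```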
Manopt.get_value_functionFunction
get_value_function(vgf::VectorGradientFunction, recursive=false)

return the internally stored function computing get_value.

source
Base.lengthMethod
length(vgf::AbstractVectorFunction)

Return the length of the vector the function $f: \mathcal M → ℝ^n$ maps into, that is the number n.

source

Internal functions

Manopt._to_iterable_indicesFunction
_to_iterable_indices(A::AbstractVector, i)

Convert index i (integer, colon, vector of indices, etc.) for array A into an iterable structure of indices.

source

Subproblem objective

This objective can be used when the objective of a sub problem solver still needs access to the (outer/main) objective.

Manopt.AbstractManifoldSubObjectiveType
AbstractManifoldSubObjective{O<:AbstractManifoldObjective} <: AbstractManifoldObjective

An abstract type for objectives of sub problems within a solver, which still store the original objective internally to generate generic objectives for sub solvers.

source

Access functions

Manopt.get_objective_costFunction
get_objective_cost(M, amso::AbstractManifoldSubObjective, p)

Evaluate the cost of the (original) objective stored within the sub objective.

source
Manopt.get_objective_gradientFunction
X = get_objective_gradient(M, amso::AbstractManifoldSubObjective, p)
get_objective_gradient!(M, X, amso::AbstractManifoldSubObjective, p)

Evaluate the gradient of the (original) objective stored within the sub objective amso.

source
Manopt.get_objective_hessianFunction
Y = get_objective_Hessian(M, amso::AbstractManifoldSubObjective, p, X)
get_objective_Hessian!(M, Y, amso::AbstractManifoldSubObjective, p, X)

Evaluate the Hessian of the (original) objective stored within the sub objective amso.

source
Manopt.get_objective_preconditionerFunction
Y = get_objective_preconditioner(M, amso::AbstractManifoldSubObjective, p, X)
get_objective_preconditioner!(M, Y, amso::AbstractManifoldSubObjective, p, X)

Evaluate the preconditioner of the (original) objective stored within the sub objective amso.

source
Problem · Manopt.jl


A Manopt problem

A problem describes all static data of an optimisation task and has as a super type

Manopt.get_objectiveFunction
get_objective(o::AbstractManifoldObjective, recursive=true)

return the (one step) undecorated AbstractManifoldObjective of the (possibly) decorated o. As long as your decorated objective stores the objective within o.objective and dispatch_objective_decorator is set to Val{true}, the internal state is extracted automatically.

By default the objective that is stored within a decorated objective is assumed to be at o.objective. Overwrite _get_objective(o, ::Val{true}, recursive) to change this behaviour for your objective o, for both the recursive and the direct case.

If recursive is set to false, only the most outer decorator is taken away instead of all.

source
get_objective(mp::AbstractManoptProblem, recursive=false)

return the objective AbstractManifoldObjective stored within an AbstractManoptProblem. If recursive is set to true, it additionally unwraps all decorators of the objective

source
get_objective(amso::AbstractManifoldSubObjective)

Return the (original) objective the sub objective is built on.

source
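A sketch of unwrapping a decorated objective (the cost, and the use of ManifoldCountObjective as the example decorator with the constructor shown, are assumptions for illustration):

```julia
using Manopt, Manifolds

M = Euclidean(2)
f(M, q) = sum(abs2, q)
grad_f(M, q) = 2 .* q
obj = ManifoldGradientObjective(f, grad_f)

# decorate, for example with an evaluation counter
cobj = ManifoldCountObjective(M, obj, [:Cost])

get_objective(cobj, false)  # peels off exactly one decorator layer
get_objective(cobj)         # recursive=true (default): the undecorated objective
```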

Usually, such a problem is determined by the manifold or domain of the optimisation and the objective with all its properties used within an algorithm, see The Objective. For that one can just use

For constrained optimisation, there are different possibilities to represent the gradients of the constraints. This can be done with a

ConstraintProblem

The primal-dual-based solvers (Chambolle-Pock and the PD semi-smooth Newton) both need two manifolds as their domains, hence there also exists a

Manopt.TwoManifoldProblemType
TwoManifoldProblem{
     MT<:AbstractManifold,NT<:AbstractManifold,O<:AbstractManifoldObjective
} <: AbstractManoptProblem{MT}

An abstract type for primal-dual-based problems.

source

From the two ingredients here, you can find more information about


Recording values · Manopt.jl


Record values

To record values during the iterations of a solver run, there are in general two possibilities. On the one hand, the high-level interfaces provide a record= keyword, that accepts several different inputs. For more details see How to record.
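For the high-level interface, a sketch of the record= keyword on a solver call (the quadratic cost and the choice of gradient_descent are assumptions for illustration):

```julia
using Manopt, Manifolds

M = Euclidean(2)
f(M, q) = sum(abs2, q)
grad_f(M, q) = 2 .* q

# return_state=true returns the (record-decorated) solver state
s = gradient_descent(M, f, grad_f, [2.0, 1.0];
    record=[:Iteration, :Cost], return_state=true)
rec = get_record(s)  # one (iteration, cost) tuple per iteration
```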

Record Actions & the solver state decorator

Manopt.RecordActionType
RecordAction

A RecordAction is a small functor to record values. The usual call is given by

(amp::AbstractManoptProblem, ams::AbstractManoptSolverState, k) -> s

that performs the record for the current problem and solver combination, and where k is the current iteration.

By convention, k=0 is interpreted as "for initialization only": only initialize internal values, but do not trigger any record; the same holds when the record is called from within stop_solver!, which returns true afterwards.

Any negative value is interpreted as a “reset”, and should hence delete all stored recordings, for example when reusing a RecordAction. The start of a solver calls the :Iteration and :Stop dictionary entries with -1, to reset those recordings.

By default any RecordAction is assumed to record its values in a field recorded_values, a Vector of recorded values. See get_record(ra).

source
Manopt.RecordChangeType
RecordChange <: RecordAction

Record the amount of change of the iterate (see get_iterate(s) of the AbstractManoptSolverState) during the last iteration.

Fields

Constructor

RecordChange(M=DefaultManifold();
     inverse_retraction_method = default_inverse_retraction_method(M),
     storage                   = StoreStateAction(M; store_points=Tuple{:Iterate})
)

with the previous fields as keywords. For the DefaultManifold only the field storage is used. Providing the actual manifold moves the default storage to the efficient point storage.

source
Manopt.RecordCostType
RecordCost <: RecordAction

Record the current cost function value, see get_cost.

Fields

  • recorded_values : to store the recorded values

Constructor

RecordCost()
source
Manopt.RecordEntryType
RecordEntry{T} <: RecordAction

record a certain field's entry of type {T} during the iterations

Fields

Constructor

RecordEntry(::T, f::Symbol)
RecordEntry(T::DataType, f::Symbol)

Initialize the record action to record the state field f, and initialize the recorded_values to be a vector of element type T.

Examples

  • RecordEntry(rand(M), :q) to record the points from M stored in some states s.q
  • RecordEntry(SVDMPoint, :p) to record the field s.p which takes values of type SVDMPoint.
source
Manopt.RecordEntryChangeType
RecordEntryChange{T} <: RecordAction

record the change of a certain entry during the iterations

Additional fields

  • recorded_values : the recorded Iterates
  • field : Symbol the field can be accessed with within AbstractManoptSolverState
  • distance : function (p,o,x1,x2) to compute the change/distance between two values of the entry
  • storage : a StoreStateAction to store (at least) getproperty(o, d.field)

Constructor

RecordEntryChange(f::Symbol, d, a::StoreStateAction=StoreStateAction([f]))
source
Manopt.RecordEveryType
RecordEvery <: RecordAction

record only every $k$th iteration. Otherwise (optionally, but activated by default) just update internal tracking values.

This method does not perform any record itself but relies on its children's methods.

source
Manopt.RecordGroupType
RecordGroup <: RecordAction

group a set of RecordActions into one action, where the internal RecordActions act independently, but the results can be collected in a grouped fashion, one tuple per call of this group. The entries can later be addressed either by index or by semantic Symbols

Constructors

RecordGroup(g::Array{<:RecordAction, 1})

construct a group consisting of an Array of RecordActions g,

RecordGroup(g, symbols)

Examples

g1 = RecordGroup([RecordIteration(), RecordCost()])

A RecordGroup to record the current iteration and the cost. The cost can then be accessed using get_record(g1, 2) or g1[2].

g2 = RecordGroup([RecordIteration(), RecordCost()], Dict(:Cost => 2))

A RecordGroup to record the current iteration and the cost, where the cost can then be accessed using get_record(g2, :Cost) or g2[:Cost].

g3 = RecordGroup([RecordIteration(), RecordCost() => :Cost])

A RecordGroup identical to the previous constructor, just a little easier to use. To access all recordings of the second entry of this last g3 you can use either g3[2] or g3[:Cost]; the first entry can only be accessed by g3[1], since no symbol was given for it.

source
Manopt.RecordIterateType
RecordIterate <: RecordAction

record the iterate

Constructors

RecordIterate(x0)

initialize the iterate record array to the type of x0, which indicates the kind of iterate

RecordIterate(P)

initialize the iterate record array to the data type of P.

source
Manopt.RecordSolverStateType
RecordSolverState <: AbstractManoptSolverState

append to any AbstractManoptSolverState the decorator with record capability. Internally, a dictionary is kept that stores a RecordAction for several concurrent modes using a Symbol as reference. The default mode is :Iteration, which is used to store information that is recorded during the iterations. RecordActions might be added to :Start or :Stop to record values at the beginning or at the stopping time point, respectively.

The original options can still be accessed using the get_state function.

Fields

  • options: the options that are extended by record information
  • recordDictionary a Dict{Symbol,RecordAction} to keep track of all different recorded values

Constructors

RecordSolverState(o,dR)

construct record decorated AbstractManoptSolverState, where dR can be

  • a RecordAction, then it is stored within the dictionary at :Iteration
  • an Array of RecordActions, then it is stored as a recordDictionary.
  • a Dict{Symbol,RecordAction}.
source
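A sketch of decorating a state manually and running the solver on it (the quadratic objective and the keyword-based GradientDescentState constructor are assumptions for illustration):

```julia
using Manopt, Manifolds

M = Euclidean(2)
obj = ManifoldGradientObjective((M, q) -> sum(abs2, q), (M, q) -> 2 .* q)
dmp = DefaultManoptProblem(M, obj)
s = GradientDescentState(M; p=[1.0, 2.0])

rs = RecordSolverState(s, RecordCost())  # stored at :Iteration by default
solve!(dmp, rs)
costs = get_record(rs)  # the recorded cost values
```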
Manopt.RecordSubsolverType
RecordSubsolver <: RecordAction

Record the current subsolver's recording, by calling get_record on the sub state with the stored symbols.

Fields

  • records: an array to store the recorded values
  • symbols: arguments for get_record. Defaults to just one symbol :Iteration, but could be set to also record the :Stop action.

Constructor

RecordSubsolver(; record=[:Iteration,], record_type=eltype([]))
source
Manopt.RecordTimeType
RecordTime <: RecordAction

record the time elapsed during the current iteration.

The three possible modes are

  • :cumulative record times without resetting the timer
  • :iterative record times with resetting the timer
  • :total record a time only at the end of an algorithm (see stop_solver!)

The default is :cumulative, and any non-listed symbol defaults to using this mode.

Constructor

RecordTime(; mode::Symbol=:cumulative)
source
Manopt.RecordWhenActiveType
RecordWhenActive <: RecordAction

record action that only records if the active boolean is set to true. This can be set from outside and is for example triggered by RecordEvery on recordings of the subsolver. While this might not be strictly necessary for subsolvers, recording values that are never accessible is not that useful.

Fields

  • active: a boolean that can be (de-)activated from outside to turn recording on/off
  • always_update: whether or not to call the inner recordings with nonpositive iterates (init/reset)

Constructor

RecordWhenActive(r::RecordAction, active=true, always_update=true)
source

Access functions

Base.getindexMethod
getindex(r::RecordGroup, s::Symbol)
+)

with the previous fields as keywords. For the DefaultManifold only the field storage is used. Providing the actual manifold moves the default storage to the efficient point storage.

source
Manopt.RecordCostType
RecordCost <: RecordAction

Record the current cost function value, see get_cost.

Fields

  • recorded_values : to store the recorded values

Constructor

RecordCost()
source
Manopt.RecordEntryType
RecordEntry{T} <: RecordAction

record a certain fields entry of type {T} during the iterates

Fields

Constructor

RecordEntry(::T, f::Symbol)
+RecordEntry(T::DataType, f::Symbol)

Initialize the record action to record the state field f, and initialize the recorded_values to be a vector of element type T.

Examples

  • RecordEntry(rand(M), :q) to record the points from M stored in some states s.q
  • RecordEntry(SVDMPoint, :p) to record the field s.p which takes values of type SVDMPoint.
source
Manopt.RecordEntryChangeType
RecordEntryChange{T} <: RecordAction

record a certain entries change during iterates

Additional fields

  • recorded_values : the recorded Iterates
  • field : Symbol the field can be accessed with within AbstractManoptSolverState
  • distance : function (p,o,x1,x2) to compute the change/distance between two values of the entry
  • storage : a StoreStateAction to store (at least) getproperty(o, d.field)

Constructor

RecordEntryChange(f::Symbol, d, a::StoreStateAction=StoreStateAction([f]))
source
Manopt.RecordEveryType
RecordEvery <: RecordAction

record only every $k$th iteration. Otherwise (optionally, but activated by default) just update internal tracking values.

This method does not perform any record itself but relies on its children's methods.

source
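As a brief sketch, such an action can be constructed directly by wrapping another RecordAction; the two-argument constructor shown here is an assumption consistent with the RecordEvery(k) wrapping described for RecordGroupFactory:

```julia
using Manopt

# record the cost only every 10th iteration; on the other iterations the
# wrapped action still updates its internal tracking values
r = RecordEvery(RecordCost(), 10)
```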
Manopt.RecordGroupType
RecordGroup <: RecordAction

group a set of RecordActions into one action, where the internal RecordActions act independently, but the results can be collected in a grouped fashion, one tuple per call of this group. The entries can later be addressed either by index or by semantic Symbols.

Constructors

RecordGroup(g::Array{<:RecordAction, 1})

construct a group consisting of an Array of RecordActions g,

RecordGroup(g, symbols)

Examples

g1 = RecordGroup([RecordIteration(), RecordCost()])

A RecordGroup to record the current iteration and the cost. The cost can then be accessed using get_record(g1, 2) or g1[2].

g2 = RecordGroup([RecordIteration(), RecordCost()], Dict(:Cost => 2))

A RecordGroup to record the current iteration and the cost, which can then be accessed using get_record(g2, :Cost) or g2[:Cost].

g3 = RecordGroup([RecordIteration(), RecordCost() => :Cost])

A RecordGroup identical to the previous constructor, just a little easier to use. To access all recordings of the second entry of g3 you can use either g3[2] or g3[:Cost]; the first entry can only be accessed as g3[1], since no symbol was given for it.

source
Manopt.RecordIterateType
RecordIterate <: RecordAction

record the iterate

Constructors

RecordIterate(x0)

initialize the iterate record array to the type of x0, which indicates the kind of iterate

RecordIterate(P)

initialize the iterate record array to the data type P.

source
Manopt.RecordSolverStateType
RecordSolverState <: AbstractManoptSolverState

append to any AbstractManoptSolverState the decorator with record capability. Internally, a dictionary is kept that stores a RecordAction for several concurrent modes using a Symbol as reference. The default mode is :Iteration, which is used to store information recorded during the iterations. RecordActions might be added to :Start or :Stop to record values at the beginning or at the stopping time, respectively.

The original options can still be accessed using the get_state function.

Fields

  • options: the options that are extended by record information
  • recordDictionary: a Dict{Symbol,RecordAction} to keep track of all different recorded values

Constructors

RecordSolverState(o,dR)

construct record decorated AbstractManoptSolverState, where dR can be

  • a RecordAction, then it is stored within the dictionary at :Iteration
  • an Array of RecordActions, then it is stored in the recordDictionary at :Iteration.
  • a Dict{Symbol,RecordAction}.
source
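In practice this decorator is rarely constructed by hand; the high-level solver interface attaches it via the record= keyword. The following is a minimal usage sketch, assuming Manifolds.jl for the sphere and a Riemannian-mean style cost as in the Manopt tutorials (neither is defined on this page):

```julia
using Manopt, Manifolds, LinearAlgebra, Random

Random.seed!(42)
M = Sphere(2)
data = [normalize(randn(3)) for _ in 1:10]
f(M, p) = sum(distance(M, p, q)^2 for q in data) / (2 * length(data))
grad_f(M, p) = -sum(log(M, p, q) for q in data) / length(data)

# record= decorates the solver state with a RecordSolverState and
# return_state=true returns that decorated state instead of the minimizer
R = gradient_descent(M, f, grad_f, first(data);
    record=[:Iteration, :Cost], return_state=true)

get_record(R)  # a vector of (iteration, cost) tuples
```

Since return_state=true is set, the decorated state is returned and the recorded values can be retrieved afterwards with get_record.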
Manopt.RecordSubsolverType
RecordSubsolver <: RecordAction

Record the current subsolver's recording by calling get_record on the sub state with the given symbols.

Fields

  • records: an array to store the recorded values
  • symbols: arguments for get_record. Defaults to just one symbol :Iteration, but could be set to also record the :Stop action.

Constructor

RecordSubsolver(; record=[:Iteration,], record_type=eltype([]))
source
Manopt.RecordTimeType
RecordTime <: RecordAction

record the time elapsed during the current iteration.

The three possible modes are

  • :cumulative record times without resetting the timer
  • :iterative record times with resetting the timer
  • :total record a time only at the end of an algorithm (see stop_solver!)

The default is :cumulative, and any non-listed symbol defaults to this mode.

Constructor

RecordTime(; mode::Symbol=:cumulative)
source
Manopt.RecordWhenActiveType
RecordWhenActive <: RecordAction

record action that only records if the active boolean is set to true. This can be set from outside and is for example triggered by RecordEvery on recordings of the subsolver. While this might not be strictly necessary for subsolvers, recording values that are never accessible is not that useful.

Fields

  • active: a boolean that can be (de)activated from outside to turn recording on/off
  • always_update: whether to call the inner record actions even with nonpositive iteration numbers (init/reset)

Constructor

RecordWhenActive(r::RecordAction, active=true, always_update=true)
source

Access functions

Base.getindexMethod
getindex(r::RecordGroup, s::Symbol)
 r[s]
 getindex(r::RecordGroup, sT::NTuple{N,Symbol})
 r[sT]
 getindex(r::RecordGroup, i)
 r[i]

return an array of recorded values with respect to the symbol s, the symbols from the tuple sT, or the index i. See get_record for details.

source
Base.getindexMethod
get_index(rs::RecordSolverState, s::Symbol)
 ro[s]

Get the recorded values for recorded type s, see get_record for details.

get_index(rs::RecordSolverState, s::Symbol, i...)
ro[s, i...]

Access the recording type of type s and call its RecordAction with [i...].

source
Manopt.get_recordFunction
get_record(s::AbstractManoptSolverState, [symbol=:Iteration])
get_record(s::RecordSolverState, [symbol=:Iteration])

return the recorded values from within the RecordSolverState s that were recorded with respect to the Symbol symbol as an Array. The default refers to any recordings during an :Iteration.

When called with arbitrary AbstractManoptSolverState, this method looks for the RecordSolverState decorator and calls get_record on the decorator.

source
Manopt.get_recordMethod
get_record(r::RecordGroup)

return an array of tuples, where each tuple is a recorded set per iteration or record call.

get_record(r::RecordGroup, k::Int)

return an array of values corresponding to the kth entry in this record group

get_record(r::RecordGroup, s::Symbol)

return an array of recorded values with respect to the symbol s, see RecordGroup.

get_record(r::RecordGroup, s1::Symbol, s2::Symbol,...)

return an array of tuples, where each tuple is a recorded set corresponding to the symbols s1, s2,... per iteration / record call.

source

Internal factory functions

Manopt.RecordActionFactoryMethod
RecordActionFactory(s::AbstractManoptSolverState, a)

create a RecordAction where

  • a RecordAction is passed through
  • a Symbol creates
    • :Change to record the change of the iterates, see RecordChange
    • :Gradient to record the gradient, see RecordGradient
    • :GradientNorm to record the norm of the gradient, see RecordGradientNorm
    • :Iterate to record the iterate
    • :Iteration to record the current iteration number
    • :Cost to record the current cost function value
    • :Stepsize to record the current step size
    • :Time to record the total time taken after every iteration
    • :IterativeTime to record the time taken for each iteration, resetting the timer every iteration

and every other symbol is passed to RecordEntry, which results in recording the field of the state with the symbol indicating the field of the solver to record.

source
Manopt.RecordActionFactoryMethod
RecordActionFactory(s::AbstractManoptSolverState, t::Tuple{Symbol, T}) where {T}

create a RecordAction where

  • (:Subsolver, s) creates a RecordSubsolver with record= set to the second tuple entry

For any other symbol the second entry is ignored and the symbol is used to generate a RecordEntry recording the field with the name symbol of s.

source
Manopt.RecordFactoryMethod
RecordFactory(s::AbstractManoptSolverState, a)

Generate a dictionary of RecordActions.

First, all Symbols, Strings, RecordActions, and numbers are collected, excluding :Stop and :WhenActive. This collected vector is added to the :Iteration => [...] pair. :Stop is added as :StoppingCriterion to the :Stop => [...] pair. If either of these two pairs does not exist, it is created when the corresponding symbols are added.

For each Pair of a Symbol and a Vector, the RecordGroupFactory is called for the Vector and the result is added to the record dictionary's entry with said symbol. This is wrapped into a RecordWhenActive when the :WhenActive symbol is present.

Return value

A dictionary for the different entry points where recording can happen, each containing a RecordAction to call.

Note that upon initialisation all dictionary entries but the :StartAlgorithm one are called with i=0 for a reset.

source
Manopt.RecordGroupFactoryMethod
RecordGroupFactory(s::AbstractManoptSolverState, a)

Generate a RecordGroup of RecordActions. The following rules are used

  1. Any Symbol contained in a is passed to RecordActionFactory
  2. Any RecordAction is included as is.

Any Pair of a RecordAction and a Symbol, for example RecordCost() => :A, is handled such that the corresponding record action can later be accessed as g[:A], where g is the record group generated here.

If this results in more than one RecordAction, a RecordGroup of these is built.

If any integers are present, the last of these is used to wrap the group in a RecordEvery(k).

If :WhenActive is present, the resulting Action is wrapped in RecordWhenActive, making it deactivatable by its parent solver.

source
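The combination rules can be illustrated with the record= keyword of a solver call. This is a sketch under the same assumptions as the Manopt recording tutorial (Manifolds.jl sphere, Riemannian-mean style cost; neither is defined on this page):

```julia
using Manopt, Manifolds, LinearAlgebra, Random

Random.seed!(42)
M = Sphere(2)
data = [normalize(randn(3)) for _ in 1:10]
f(M, p) = sum(distance(M, p, q)^2 for q in data) / (2 * length(data))
grad_f(M, p) = -sum(log(M, p, q) for q in data) / length(data)

# :Iteration goes through RecordActionFactory, the Pair makes the cost
# accessible under :Cost, and the trailing 5 wraps the resulting
# RecordGroup in RecordEvery(5)
R = gradient_descent(M, f, grad_f, first(data);
    record=[:Iteration, RecordCost() => :Cost, 5], return_state=true)

get_record(R)  # one (iteration, cost) tuple per 5th iteration
```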
Manopt.set_parameter!Method
set_parameter!(ams::RecordSolverState, ::Val{:Record}, args...)

Set certain values specified by args... into the elements of the recordDictionary

source

Further specific RecordActions can be found when specific types of AbstractManoptSolverState define them on their corresponding site.

Technical details

Manopt.initialize_solver!Method
initialize_solver!(ams::AbstractManoptProblem, rss::RecordSolverState)

Extend the initialization of the solver by a hook to run records that were added to the :Start entry.

source
Manopt.step_solver!Method
step_solver!(amp::AbstractManoptProblem, rss::RecordSolverState, k)

Extend the kth step of the solver by a hook to run records that were added to the :Iteration entry.

source
Manopt.stop_solver!Method
stop_solver!(amp::AbstractManoptProblem, rss::RecordSolverState, k)

Extend the call to the stopping criterion by a hook to run records, that were added to the :Stop entry.

source

Solver state

Given an AbstractManoptProblem, that is a certain optimisation task, the state specifies the solver to use. It contains the parameters of a solver and all fields necessary during the algorithm, for example the current iterate, a StoppingCriterion or a Stepsize.

Manopt.AbstractManoptSolverStateType
AbstractManoptSolverState

A general super type for all solver states.

Fields

The following fields are assumed to be available by default. If you use different ones, adapt the access functions get_iterate and get_stopping_criterion accordingly

  • p::P: a point on the manifold $\mathcal M$ storing the current iterate
  • stop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled
source
Manopt.get_stateFunction
get_state(s::AbstractManoptSolverState, recursive::Bool=true)

return the (one step) undecorated AbstractManoptSolverState of the (possibly) decorated s. As long as your decorated state stores the state within s.state and dispatch_state_decorator is set to Val{true}, the internal state is extracted automatically.

By default the state that is stored within a decorated state is assumed to be at s.state. Overwrite _get_state(s, ::Val{true}, recursive) to change this behaviour for your states, for both the recursive and the direct case.

If recursive is set to false, only the most outer decorator is taken away instead of all.

source
Manopt.get_countFunction
get_count(ams::AbstractManoptSolverState, ::Symbol)

Obtain the count for a certain countable quantity, for example the :Iterations. This function returns 0 if there was nothing to count.

Available symbols from within the solver state

  • :Iterations is passed on to the stop field to obtain the iteration at which the solver stopped.
source
get_count(co::ManifoldCountObjective, s::Symbol, mode::Symbol=:None)

Get the number of counts for a certain symbol s.

Depending on the mode different results appear if the symbol does not exist in the dictionary

  • :None: (default) silent mode, returns -1 for non-existing entries
  • :warn: issues a warning if a field does not exist
  • :error: issues an error if a field does not exist
source

Since every subtype of an AbstractManoptSolverState directly relates to a solver, the concrete states are documented together with their corresponding solvers. This page documents the general features available for every state.

A first example is to obtain or set the current iterate. This might be useful to continue investigation at the current iterate, or to set up a solver for a next experiment, respectively.

Manopt.get_iterateFunction
get_iterate(O::AbstractManoptSolverState)

return the (last stored) iterate within an AbstractManoptSolverState. This should usually refer to a single point on the manifold the solver is working on.

By default this also removes all decorators of the state beforehand.

source
get_iterate(agst::AbstractGradientSolverState)

return the iterate stored within gradient options. The default returns agst.p.

source

An internal function working on the state and elements within a state is used to pass messages from (sub) activities of a state to the corresponding DebugMessages

Manopt.get_messageFunction
get_message(du::AbstractManoptSolverState)

get a message (String) from internal functors, in a summary. This should return any message a sub-step might have issued as well.

source

Furthermore, to access the stopping criterion use

Decorators for AbstractManoptSolverStates

A solver state can be decorated using the following trait and function to initialize

Manopt.dispatch_state_decoratorFunction
dispatch_state_decorator(s::AbstractManoptSolverState)

Indicate internally, whether an AbstractManoptSolverState s is of decorating type, and stores (encapsulates) a state in itself, by default in the field s.state.

Decorators indicate this by returning Val{true} for further dispatch.

The default is Val{false}, so by default a state is not decorated.

source
Manopt.decorate_state!Function
decorate_state!(s::AbstractManoptSolverState)

decorate the AbstractManoptSolverStates with specific decorators.

Optional arguments

optional arguments provide necessary details on the decorators.

  • debug=Array{Union{Symbol,DebugAction,String,Int},1}(): a set of symbols representing DebugActions, Strings used as dividers and a sub-sampling integer. These are passed as a DebugGroup within :Iteration to the DebugSolverState decorator dictionary. Only exception is :Stop that is passed to :Stop.
  • record=Array{Union{Symbol,RecordAction,Int},1}(): specify recordings by using Symbols or RecordActions directly. An integer can again be used for only recording every $i$th iteration.
  • return_state=false: indicate whether to wrap the options in a ReturnSolverState, indicating that the solver should return options and not (only) the minimizer.

other keywords are ignored.

See also

DebugSolverState, RecordSolverState, ReturnSolverState

source

A simple example is the

as well as DebugSolverState and RecordSolverState.

State actions

A state action is a struct for callback functions that can be attached within for example the just mentioned debug decorator or the record decorator.

Several state decorators or actions might store intermediate values like the (last) iterate to compute some change or the last gradient. In order to minimise the storage of these, there is a generic StoreStateAction that acts as generic common storage that can be shared among different actions.

Manopt.StoreStateActionType
StoreStateAction <: AbstractStateAction

internal storage for AbstractStateActions to store a tuple of fields from an AbstractManoptSolverStates

This functor possesses the usual interface of functions called during an iteration and acts on (p, s, k), where p is a AbstractManoptProblem, s is an AbstractManoptSolverState and k is the current iteration.

Fields

  • values: a dictionary to store interim values based on certain Symbols
  • keys: a Vector of Symbols to refer to fields of AbstractManoptSolverState
  • point_values: a NamedTuple of mutable values of points on a manifold to be stored in StoreStateAction. Manifold is later determined by AbstractManoptProblem passed to update_storage!.
  • point_init: a NamedTuple of boolean values indicating whether a point in point_values with matching key has been already initialized to a value. When it is false, it corresponds to a general value not being stored for the key present in the vector keys.
  • vector_values: a NamedTuple of mutable values of tangent vectors on a manifold to be stored in StoreStateAction. Manifold is later determined by AbstractManoptProblem passed to update_storage!. It is not specified at which point the vectors are tangent but for storage it should not matter.
  • vector_init: a NamedTuple of boolean values indicating whether a tangent vector in vector_values: with matching key has been already initialized to a value. When it is false, it corresponds to a general value not being stored for the key present in the vector keys.
  • once: whether to update the internal values only once per iteration
  • lastStored: last iterate, where this AbstractStateAction was called (to determine once)

To handle the general storage, use get_storage and has_storage with keys as Symbols. For the point storage use PointStorageKey. For tangent vector storage use VectorStorageKey. Point and tangent storage have been optimized to be more efficient.

Constructors

StoreStateAction(s::Vector{Symbol})

This is equivalent to providing s via the keyword store_fields, except that no manifold is necessary for the construction.

StoreStateAction(M)

Keyword arguments

  • store_fields (Symbol[])
  • store_points (Symbol[])
  • store_vectors (Symbol[])

as vectors of symbols each referring to fields of the state (lower case symbols) or semantic ones (upper case).

  • p_init (rand(M)) but making sure this is not a number but a (mutable) array
  • X_init (zero_vector(M, p_init))

are used to initialize the point and vector storage, change these if you use other types (than the default) for your points/vectors on M.

  • once (true) whether to update internal storage only once per iteration or on every update call
source
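A small sketch of the construction and the storage query interface; Euclidean comes from Manifolds.jl, and the use of PointStorageKey as key wrapper follows the description above:

```julia
using Manopt, Manifolds

M = Euclidean(2)
# efficient point storage for :Iterate, generic field storage for :p
a = StoreStateAction(M; store_points=[:Iterate], store_fields=[:p])

# nothing is stored before the action has been called on (problem, state, k),
# so the query should return false here
Manopt.has_storage(a, Manopt.PointStorageKey(:Iterate))
```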
Manopt.get_storageFunction
get_storage(a::AbstractStateAction, key::Symbol)

Return the internal value of the AbstractStateAction a at the Symbol key.

source
get_storage(a::AbstractStateAction, ::PointStorageKey{key}) where {key}

Return the internal value of the AbstractStateAction a at the Symbol key that represents a point.

source
get_storage(a::AbstractStateAction, ::VectorStorageKey{key}) where {key}

Return the internal value of the AbstractStateAction a at the Symbol key that represents a vector.

source
Manopt.has_storageFunction
has_storage(a::AbstractStateAction, key::Symbol)

Return whether the AbstractStateAction a has a value stored at the Symbol key.

source
has_storage(a::AbstractStateAction, ::PointStorageKey{key}) where {key}

Return whether the AbstractStateAction a has a point value stored at the Symbol key.

source
has_storage(a::AbstractStateAction, ::VectorStorageKey{key}) where {key}

Return whether the AbstractStateAction a has a tangent vector stored at the Symbol key.

source
Manopt.update_storage!Function
update_storage!(a::AbstractStateAction, amp::AbstractManoptProblem, s::AbstractManoptSolverState)

Update the AbstractStateAction a internal values to the ones given on the AbstractManoptSolverState s. Optimized using the information from amp

source
update_storage!(a::AbstractStateAction, d::Dict{Symbol,<:Any})

Update the AbstractStateAction a internal values to the ones given in the dictionary d. The values are merged, where the values from d are preferred.

source

as well as two internal functions

Abstract states

In a few cases it is useful to have a hierarchy of types. These are

For the sub problem state, there are two access functions

Manopt.get_sub_problemFunction
get_sub_problem(ams::AbstractSubProblemSolverState)

Access the sub problem of a solver state that involves a sub optimisation task. By default this returns ams.sub_problem.

source
Manopt.get_sub_stateFunction
get_sub_state(ams::AbstractSubProblemSolverState)

Access the sub state of a solver state that involves a sub optimisation task. By default this returns ams.sub_state.

source
+Solver State · Manopt.jl

Solver state

Given an AbstractManoptProblem, that is a certain optimisation task, the state specifies the solver to use. It contains the parameters of a solver and all fields necessary during the algorithm, for example the current iterate, a StoppingCriterion or a Stepsize.

Manopt.AbstractManoptSolverStateType
AbstractManoptSolverState

A general super type for all solver states.

Fields

The following fields are assumed to be default. If you use different ones, adapt the the access functions get_iterate and get_stopping_criterion accordingly

  • p::P: a point on the manifold $\mathcal M$storing the current iterate
  • stop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled
source
Manopt.get_stateFunction
get_state(s::AbstractManoptSolverState, recursive::Bool=true)

return the (one step) undecorated AbstractManoptSolverState of the (possibly) decorated s. As long as your decorated state stores the state within s.state and the dispatch_objective_decorator is set to Val{true}, the internal state are extracted automatically.

By default the state that is stored within a decorated state is assumed to be at s.state. Overwrite _get_state(s, ::Val{true}, recursive) to change this behaviour for your states` for both the recursive and the direct case.

If recursive is set to false, only the most outer decorator is taken away instead of all.

source
Manopt.get_countFunction
get_count(ams::AbstractManoptSolverState, ::Symbol)

Obtain the count for a certain countable size, for example the :Iterations. This function returns 0 if there was nothing to count

Available symbols from within the solver state

  • :Iterations is passed on to the stop field to obtain the iteration at which the solver stopped.
source
get_count(co::ManifoldCountObjective, s::Symbol, mode::Symbol=:None)

Get the number of counts for a certain symbol s.

Depending on the mode different results appear if the symbol does not exist in the dictionary

  • :None: (default) silent mode, returns -1 for non-existing entries
  • :warn: issues a warning if a field does not exist
  • :error: issues an error if a field does not exist
source

Since every subtype of an AbstractManoptSolverState directly relate to a solver, the concrete states are documented together with the corresponding solvers. This page documents the general features available for every state.

A first example is to obtain or set, the current iterate. This might be useful to continue investigation at the current iterate, or to set up a solver for a next experiment, respectively.

Manopt.get_iterateFunction
get_iterate(O::AbstractManoptSolverState)

return the (last stored) iterate within AbstractManoptSolverStates`. This should usually refer to a single point on the manifold the solver is working on

By default this also removes all decorators of the state beforehand.

source
get_iterate(agst::AbstractGradientSolverState)

return the iterate stored within gradient options. THe default returns agst.p.

source

An internal function working on the state and elements within a state is used to pass messages from (sub) activities of a state to the corresponding DebugMessages

Manopt.get_messageFunction
get_message(du::AbstractManoptSolverState)

get a message (String) from internal functors, in a summary. This should return any message a sub-step might have issued as well.

source

Furthermore, to access the stopping criterion use

Decorators for AbstractManoptSolverStates

A solver state can be decorated using the following trait and function to initialize

Manopt.dispatch_state_decoratorFunction
dispatch_state_decorator(s::AbstractManoptSolverState)

Indicate internally, whether an AbstractManoptSolverState s is of decorating type, and stores (encapsulates) a state in itself, by default in the field s.state.

Decorators indicate this by returning Val{true} for further dispatch.

The default is Val{false}, so by default a state is not decorated.

source
Manopt.decorate_state!Function
decorate_state!(s::AbstractManoptSolverState)

decorate the AbstractManoptSolverStates with specific decorators.

Optional arguments

optional arguments provide necessary details on the decorators.

  • debug=Array{Union{Symbol,DebugAction,String,Int},1}(): a set of symbols representing DebugActions, Strings used as dividers and a sub-sampling integer. These are passed as a DebugGroup within :Iteration to the DebugSolverState decorator dictionary. Only exception is :Stop that is passed to :Stop.
  • record=Array{Union{Symbol,RecordAction,Int},1}(): specify recordings by using Symbols or RecordActions directly. An integer can again be used for only recording every $i$th iteration.
  • return_state=false: indicate whether to wrap the options in a ReturnSolverState, indicating that the solver should return options and not (only) the minimizer.

other keywords are ignored.

See also

DebugSolverState, RecordSolverState, ReturnSolverState

source

A simple example is the

as well as DebugSolverState and RecordSolverState.

State actions

A state action is a struct for callback functions that can be attached within for example the just mentioned debug decorator or the record decorator.

Several state decorators or actions might store intermediate values like the (last) iterate to compute some change or the last gradient. In order to minimise the storage of these, there is a generic StoreStateAction that acts as generic common storage that can be shared among different actions.

Manopt.StoreStateActionType
StoreStateAction <: AbstractStateAction

internal storage for AbstractStateActions to store a tuple of fields from an AbstractManoptSolverStates

This functor possesses the usual interface of functions called during an iteration and acts on (p, s, k), where p is a AbstractManoptProblem, s is an AbstractManoptSolverState and k is the current iteration.

Fields

  • values: a dictionary to store interim values based on certain Symbols
  • keys: a Vector of Symbols to refer to fields of AbstractManoptSolverState
  • point_values: a NamedTuple of mutable values of points on a manifold to be stored in StoreStateAction. Manifold is later determined by AbstractManoptProblem passed to update_storage!.
  • point_init: a NamedTuple of boolean values indicating whether a point in point_values with matching key has been already initialized to a value. When it is false, it corresponds to a general value not being stored for the key present in the vector keys.
  • vector_values: a NamedTuple of mutable values of tangent vectors on a manifold to be stored in StoreStateAction. Manifold is later determined by AbstractManoptProblem passed to update_storage!. It is not specified at which point the vectors are tangent but for storage it should not matter.
  • vector_init: a NamedTuple of boolean values indicating whether a tangent vector in vector_values: with matching key has been already initialized to a value. When it is false, it corresponds to a general value not being stored for the key present in the vector keys.
  • once: whether to update the internal values only once per iteration
  • lastStored: last iterate, where this AbstractStateAction was called (to determine once)

To handle the general storage, use get_storage and has_storage with keys as Symbols. For the point storage use PointStorageKey. For tangent vector storage use VectorStorageKey. Point and tangent storage have been optimized to be more efficient.

Constructors

StoreStateAction(s::Vector{Symbol})

This is equivalent to providing s to the keyword store_fields, except that here no manifold is necessary for the construction.

StoreStateAction(M)

Keyword arguments

  • store_fields (Symbol[])
  • store_points (Symbol[])
  • store_vectors (Symbol[])

as vectors of symbols each referring to fields of the state (lower case symbols) or semantic ones (upper case).

  • p_init (rand(M)) but making sure this is not a number but a (mutable) array
  • X_init (zero_vector(M, p_init))

are used to initialize the point and vector storage, change these if you use other types (than the default) for your points/vectors on M.

  • once (true) whether to update internal storage only once per iteration or on every update call
source
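The storage semantics just described can be illustrated with a small self-contained sketch in plain Julia. It is independent of Manopt.jl; the struct and function names (ToyStoreAction and friends) are ours and only mirror the get_storage / has_storage / update_storage! behaviour documented above.

```julia
# A toy storage action: keeps values for the symbols in `keys`,
# mimicking the get_storage/has_storage/update_storage! semantics.
mutable struct ToyStoreAction
    values::Dict{Symbol,Any}
    keys::Vector{Symbol}
end
ToyStoreAction(keys::Vector{Symbol}) = ToyStoreAction(Dict{Symbol,Any}(), keys)

has_toy_storage(a::ToyStoreAction, key::Symbol) = haskey(a.values, key)
get_toy_storage(a::ToyStoreAction, key::Symbol) = a.values[key]

# Update from a dictionary; on a merge, the values from `d` are preferred.
function update_toy_storage!(a::ToyStoreAction, d::Dict{Symbol,<:Any})
    merge!(a.values, d)
    return a
end

a = ToyStoreAction([:Iterate, :Gradient])
has_toy_storage(a, :Iterate)            # false before the first update
update_toy_storage!(a, Dict(:Iterate => [1.0, 0.0]))
get_toy_storage(a, :Iterate)            # [1.0, 0.0]
```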
Manopt.get_storageFunction
get_storage(a::AbstractStateAction, key::Symbol)

Return the internal value of the AbstractStateAction a at the Symbol key.

source
get_storage(a::AbstractStateAction, ::PointStorageKey{key}) where {key}

Return the internal value of the AbstractStateAction a at the Symbol key that represents a point.

source
get_storage(a::AbstractStateAction, ::VectorStorageKey{key}) where {key}

Return the internal value of the AbstractStateAction a at the Symbol key that represents a vector.

source
Manopt.has_storageFunction
has_storage(a::AbstractStateAction, key::Symbol)

Return whether the AbstractStateAction a has a value stored at the Symbol key.

source
has_storage(a::AbstractStateAction, ::PointStorageKey{key}) where {key}

Return whether the AbstractStateAction a has a point value stored at the Symbol key.

source
has_storage(a::AbstractStateAction, ::VectorStorageKey{key}) where {key}

Return whether the AbstractStateAction a has a tangent vector value stored at the Symbol key.

source
Manopt.update_storage!Function
update_storage!(a::AbstractStateAction, amp::AbstractManoptProblem, s::AbstractManoptSolverState)

Update the internal values of the AbstractStateAction a to the ones given in the AbstractManoptSolverState s, optimized using the information from amp.

source
update_storage!(a::AbstractStateAction, d::Dict{Symbol,<:Any})

Update the AbstractStateAction a internal values to the ones given in the dictionary d. The values are merged, where the values from d are preferred.

source

as well as two internal functions

Abstract states

In a few cases it is useful to have a hierarchy of types. These are

For the sub problem state, there are two access functions

Manopt.get_sub_problemFunction
get_sub_problem(ams::AbstractSubProblemSolverState)

Access the sub problem of a solver state that involves a sub optimisation task. By default this returns ams.sub_problem.

source
Manopt.get_sub_stateFunction
get_sub_state(ams::AbstractSubProblemSolverState)

Access the sub state of a solver state that involves a sub optimisation task. By default this returns ams.sub_state.

source
Stepsize · Manopt.jl

Stepsize and line search

Most iterative algorithms determine a direction along which the algorithm shall proceed and determine a step size to find the next iterate. How advanced the step size computation can be implemented depends (among others) on the properties the corresponding problem provides.

Within Manopt.jl, the step size determination is implemented as a functor which is a subtype of Stepsize based on

Manopt.StepsizeType
Stepsize

An abstract type for the functors representing step sizes. These are callable structures. The naming scheme is TypeOfStepSize, for example ConstantStepsize.

Every Stepsize has to provide a constructor, and its function has to implement the interface (p,o,i), where an AbstractManoptProblem, an AbstractManoptSolverState, and the current number of iterations are the arguments, and which returns a number, namely the step size to use.

For most cases it is advisable to employ a ManifoldDefaultsFactory. Then the function creating the factory should either be called TypeOf or, if that is confusing or too generic, TypeOfLength.

See also

Linesearch

source

Usually, a constructor should take the manifold M as its first argument, for consistency, to allow general step size functors to be set up based on default values that might depend on the manifold currently under consideration.

Currently, the following step sizes are available

Manopt.AdaptiveWNGradientFunction
AdaptiveWNGradient(; kwargs...)
 AdaptiveWNGradient(M::AbstractManifold; kwargs...)

A stepsize based on the adaptive gradient method introduced by [GS23].

Given a positive threshold $\hat{c} ∈ ℕ$, a minimal bound $b_{\text{min}} > 0$, an initial $b_0 ≥ b_{\text{min}}$, and a gradient reduction factor threshold $α ∈ [0,1)$.

Set $c_0=0$ and use $ω_0 = \lVert \operatorname{grad} f(p_0) \rVert_{p_0}$.

For the first iterate use the initial step size $s_0 = \frac{1}{b_0}$.

Then, given the last gradient $X_{k-1} = \operatorname{grad} f(x_{k-1})$, and a previous $ω_{k-1}$, the values $(b_k, ω_k, c_k)$ are computed using $X_k = \operatorname{grad} f(p_k)$ and the following cases

If $\lVert X_k \rVert_{p_k} ≤ αω_{k-1}$, then let $\hat{b}_{k-1} ∈ [b_{\text{min}},b_{k-1}]$ and set

\[(b_k, ω_k, c_k) = \begin{cases} \bigl(\hat{b}_{k-1}, \lVert X_k \rVert_{p_k}, 0 \bigr) & \text{ if } c_{k-1}+1 = \hat{c}\\ \Bigl( b_{k-1} + \frac{\lVert X_k \rVert_{p_k}^2}{b_{k-1}}, ω_{k-1}, c_{k-1}+1 \Bigr) & \text{ if } c_{k-1}+1<\hat{c} \end{cases}\]

If $\lVert X_k \rVert_{p_k} > αω_{k-1}$, then set

\[(b_k, ω_k, c_k) = \Bigl( b_{k-1} + \frac{\lVert X_k \rVert_{p_k}^2}{b_{k-1}}, ω_{k-1}, 0 \Bigr)\]

and return the step size $s_k = \frac{1}{b_k}$.

Note that for $α=0$ this is the Riemannian variant of WNGRad.

Keyword arguments

  • adaptive=true: switches the gradient_reduction factor α to 0 if set to false.
  • alternate_bound = (bk, hat_c) -> min(gradient_bound == 0 ? 1.0 : gradient_bound, max(minimal_bound, bk / (3 * hat_c))): how to determine $\hat{b}_k$ as a function (bmin, bk, hat_c) -> hat_bk
  • count_threshold=4: an Integer for $\hat{c}$
  • gradient_reduction::R=adaptive ? 0.9 : 0.0: the gradient reduction factor threshold $α ∈ [0,1)$
  • gradient_bound=norm(M, p, X): the bound $b_k$.
  • minimal_bound=1e-4: the value $b_{\text{min}}$
  • p=rand(M): a point on the manifold $\mathcal M$, only used to define the gradient_bound
  • X=zero_vector(M, p): a tangent vector at the point $p$ on the manifold $\mathcal M$, only used to define the gradient_bound
source
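The case distinction above can be written out as a small self-contained sketch in plain Julia, using the Euclidean norm of the gradient in place of the Riemannian one; the function name wngrad_update and its keyword names are ours, not Manopt.jl's.

```julia
# One update of (b, ω, c) for the adaptive WNGrad rule described above;
# gn = ‖grad f(p_k)‖, α = gradient reduction threshold, ĉ = count threshold.
# `alternate` plays the role of the alternate_bound keyword.
function wngrad_update(b, ω, c, gn; α=0.9, ĉ=4, bmin=1e-4,
        alternate=(b, ĉ) -> max(bmin, b / (3ĉ)))
    if gn ≤ α * ω                      # gradient reduced sufficiently
        c + 1 == ĉ && return (alternate(b, ĉ), gn, 0)
        return (b + gn^2 / b, ω, c + 1)
    end
    return (b + gn^2 / b, ω, 0)        # no sufficient reduction: reset c
end

b, ω, c = 1.0, 2.0, 0
b, ω, c = wngrad_update(b, ω, c, 2.5)  # 2.5 > 0.9⋅2.0, so the last case applies
stepsize = 1 / b                       # s_k = 1 / b_k
```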
Manopt.ArmijoLinesearchFunction
ArmijoLinesearch(; kwargs...)
ArmijoLinesearch(M::AbstractManifold; kwargs...)

Specify a step size that performs an Armijo line search. Given a function $f:\mathcal M→ℝ$ and its Riemannian gradient $\operatorname{grad}f: \mathcal M→T\mathcal M$, the current point $p∈\mathcal M$ and a search direction $X∈T_{p}\mathcal M$:

Then the step size $s$ is found by reducing the initial step size $s$ until

\[f(\operatorname{retr}_p(sX)) ≤ f(p) - τs ⟨ X, \operatorname{grad}f(p) ⟩_p\]

is fulfilled for a sufficient decrease value $τ ∈ (0,1)$.

To be a bit more optimistic, if $s$ already fulfils this condition, a first search is done, increasing the given $s$ until the condition first fails.

Overall, we look for a step size that provides enough decrease; see [Bou23, p. 58] for more information.

Keyword arguments

  • additional_decrease_condition=(M, p) -> true: specify an additional criterion that has to be met to accept a step size in the decreasing loop
  • additional_increase_condition::IF=(M, p) -> true: specify an additional criterion that has to be met to accept a step size in the (initial) increase loop
  • candidate_point=allocate_result(M, rand): specify a point to be used as memory for the candidate points.
  • contraction_factor=0.95: how to update $s$ in the decrease step
  • initial_stepsize=1.0: specify an initial step size
  • initial_guess=armijo_initial_guess: compute the initial step size of a line search based on this function. The function required is (p,s,k,l) -> α and computes the initial step size $α$ based on an AbstractManoptProblem p, an AbstractManoptSolverState s, the current iterate k and a last step size l.
  • retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions
  • stop_when_stepsize_less=0.0: a safeguard, stop when the decreasing step is below this (nonnegative) bound.
  • stop_when_stepsize_exceeds=max_stepsize(M): a safeguard to not choose a too long step size when initially increasing
  • stop_increasing_at_step=100: stop the initial increasing loop after this amount of steps. Set to 0 to never increase in the beginning
  • stop_decreasing_at_step=1000: maximal number of Armijo decreases / tests to perform
  • sufficient_decrease=0.1: the sufficient decrease parameter $τ$

For the stop safeguards you can pass :Messages to a debug= to see @info messages when these happen.

Info

This function generates a ManifoldDefaultsFactory for ArmijoLinesearchStepsize. For default values that depend on the manifold, this factory postpones the construction until the manifold (from, for example, a corresponding AbstractManoptSolverState) is available.

source
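In the Euclidean case, where $\operatorname{retr}_p(sX) = p + sX$, the decrease loop amounts to classical backtracking. The following is a self-contained sketch in plain Julia, using the standard sufficient decrease test $f(p + sη) ≤ f(p) + τ s ⟨\operatorname{grad} f(p), η⟩$ for a descent direction $η$; the function name backtrack_armijo is ours, not Manopt.jl's.

```julia
using LinearAlgebra

# Backtracking: shrink s by the contraction factor until the Armijo
# condition holds for the descent direction η (here η = -grad f(p)).
function backtrack_armijo(f, grad_f, p; s=1.0, τ=0.1, contraction=0.95,
        max_decreases=1000)
    g = grad_f(p)
    η = -g
    slope = dot(g, η)                  # < 0 for a descent direction
    for _ in 1:max_decreases
        f(p + s * η) ≤ f(p) + τ * s * slope && return s
        s *= contraction
    end
    return s
end

f(p) = sum(abs2, p)                    # f(p) = ‖p‖², minimum at 0
grad_f(p) = 2p
s = backtrack_armijo(f, grad_f, [1.0, 1.0])
```

For this example the loop shrinks the unit step three times before the condition holds.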
Manopt.ConstantLengthFunction
ConstantLength(s; kwargs...)
ConstantLength(M::AbstractManifold, s; kwargs...)

Specify a Stepsize that is constant.

Input

  • M (optional)

s=min( injectivity_radius(M)/2, 1.0) : the length to use.

Keyword argument

  • type::Symbol=relative specify the type of constant step size.
    • :relative – scale the gradient tangent vector $X$ to $s*X$
    • :absolute – scale the gradient to an absolute step length $s$, that is $\frac{s}{\lVert X \rVert_{}}X$
Info

This function generates a ManifoldDefaultsFactory for ConstantStepsize. For default values that depend on the manifold, this factory postpones the construction until the manifold (from, for example, a corresponding AbstractManoptSolverState) is available.

source
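The difference between the two type= modes can be made concrete with a short sketch in plain Julia, using the Euclidean norm; the variable names are ours.

```julia
using LinearAlgebra

X = [3.0, 4.0]                       # a (gradient) tangent vector with ‖X‖ = 5
s = 0.5

step_relative = s .* X               # :relative – scale X by s
step_absolute = (s / norm(X)) .* X   # :absolute – step of length exactly s

norm(step_relative)                  # 2.5 = s‖X‖
norm(step_absolute)                  # 0.5 = s
```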
Manopt.DecreasingLengthFunction
DecreasingLength(; kwargs...)
DecreasingLength(M::AbstractManifold; kwargs...)

Specify a Stepsize that is decreasing as $s_k = \frac{(l - ak)f^k}{(k+s)^e}$ with the following

Keyword arguments

  • exponent=1.0: the exponent $e$ in the denominator
  • factor=1.0: the factor $f$ in the nominator
  • length=min(injectivity_radius(M)/2, 1.0): the initial step size $l$.
  • subtrahend=0.0: a value $a$ that is subtracted every iteration
  • shift=0.0: shift the denominator iterator $k$ by $s$.
  • type::Symbol=relative specify the type of step size.
    • :relative – scale the gradient tangent vector $X$ to $s_k*X$
    • :absolute – scale the gradient to an absolute step length $s_k$, that is $\frac{s_k}{\lVert X \rVert_{}}X$
Info

This function generates a ManifoldDefaultsFactory for DecreasingStepsize. For default values that depend on the manifold, this factory postpones the construction until the manifold (from, for example, a corresponding AbstractManoptSolverState) is available.

source
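The decreasing rule can be evaluated directly. A self-contained sketch in plain Julia with the keyword defaults from above; the function name is ours.

```julia
# s_k = (l - a k) f^k / (k + s)^e with the keyword defaults given above
decreasing_stepsize(k; l=1.0, a=0.0, f=1.0, s=0.0, e=1.0) =
    (l - a * k) * f^k / (k + s)^e

decreasing_stepsize(1)   # 1.0
decreasing_stepsize(4)   # 0.25, the classical 1/k decay for the defaults
```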
Manopt.NonmonotoneLinesearchFunction
NonmonotoneLinesearch(; kwargs...)
 NonmonotoneLinesearch(M::AbstractManifold; kwargs...)

A functor representing a nonmonotone line search using the Barzilai-Borwein step size [IP17].

This method first computes


\[y_{k} = \operatorname{grad}f(p_{k}) - \mathcal T_{p_k←p_{k-1}}\operatorname{grad}f(p_{k-1})\]

and

\[s_{k} = - α_{k-1} ⋅ \mathcal T_{p_k←p_{k-1}}\operatorname{grad}f(p_{k-1}),\]

where $α_{k-1}$ is the step size computed in the last iteration and $\mathcal T_{⋅←⋅}$ is a vector transport. Then the Barzilai—Borwein step size is

\[α_k^{\text{BB}} = \begin{cases} \min(α_{\text{max}}, \max(α_{\text{min}}, τ_{k})), & \text{if} ⟨s_{k}, y_{k}⟩_{p_k} > 0,\\ α_{\text{max}}, & \text{else,} \end{cases}\]

where

\[τ_{k} = \frac{⟨s_{k}, s_{k}⟩_{p_k}}{⟨s_{k}, y_{k}⟩_{p_k}},\]

if the direct strategy is chosen, or

\[τ_{k} = \frac{⟨s_{k}, y_{k}⟩_{p_k}}{⟨y_{k}, y_{k}⟩_{p_k}},\]

in case of the inverse strategy or an alternation between the two in cases for the alternating strategy. Then find the smallest $h = 0, 1, 2, …$ such that

\[f(\operatorname{retr}_{p_k}(- σ^h α_k^{\text{BB}} \operatorname{grad}f(p_k))) ≤ \max_{1 ≤ j ≤ \min(k+1,m)} f(p_{k+1-j}) - γ σ^h α_k^{\text{BB}} ⟨\operatorname{grad}f(p_k), \operatorname{grad}f(p_k)⟩_{p_k},\]

where $σ ∈ (0,1)$ is a step length reduction factor, $m$ is the number of iterations after which the function value has to be lower than the current one and $γ ∈ (0,1)$ is the sufficient decrease parameter. Finally the step size is computed as

\[α_k = σ^h α_k^{\text{BB}}.\]

Keyword arguments

  • p=allocate_result(M, rand): a point on the manifold $\mathcal M$ to store an interim result
  • initial_stepsize=1.0: the step size to start the search with
  • memory_size=10: number of iterations after which the cost value needs to be lower than the current one
  • bb_min_stepsize=1e-3: lower bound for the Barzilai-Borwein step size greater than zero
  • bb_max_stepsize=1e3: upper bound for the Barzilai-Borwein step size greater than min_stepsize
  • retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions
  • strategy=direct: defines if the new step size is computed using the :direct, :indirect or :alternating strategy
  • storage=StoreStateAction(M; store_fields=[:Iterate, :Gradient]): increase efficiency by using a StoreStateAction for :Iterate and :Gradient.
  • stepsize_reduction=0.5: step size reduction factor contained in the interval $(0,1)$
  • sufficient_decrease=1e-4: sufficient decrease parameter contained in the interval $(0,1)$
  • stop_when_stepsize_less=0.0: smallest stepsize when to stop (the last one before is taken)
  • stop_when_stepsize_exceeds=max_stepsize(M, p)): largest stepsize when to stop to avoid leaving the injectivity radius
  • stop_increasing_at_step=100: last step to increase the stepsize (phase 1),
  • stop_decreasing_at_step=1000: last step size to decrease the stepsize (phase 2),
source
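The safeguarded Barzilai–Borwein step size can be sketched in the Euclidean setting, where the vector transport is the identity. The following plain Julia snippet is self-contained; the function name bb_stepsize is ours.

```julia
using LinearAlgebra

# Compute the safeguarded BB step from s_k and y_k as defined above.
function bb_stepsize(sk, yk; αmin=1e-3, αmax=1e3, strategy=:direct)
    sy = dot(sk, yk)
    sy > 0 || return αmax                       # fallback: ⟨s,y⟩ ≤ 0
    τ = strategy === :direct ? dot(sk, sk) / sy : sy / dot(yk, yk)
    return min(αmax, max(αmin, τ))              # clamp to [αmin, αmax]
end

sk = [0.5, 0.5]
yk = [1.0, 0.0]
bb_stepsize(sk, yk)                       # direct: ⟨s,s⟩/⟨s,y⟩ = 1.0
bb_stepsize(sk, yk; strategy=:inverse)    # inverse: ⟨s,y⟩/⟨y,y⟩ = 0.5
```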
Manopt.PolyakFunction
Polyak(; kwargs...)
Polyak(M::AbstractManifold; kwargs...)

Compute a step size according to a method proposed by Polyak, cf. the Dynamic step size discussed in Section 3.2 of [Ber15]. This has been generalised here to both the Riemannian case and to approximate the minimum cost value.

Let $f_{\text{best}}$ be the best cost value seen until now during some iterative optimisation algorithm and let $γ_k$ be a sequence of numbers that is square summable, but not summable.

Then the step size computed here reads

\[s_k = \frac{f(p^{(k)}) - f_{\text{best}} + γ_k}{\lVert ∂f(p^{(k)}) \rVert_{}},\]

where $∂f$ denotes a nonzero-subgradient of $f$ at the current iterate $p^{(k)}$.

Constructor

Polyak(; γ = k -> 1/k, initial_cost_estimate=0.0)

initialize the Polyak stepsize to a certain sequence and an initial estimate of $f_{\text{best}}$.

Info

This function generates a ManifoldDefaultsFactory for PolyakStepsize. For default values that depend on the manifold, this factory postpones the construction until the manifold (from, for example, a corresponding AbstractManoptSolverState) is available.

source
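The Polyak rule is a one-line computation. A self-contained sketch in plain Julia with the Euclidean norm; the function name is ours.

```julia
using LinearAlgebra

# s_k = (f(p) - f_best + γ_k) / ‖∂f(p)‖ for a nonzero subgradient ∂f(p)
polyak_stepsize(fp, fbest, γk, subgrad) = (fp - fbest + γk) / norm(subgrad)

# e.g. with f(p) = 2.0, best value seen 1.0, and γ_k = 1/k at k = 4:
s = polyak_stepsize(2.0, 1.0, 1 / 4, [2.0, 1.0, 2.0])
```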
Manopt.WolfePowellLinesearchFunction
WolfePowellLinesearch(; kwargs...)
 WolfePowellLinesearch(M::AbstractManifold; kwargs...)

Perform a line search to fulfill both the Armijo-Goldstein conditions

\[f\bigl( \operatorname{retr}_{p}(αX) \bigr) ≤ f(p) + c_1 α_k ⟨\operatorname{grad} f(p), X⟩_{p}\]

as well as the Wolfe conditions

\[\frac{\mathrm{d}}{\mathrm{d}t} f\bigl(\operatorname{retr}_{p}(tX)\bigr) \Big\vert_{t=α} ≥ c_2 \frac{\mathrm{d}}{\mathrm{d}t} f\bigl(\operatorname{retr}_{p}(tX)\bigr)\Big\vert_{t=0}.\]

for some given sufficient decrease coefficient $c_1$ and some sufficient curvature condition coefficient $c_2$.

This is adopted from [NW06, Section 3.1]

Keyword arguments

  • sufficient_decrease=10^(-4)
  • sufficient_curvature=0.999
  • p::P: a point on the manifold $\mathcal M$ as temporary storage for candidates
  • X::T: a tangent vector at the point $p$ on the manifold $\mathcal M$ as type of memory allocated for the candidates direction and tangent
  • max_stepsize=max_stepsize(M, p): largest stepsize allowed here.
  • retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions
  • stop_when_stepsize_less=0.0: smallest stepsize when to stop (the last one before is taken)
  • vector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport $\mathcal T_{⋅←⋅}$ to use, see the section on vector transports
source
Manopt.WolfePowellBinaryLinesearchFunction
WolfePowellBinaryLinesearch(; kwargs...)
 WolfePowellBinaryLinesearch(M::AbstractManifold; kwargs...)

Perform a line search to fulfill both the Armijo-Goldstein conditions for some given sufficient decrease coefficient $c_1$ and some sufficient curvature condition coefficient $c_2$. Compared to WolfePowellLinesearch, which tries a simpler method, this line search performs the following algorithm

With

\[A(t) = f(p_+) ≤ f(p) + c_1 t ⟨\operatorname{grad}f(p), X⟩_{p} \quad\text{ and }\quad W(t) = ⟨\operatorname{grad}f(p_+), \mathcal T_{p_+←p}X⟩_{p_+} ≥ c_2 ⟨X, \operatorname{grad}f(p)⟩_p,\]

where $p_+ =\operatorname{retr}_p(tX)$ is the current trial point and $\mathcal T_{⋅←⋅}$ denotes a vector transport. Then the following algorithm, similar to Algorithm 7 from [Hua14], is performed:

  1. set $α=0$, $β=∞$ and $t=1$.
  2. While either $A(t)$ does not hold or $W(t)$ does not hold do steps 3-5.
  3. If $A(t)$ fails, set $β=t$.
  4. If $A(t)$ holds but $W(t)$ fails, set $α=t$.
  5. If $β<∞$ set $t=\frac{α+β}{2}$, otherwise set $t=2α$.

Keyword arguments

source
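Steps 1–5 translate into a short bisection over the two predicates $A$ and $W$. The following is a self-contained sketch in plain Julia on a one-dimensional Euclidean example; the name wolfe_binary is ours, the predicates use the classical sufficient decrease test $f(p + tX) ≤ f(p) + c_1 t f'(p)X$, and a maximal-iteration guard is added.

```julia
# Binary Wolfe-Powell search: A(t) is the sufficient decrease test,
# W(t) the curvature test; returns the accepted step t.
function wolfe_binary(A, W; maxiter=100)
    α, β, t = 0.0, Inf, 1.0
    for _ in 1:maxiter
        A(t) && W(t) && return t
        if !A(t)
            β = t          # step too long: shrink the bracket from above
        else
            α = t          # decrease holds, curvature fails: grow from below
        end
        t = isfinite(β) ? (α + β) / 2 : 2α
    end
    return t
end

# f(x) = x² at p = 1 with direction X = -f'(1) = -2, c₁ = 1e-4, c₂ = 0.9:
f(x) = x^2; df(x) = 2x
p, X, c₁, c₂ = 1.0, -2.0, 1e-4, 0.9
A(t) = f(p + t * X) ≤ f(p) + c₁ * t * df(p) * X
W(t) = df(p + t * X) * X ≥ c₂ * df(p) * X
wolfe_binary(A, W)     # 0.5
```

Here the full step $t=1$ overshoots, so the bracket shrinks once and $t=0.5$ (the exact minimizer of this quadratic along the direction) is accepted.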

Some step sizes use the max_stepsize function as a rough upper estimate for the trust region size. By default it is equal to the injectivity radius of the exponential map, but in some cases a different value is used. For the FixedRankMatrices manifold an estimate from Manopt is used. The tangent bundle with the Sasaki metric has zero injectivity radius, so the maximum stepsize of the underlying manifold is used instead. Hyperrectangle also has zero injectivity radius, and an estimate based on the maximum of dimensions along each index is used instead. For manifolds with corners, however, a line search capable of handling break points along the projected search direction should be used, and such algorithms do not call max_stepsize.

Internally, these step size functions create a ManifoldDefaultsFactory. They rely on the following functions:

Manopt.armijo_initial_guessMethod
armijo_initial_guess(mp::AbstractManoptProblem, s::AbstractManoptSolverState, k, l)

Return an initial guess for the ArmijoLinesearchStepsize.

The default provided is based on the max_stepsize(M), which we denote by $m$. Let further $X$ be the current descent direction with length $n=\lVert X \rVert_{p}$. Then this (default) initial guess returns

  • $l$ if $m$ is not finite
  • $\min(l, \frac{m}{n})$ otherwise

This ensures that the initial guess does not yield too large (initial) steps.

source
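The default guess is thus a one-liner; a self-contained sketch in plain Julia with names of our choosing.

```julia
# l = last step size, m = max_stepsize(M), n = ‖X‖ of the descent direction
armijo_guess(l, m, n) = isfinite(m) ? min(l, m / n) : l

armijo_guess(1.0, Inf, 2.0)   # 1.0  (no finite maximal step size)
armijo_guess(1.0, 1.0, 4.0)   # 0.25 (cap the step to stay within m)
```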
Manopt.get_last_stepsizeMethod
get_last_stepsize(amp::AbstractManoptProblem, ams::AbstractManoptSolverState, vars...)

return the last computed stepsize stored within AbstractManoptSolverState ams when solving the AbstractManoptProblem amp.

This method takes into account that ams might be decorated. In case this returns NaN, a concrete call to the stored stepsize is performed. For this, usually, the first of the vars... should be the current iterate.

source
Manopt.get_last_stepsizeMethod
get_last_stepsize(::Stepsize, vars...)

return the last computed stepsize from within the stepsize. If no last step size is stored, this returns NaN.

source
Manopt.linesearch_backtrackMethod
(s, msg) = linesearch_backtrack(M, F, p, X, s, decrease, contract η = -X, f0 = f(p); kwargs...)
-(s, msg) = linesearch_backtrack!(M, q, F, p, X, s, decrease, contract η = -X, f0 = f(p); kwargs...)

perform a line search

  • on manifold M
  • for the cost function f,
  • at the current point p
  • with current gradient provided in X
  • an initial stepsize s
  • a sufficient decrease
  • a contraction factor $σ$
  • a search direction $η = -X$
  • an offset, $f_0 = F(x)$

Keyword arguments

  • retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions
  • stop_when_stepsize_less=0.0: to avoid numerical underflow
  • stop_when_stepsize_exceeds=max_stepsize(M, p) / norm(M, p, η)) to avoid leaving the injectivity radius on a manifold
  • stop_increasing_at_step=100: stop the initial increase of step size after these many steps
  • stop_decreasing_at_step=1000`: stop the decreasing search after these many steps
  • additional_increase_condition=(M,p) -> true: impose an additional condition for an increased step size to be accepted
  • additional_decrease_condition=(M,p) -> true: impose an additional condition for an decreased step size to be accepted

These keywords are used as safeguards, where only the max stepsize is a very manifold specific one.

Return value

A stepsize s and a message msg (in case any of the 4 criteria hit)

source
Manopt.max_stepsizeMethod
max_stepsize(M::AbstractManifold, p)
-max_stepsize(M::AbstractManifold)

Get the maximum stepsize (at point p) on manifold M. It should be used to limit the distance an algorithm is trying to move in a single step.

By default, this returns injectivity_radius(M), if this exists. If this is not available on the the method returns Inf.

source
Manopt.AdaptiveWNGradientStepsizeType
AdaptiveWNGradientStepsize{I<:Integer,R<:Real,F<:Function} <: Stepsize

A functor problem, state, k, X) -> s to an adaptive gradient method introduced by [GrapigliaStella:2023](@cite). See [AdaptiveWNGradient`](@ref) for the mathematical details.

Fields

  • count_threshold::I: an Integer for $\hat{c}$
  • minimal_bound::R: the value for $b_{\text{min}}$
  • alternate_bound::F: how to determine $\hat{k}_k$ as a function of (bmin, bk, hat_c) -> hat_bk
  • gradient_reduction::R: the gradient reduction factor threshold $α ∈ [0,1)$
  • gradient_bound::R: the bound $b_k$.
  • weight::R: $ω_k$ initialised to $ω_0 =$norm(M, p, X) if this is not zero, 1.0 otherwise.
  • count::I: $c_k$, initialised to $c_0 = 0$.

Constructor

AdaptiveWNGrad(M::AbstractManifold; kwargs...)

Keyword arguments

  • adaptive=true: switches the gradient_reductionα(iftrue) to0`.
  • alternate_bound = (bk, hat_c) -> min(gradient_bound == 0 ? 1.0 : gradient_bound, max(minimal_bound, bk / (3 * hat_c))
  • count_threshold=4
  • gradient_reduction::R=adaptive ? 0.9 : 0.0
  • gradient_bound=norm(M, p, X)
  • minimal_bound=1e-4
  • p=rand(M): a point on the manifold $\mathcal M$only used to define the gradient_bound
  • X=zero_vector(M, p): a tangent vector at the point $p$ on the manifold $\mathcal M$only used to define the gradient_bound
source
Manopt.ArmijoLinesearchStepsizeType
ArmijoLinesearchStepsize <: Linesearch

A functor problem, state, k, X) -> s to provide an Armijo line search to compute step size, based on the search directionX`

Fields

  • candidate_point: to store an interim result
  • initial_stepsize: and initial step size
  • retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions
  • contraction_factor: exponent for line search reduction
  • sufficient_decrease: gain within Armijo's rule
  • last_stepsize: the last step size to start the search with
  • initial_guess: a function to provide an initial guess for the step size, it maps (m,p,k,l) -> α based on a AbstractManoptProblem p, AbstractManoptSolverState s, the current iterate k and a last step size l. It returns the initial guess α.
  • additional_decrease_condition: specify a condition a new point has to additionally fulfill. The default accepts all points.
  • additional_increase_condition: specify a condtion that additionally to checking a valid increase has to be fulfilled. The default accepts all points.
  • stop_when_stepsize_less: smallest stepsize when to stop (the last one before is taken)
  • stop_when_stepsize_exceeds: largest stepsize when to stop.
  • stop_increasing_at_step: last step to increase the stepsize (phase 1),
  • stop_decreasing_at_step: last step size to decrease the stepsize (phase 2),

Pass :Messages to a debug= to see @infos when these happen.

Constructor

ArmijoLinesearchStepsize(M::AbstractManifold; kwarg...)

with the fields keyword arguments and the retraction is set to the default retraction on M.

Keyword arguments

  • candidate_point=(allocate_result(M, rand))
  • initial_stepsize=1.0
  • retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions
  • contraction_factor=0.95
  • sufficient_decrease=0.1
  • last_stepsize=initialstepsize
  • initial_guess=armijo_initial_guess– (p,s,i,l) -> l
  • stop_when_stepsize_less=0.0: stop when the stepsize decreased below this version.
  • stop_when_stepsize_exceeds=[max_step](@ref)(M)`: provide an absolute maximal step size.
  • stop_increasing_at_step=100: for the initial increase test, stop after these many steps
  • stop_decreasing_at_step=1000: in the backtrack, stop after these many steps
source
Manopt.ConstantStepsizeType
ConstantStepsize <: Stepsize

A functor (problem, state, ...) -> s to provide a constant step size s.

Fields

  • length: constant value for the step size
  • type: a symbol that indicates whether the stepsize is relatively (:relative), with respect to the gradient norm, or absolutely (:absolute) constant.

Constructors

ConstantStepsize(s::Real, t::Symbol=:relative)

initialize the stepsize to a constant s of type t.

ConstantStepsize(
\[W(t) = ⟨\operatorname{grad}f(p_+), \mathcal T_{p_+←p}X⟩_{p_+} ≥ c_2 ⟨X, \operatorname{grad}f(p)⟩_p,\]

where $p_+ =\operatorname{retr}_p(tX)$ is the current trial point and $\mathcal T_{⋅←⋅}$ denotes a vector transport. Then the following algorithm is performed, similar to Algorithm 7 from [Hua14]:

  1. set $α=0$, $β=∞$ and $t=1$.
  2. While either $A(t)$ does not hold or $W(t)$ does not hold do steps 3-5.
  3. If $A(t)$ fails, set $β=t$.
  4. If $A(t)$ holds but $W(t)$ fails, set $α=t$.
  5. If $β<∞$ set $t=\frac{α+β}{2}$, otherwise set $t=2α$.

Keyword arguments

source
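To illustrate the bisection scheme above, here is a minimal Euclidean sketch in Python, using the straight line $p + tX$ as retraction and the identity as vector transport; the names `f` and `grad` are placeholders, and this is an illustration rather than Manopt's implementation:

```python
import math

def wolfe_bisection(f, grad, p, X, c1=1e-4, c2=0.999, max_iter=50):
    """Bisection search for a step t fulfilling the Wolfe conditions A(t)
    (sufficient decrease) and W(t) (curvature), following the scheme of
    Algorithm 7 in [Hua14], written for the Euclidean line p + t*X."""
    fp = f(p)
    slope = X * grad(p)  # ⟨X, grad f(p)⟩, negative for a descent direction

    def A(t):  # sufficient decrease (Armijo) condition
        return f(p + t * X) <= fp + c1 * t * slope

    def W(t):  # curvature condition
        return grad(p + t * X) * X >= c2 * slope

    alpha, beta, t = 0.0, math.inf, 1.0
    for _ in range(max_iter):
        if A(t) and W(t):
            break
        if not A(t):  # step too long: shrink the interval from above
            beta = t
        else:         # A(t) holds but W(t) fails: grow from below
            alpha = t
        t = (alpha + beta) / 2 if beta < math.inf else 2 * alpha
    return t
```

For $f(x) = x^2$ with $p = 1$ and $X = -1$ the very first trial $t = 1$ already fulfils both conditions.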

Some step sizes use the max_stepsize function as a rough upper estimate for the trust region size. It is by default equal to the injectivity radius of the exponential map, but in some cases a different value is used. For the FixedRankMatrices manifold an estimate from Manopt is used. The tangent bundle with the Sasaki metric has injectivity radius 0, so the maximum stepsize of the underlying manifold is used instead. Hyperrectangle also has injectivity radius 0, and an estimate based on the maximum of the dimensions along each index is used instead. For manifolds with corners, however, a line search capable of handling break points along the projected search direction should be used, and such algorithms do not call max_stepsize.

Internally, these step size functions create a ManifoldDefaultsFactory, which in turn uses the following:

Manopt.armijo_initial_guessMethod
armijo_initial_guess(mp::AbstractManoptProblem, s::AbstractManoptSolverState, k, l)

Return an initial guess for the ArmijoLinesearchStepsize, based on the AbstractManoptProblem mp, the AbstractManoptSolverState s, the current iteration number k, and the last step size l.

The default provided is based on max_stepsize(M), which we denote by $m$. Let further $X$ be the current descent direction and $n=\lVert X \rVert_{p}$ its norm. Then this (default) initial guess returns

  • $l$ if $m$ is not finite
  • $\min(l, \frac{m}{n})$ otherwise

This ensures that the initial guess does not yield too large initial steps.

source
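The two cases above amount to the following small helper, a plain-Python sketch of the rule where the maximal step size, the last step size and the norm of $X$ are passed in directly rather than read from problem and state:

```python
import math

def initial_guess(max_step, last_step, X_norm):
    """Return the last step size l, capped by m / n when the maximal
    step size m is finite, so the first trial stays within reach."""
    if not math.isfinite(max_step):
        return last_step
    return min(last_step, max_step / X_norm)
```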
Manopt.get_last_stepsizeMethod
get_last_stepsize(amp::AbstractManoptProblem, ams::AbstractManoptSolverState, vars...)

return the last computed stepsize stored within AbstractManoptSolverState ams when solving the AbstractManoptProblem amp.

This method takes into account that ams might be decorated. In case this returns NaN, a concrete call to the stored stepsize is performed. For this, usually, the first of the vars... should be the current iterate.

source
Manopt.get_last_stepsizeMethod
get_last_stepsize(::Stepsize, vars...)

return the last computed stepsize from within the stepsize. If no last step size is stored, this returns NaN.

source
Manopt.linesearch_backtrackMethod
(s, msg) = linesearch_backtrack(M, F, p, X, s, decrease, contract, η = -X, f0 = f(p); kwargs...)
(s, msg) = linesearch_backtrack!(M, q, F, p, X, s, decrease, contract, η = -X, f0 = f(p); kwargs...)

perform a line search

  • on manifold M
  • for the cost function f,
  • at the current point p
  • with current gradient provided in X
  • an initial stepsize s
  • a sufficient decrease
  • a contraction factor $σ$
  • a search direction $η = -X$
  • an offset, $f_0 = f(p)$

Keyword arguments

  • retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions
  • stop_when_stepsize_less=0.0: to avoid numerical underflow
  • stop_when_stepsize_exceeds=max_stepsize(M, p) / norm(M, p, η): to avoid leaving the injectivity radius on a manifold
  • stop_increasing_at_step=100: stop the initial increase of step size after these many steps
  • stop_decreasing_at_step=1000: stop the decreasing search after these many steps
  • additional_increase_condition=(M,p) -> true: impose an additional condition for an increased step size to be accepted
  • additional_decrease_condition=(M,p) -> true: impose an additional condition for a decreased step size to be accepted

These keywords act as safeguards; only the maximal step size is a truly manifold-specific one.

Return value

A stepsize s and a message msg (in case any of the four safeguard criteria was triggered)

source
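The decreasing (backtracking) phase of this search can be sketched as follows in Euclidean Python; the increase phase and the additional conditions are omitted, and `slope` stands for $⟨η, \operatorname{grad} f(p)⟩$:

```python
def backtrack(f, p, eta, s, f0, decrease, contract, slope,
              stop_when_stepsize_less=0.0, stop_decreasing_at_step=1000):
    """Contract the step size s until the sufficient decrease condition
    f(p + s*eta) <= f0 + decrease * s * slope holds, or a safeguard hits."""
    msg = ""
    for _ in range(stop_decreasing_at_step):
        if f(p + s * eta) <= f0 + decrease * s * slope:
            break
        s *= contract
        if s < stop_when_stepsize_less:
            msg = "stepsize fell below stop_when_stepsize_less"
            break
    return s, msg
```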
Manopt.max_stepsizeMethod
max_stepsize(M::AbstractManifold, p)
max_stepsize(M::AbstractManifold)

Get the maximum stepsize (at point p) on manifold M. It should be used to limit the distance an algorithm is trying to move in a single step.

By default, this returns injectivity_radius(M), if this exists. If this is not available on the manifold, the method returns Inf.

source
Manopt.AdaptiveWNGradientStepsizeType
AdaptiveWNGradientStepsize{I<:Integer,R<:Real,F<:Function} <: Stepsize

A functor (problem, state, k, X) -> s for an adaptive gradient method introduced by [GS23]. See AdaptiveWNGradient for the mathematical details.

Fields

  • count_threshold::I: an Integer for $\hat{c}$
  • minimal_bound::R: the value for $b_{\text{min}}$
  • alternate_bound::F: how to determine $\hat{k}_k$ as a function of (bmin, bk, hat_c) -> hat_bk
  • gradient_reduction::R: the gradient reduction factor threshold $α ∈ [0,1)$
  • gradient_bound::R: the bound $b_k$.
  • weight::R: $ω_k$, initialised to $ω_0 = $ norm(M, p, X) if this is not zero, and to 1.0 otherwise.
  • count::I: $c_k$, initialised to $c_0 = 0$.

Constructor

AdaptiveWNGrad(M::AbstractManifold; kwargs...)

Keyword arguments

  • adaptive=true: if set to false, the gradient reduction factor α (gradient_reduction) is switched to 0.
  • alternate_bound = (bk, hat_c) -> min(gradient_bound == 0 ? 1.0 : gradient_bound, max(minimal_bound, bk / (3 * hat_c)))
  • count_threshold=4
  • gradient_reduction::R=adaptive ? 0.9 : 0.0
  • gradient_bound=norm(M, p, X)
  • minimal_bound=1e-4
  • p=rand(M): a point on the manifold $\mathcal M$, only used to define the gradient_bound
  • X=zero_vector(M, p): a tangent vector at the point $p$ on the manifold $\mathcal M$, only used to define the gradient_bound
source
Manopt.ArmijoLinesearchStepsizeType
ArmijoLinesearchStepsize <: Linesearch

A functor (problem, state, k, X) -> s to provide an Armijo line search to compute a step size, based on the search direction X.

Fields

  • candidate_point: to store an interim result
  • initial_stepsize: an initial step size
  • retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions
  • contraction_factor: exponent for line search reduction
  • sufficient_decrease: gain within Armijo's rule
  • last_stepsize: the last step size to start the search with
  • initial_guess: a function to provide an initial guess for the step size. It maps (problem, state, k, l) -> α, based on an AbstractManoptProblem, an AbstractManoptSolverState, the current iteration number k, and the last step size l, and returns the initial guess α.
  • additional_decrease_condition: specify a condition a new point has to additionally fulfill. The default accepts all points.
  • additional_increase_condition: specify a condition that has to be fulfilled, in addition to a valid increase, for an increased step size to be accepted. The default accepts all points.
  • stop_when_stepsize_less: smallest stepsize when to stop (the last one before is taken)
  • stop_when_stepsize_exceeds: largest stepsize when to stop.
  • stop_increasing_at_step: last step to increase the stepsize (phase 1),
  • stop_decreasing_at_step: last step size to decrease the stepsize (phase 2),

Pass :Messages to a debug= to see @infos when these happen.

Constructor

ArmijoLinesearchStepsize(M::AbstractManifold; kwargs...)

where the fields above can be set as keyword arguments, and the retraction is set to the default retraction on M.

Keyword arguments

  • candidate_point=allocate_result(M, rand)
  • initial_stepsize=1.0
  • retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions
  • contraction_factor=0.95
  • sufficient_decrease=0.1
  • last_stepsize=initial_stepsize
  • initial_guess=armijo_initial_guess: a function (problem, state, k, l) -> α to compute the initial guess
  • stop_when_stepsize_less=0.0: stop when the stepsize decreases below this value.
  • stop_when_stepsize_exceeds=max_stepsize(M): provide an absolute maximal step size.
  • stop_increasing_at_step=100: for the initial increase test, stop after these many steps
  • stop_decreasing_at_step=1000: in the backtrack, stop after these many steps
source
Manopt.ConstantStepsizeType
ConstantStepsize <: Stepsize

A functor (problem, state, ...) -> s to provide a constant step size s.

Fields

  • length: constant value for the step size
  • type: a symbol that indicates whether the stepsize is relatively (:relative), with respect to the gradient norm, or absolutely (:absolute) constant.

Constructors

ConstantStepsize(s::Real, t::Symbol=:relative)

initialize the stepsize to a constant s of type t.

ConstantStepsize(
     M::AbstractManifold=DefaultManifold(),
     s=min(1.0, injectivity_radius(M)/2);
     type::Symbol=:relative
)
source
Manopt.DecreasingStepsizeType
DecreasingStepsize()

A functor (problem, state, ...) -> s to provide a decreasing step size s.

Fields

  • exponent: a value $e$ the (shifted) iteration number in the denominator is raised to
  • factor: a value $f$ to multiply the initial step size with every iteration
  • length: the initial step size $l$.
  • subtrahend: a value $a$ that is subtracted every iteration
  • shift: shift the denominator iterator $i$ by $s$.
  • type: a symbol that indicates whether the stepsize is relatively (:relative), with respect to the gradient norm, or absolutely (:absolute) constant.

In total, the complete formula for the $i$th iterate reads

\[s_i = \frac{(l - i a)f^i}{(i+s)^e}\]

and hence the default simplifies to just $s_i = \frac{l}{i}$

Constructor

DecreasingStepsize(M::AbstractManifold;
     length=min(injectivity_radius(M)/2, 1.0),
     factor=1.0,
     subtrahend=0.0,
     exponent=1.0,
     shift=0.0,
     type=:relative,
)

initializes all fields, where none of them is mandatory; the length is set to half the injectivity radius, or to $1$ if the injectivity radius is infinite.

source
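The formula above evaluates, for example, as follows; this is a direct Python transcription whose argument names mirror the fields:

```python
def decreasing_stepsize(i, length=1.0, factor=1.0, subtrahend=0.0,
                        exponent=1.0, shift=0.0):
    """Evaluate s_i = ((l - i*a) * f^i) / (i + s)^e; with all defaults
    this reduces to l / i."""
    return ((length - i * subtrahend) * factor**i) / (i + shift) ** exponent
```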
Manopt.LinesearchType
Linesearch <: Stepsize

An abstract functor to represent line search type step size determinations, see Stepsize for details. One example is the ArmijoLinesearchStepsize functor.

Compared to simple step sizes, the line search functors provide an interface of the form (p,o,i,X) -> s with an additional (but optional) fourth parameter to provide a search direction; this should default to something reasonable, most prominently the negative gradient.

source
Manopt.NonmonotoneLinesearchStepsizeType
NonmonotoneLinesearchStepsize{P,T,R<:Real} <: Linesearch

A functor representing a nonmonotone line search using the Barzilai-Borwein step size [IP17].

Fields

  • initial_stepsize=1.0: the step size to start the search with
  • memory_size=10: number of iterations after which the cost value needs to be lower than the current one
  • bb_min_stepsize=1e-3: lower bound for the Barzilai-Borwein step size greater than zero
  • bb_max_stepsize=1e3: upper bound for the Barzilai-Borwein step size greater than min_stepsize
  • retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions
  • strategy=direct: defines if the new step size is computed using the :direct, :indirect or :alternating strategy
  • storage: (for :Iterate and :Gradient) a StoreStateAction
  • stepsize_reduction: step size reduction factor contained in the interval (0,1)
  • sufficient_decrease: sufficient decrease parameter contained in the interval (0,1)
  • vector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport $\mathcal T_{⋅←⋅}$ to use, see the section on vector transports
  • candidate_point: to store an interim result
  • stop_when_stepsize_less: smallest stepsize when to stop (the last one before is taken)
  • stop_when_stepsize_exceeds: largest stepsize when to stop.
  • stop_increasing_at_step: last step to increase the stepsize (phase 1),
  • stop_decreasing_at_step: last step size to decrease the stepsize (phase 2),

Constructor

NonmonotoneLinesearchStepsize(M::AbstractManifold; kwargs...)

Keyword arguments

  • p=allocate_result(M, rand): to store an interim result
  • initial_stepsize=1.0
  • memory_size=10
  • bb_min_stepsize=1e-3
  • bb_max_stepsize=1e3
  • retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions
  • strategy=direct
  • storage=StoreStateAction(M; store_fields=[:Iterate, :Gradient])
  • stepsize_reduction=0.5
  • sufficient_decrease=1e-4
  • stop_when_stepsize_less=0.0
  • stop_when_stepsize_exceeds=max_stepsize(M, p)
  • stop_increasing_at_step=100
  • stop_decreasing_at_step=1000
  • vector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport $\mathcal T_{⋅←⋅}$ to use, see the section on vector transports
source
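The Barzilai-Borwein candidate step size at the heart of this line search can be sketched in the Euclidean case as follows, with `s_vec` the iterate difference and `y_vec` the gradient difference; the fallback for non-positive curvature is a common safeguard, not necessarily Manopt's exact choice:

```python
def dot(a, b):
    """Euclidean inner product of two vectors given as sequences."""
    return sum(x * y for x, y in zip(a, b))

def bb_stepsize(s_vec, y_vec, bb_min_stepsize=1e-3, bb_max_stepsize=1e3,
                strategy="direct"):
    """Barzilai-Borwein step size candidate, clamped to the given bounds:
    "direct" uses ⟨s,s⟩/⟨s,y⟩ and "indirect" uses ⟨s,y⟩/⟨y,y⟩."""
    sy = dot(s_vec, y_vec)
    if sy <= 0:
        return bb_max_stepsize  # safeguard if curvature is not positive
    if strategy == "direct":
        t = dot(s_vec, s_vec) / sy
    else:
        t = sy / dot(y_vec, y_vec)
    return min(max(t, bb_min_stepsize), bb_max_stepsize)
```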
Manopt.PolyakStepsizeType
PolyakStepsize <: Stepsize

A functor (problem, state, ...) -> s to provide a step size due to Polyak, cf. Section 3.2 of [Ber15].

Fields

  • γ: a function k -> ... representing a sequence.
  • best_cost_value : storing the best cost value

Constructor

PolyakStepsize(;
     γ = i -> 1/i,
     initial_cost_estimate=0.0
)

Construct a stepsize of Polyak type.

See also

Polyak

source
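Written out for the Euclidean subgradient case, the Polyak rule from [Ber15] (Section 3.2) can be sketched as follows; this illustrates the rule rather than Manopt's functor, and takes the current cost, the best cost value seen so far, the squared subgradient norm, and the sequence value $γ_k$ directly:

```python
def polyak_stepsize(cost, best_cost_value, subgrad_norm_sq, gamma_k):
    """Polyak step size: (f(p) - f_best + γ_k) / ‖∂f(p)‖²."""
    return (cost - best_cost_value + gamma_k) / subgrad_norm_sq
```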
Manopt.WolfePowellBinaryLinesearchStepsizeType
WolfePowellBinaryLinesearchStepsize{R} <: Linesearch

Do a backtracking line search to find a step size $α$ that fulfils the Wolfe conditions along a search direction $X$ starting from $p$. See WolfePowellBinaryLinesearch for the math details.

Fields

  • sufficient_decrease::R, sufficient_curvature::R two constants in the line search
  • last_stepsize::R
  • max_stepsize::R
  • retraction_method::AbstractRetractionMethod: a retraction $\operatorname{retr}$ to use, see the section on retractions
  • stop_when_stepsize_less::R: a safeguard to stop when the stepsize gets too small
  • vector_transport_method::AbstractVectorTransportMethod: a vector transport $\mathcal T_{⋅←⋅}$ to use, see the section on vector transports

Keyword arguments

source
Manopt.WolfePowellLinesearchStepsizeType
WolfePowellLinesearchStepsize{R<:Real} <: Linesearch

Do a backtracking line search to find a step size $α$ that fulfils the Wolfe conditions along a search direction $X$ starting from $p$. See WolfePowellLinesearch for the math details.

Fields

  • sufficient_decrease::R, sufficient_curvature::R two constants in the line search
  • candidate_direction::T: a tangent vector at the point $p$ on the manifold $\mathcal M$
  • candidate_point::P: a point on the manifold $\mathcal M$, as temporary storage for candidates
  • candidate_tangent::T: a tangent vector at the point $p$ on the manifold $\mathcal M$
  • last_stepsize::R
  • max_stepsize::R
  • retraction_method::AbstractRetractionMethod: a retraction $\operatorname{retr}$ to use, see the section on retractions
  • stop_when_stepsize_less::R: a safeguard to stop when the stepsize gets too small
  • vector_transport_method::AbstractVectorTransportMethod: a vector transport $\mathcal T_{⋅←⋅}$ to use, see the section on vector transports

Keyword arguments

  • sufficient_decrease=10^(-4)
  • sufficient_curvature=0.999
  • p::P: a point on the manifold $\mathcal M$, as temporary storage for candidates
  • X::T: a tangent vector at the point $p$ on the manifold $\mathcal M$, as type of memory allocated for the candidate direction and tangent
  • max_stepsize=max_stepsize(M, p): largest stepsize allowed here.
  • retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions
  • stop_when_stepsize_less=0.0: smallest stepsize when to stop (the last one before is taken)
  • vector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport $\mathcal T_{⋅←⋅}$ to use, see the section on vector transports
source

Some solvers have a different iterate from the one used for the line search. Then the following state can be used to wrap these locally

Manopt.StepsizeStateType
StepsizeState{P,T} <: AbstractManoptSolverState

A state to store a point and a descent direction used within a linesearch, if these are different from the iterate and search direction of the main solver.

Fields

  • p::P: a point on a manifold
  • X::T: a tangent vector at p.

Constructor

StepsizeState(p,X)
StepsizeState(M::AbstractManifold; p=rand(M), X=zero_vector(M,p))

See also

interior_point_Newton

source

Literature

[Ber15]
D. P. Bertsekas. Convex Optimization Algorithms (Athena Scientific, 2015); p. 576.
[Bou23]
N. Boumal. An Introduction to Optimization on Smooth Manifolds (Cambridge University Press, 2023).
[GS23]
G. N. Grapiglia and G. F. D. Stella (2023).
[Hua14]
W. Huang. Optimization algorithms on Riemannian manifolds with applications. Ph.D. Thesis, Florida State University (2014).
[IP17]
B. Iannazzo and M. Porcelli. The Riemannian Barzilai–Borwein method with nonmonotone line search and the matrix geometric mean computation. IMA Journal of Numerical Analysis 38, 495–517 (2017).
[NW06]
J. Nocedal and S. J. Wright. Numerical Optimization. 2nd Edition (Springer, New York, 2006).
diff --git a/dev/plans/stopping_criteria/index.html b/dev/plans/stopping_criteria/index.html

Stopping Criteria · Manopt.jl

Stopping criteria

Stopping criteria are implemented as a functor and inherit from the base type

Manopt.StoppingCriterionType
StoppingCriterion

An abstract type for the functors representing stopping criteria, so they are callable structures. The naming scheme follows functions, see for example StopAfterIteration.

Every StoppingCriterion has to provide a constructor, and its function has to have the interface (p,o,i), where an AbstractManoptProblem, an AbstractManoptSolverState, and the current number of iterations are the arguments, and which returns a boolean indicating whether to stop or not.

By default each StoppingCriterion should provide a field reason to provide details when a criterion is met (and that is empty otherwise).

source

They can also be grouped, which is summarized in the type of a set of criteria

Manopt.StoppingCriterionSetType
StoppingCriterionGroup <: StoppingCriterion

An abstract type for a stopping criterion that itself consists of a set of stopping criteria. In total it acts as a stopping criterion itself. Examples are StopWhenAny and StopWhenAll, which can be used to combine stopping criteria.

source

A stopping criterion s might have certain internal values/fields it uses to verify against. This is done when calling it as a function s(amp::AbstractManoptProblem, ams::AbstractManoptSolverState), where the AbstractManoptProblem and the AbstractManoptSolverState together represent the current state of the solver. The functor returns false when the stopping criterion is not fulfilled and true otherwise. One field all criteria should have is s.at_iteration, which indicates at which iteration the stopping criterion (last) indicated to stop. 0 refers to an indication before starting the algorithm, while any negative number means the stopping criterion is not (yet) fulfilled. To access a string giving the reason for stopping, see get_reason.
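For illustration, a sketch of how criteria documented below can be combined and later queried, assuming Manopt.jl is loaded:

```julia
using Manopt

# stop as soon as either the gradient norm is small or 200 iterations are reached
sc = StopWhenGradientNormLess(1e-8) | StopAfterIteration(200)

# once a solver has run with `sc`, the reason for stopping can be queried
# via get_reason(sc), which returns a non-empty string after the stop
```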

Generic stopping criteria

The following generic stopping criteria are available. Some require that, for example, the corresponding AbstractManoptSolverState have a field gradient when the criterion should access that.

Further stopping criteria might be available for individual solvers.

Manopt.StopAfterType
StopAfter <: StoppingCriterion

store a threshold concerning the total runtime, after which to stop. It uses time_ns() to measure the time and you provide a Period as a time limit, for example Minute(15).

Fields

  • threshold stores the Period after which to stop
  • start stores the starting time when the algorithm is started, that is a call with i=0.
  • time stores the elapsed time
  • at_iteration indicates at which iteration (including i=0) the stopping criterion was fulfilled and is -1 while it is not fulfilled.

Constructor

StopAfter(t)

initialize the stopping criterion to a Period t to stop after.

source
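As a sketch, a time-based criterion can be constructed from any Period of the Dates standard library:

```julia
using Manopt, Dates

# stop after at most 15 minutes of runtime
sc = StopAfter(Minute(15))
# or, for quick experiments, after 30 seconds
sc_short = StopAfter(Second(30))
```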
Manopt.StopAfterIterationType
StopAfterIteration <: StoppingCriterion

A functor for a stopping criterion to stop after a maximal number of iterations.

Fields

  • max_iterations stores the maximal number of iterations at which to stop
  • at_iteration indicates at which iteration (including i=0) the stopping criterion was fulfilled and is -1 while it is not fulfilled.

Constructor

StopAfterIteration(maxIter)

initialize the functor to indicate to stop after maxIter iterations.

source
Manopt.StopWhenAllType
StopWhenAll <: StoppingCriterionSet

store an array of StoppingCriterion elements and indicates to stop when all of them indicate to stop. The reason is given by the concatenation of all reasons.

Constructor

StopWhenAll(c::NTuple{N,StoppingCriterion} where N)
StopWhenAll(c::StoppingCriterion...)
source
Manopt.StopWhenAnyType
StopWhenAny <: StoppingCriterionSet

store an array of StoppingCriterion elements and indicates to stop when any single one indicates to stop. The reason is given by the concatenation of all reasons (assuming that all non-indicating criteria return "").

Constructor

StopWhenAny(c::NTuple{N,StoppingCriterion} where N)
StopWhenAny(c::StoppingCriterion...)
source
Manopt.StopWhenChangeLessType
StopWhenChangeLess <: StoppingCriterion

stores a threshold when to stop looking at the norm of the change of the optimization variable from within an AbstractManoptSolverState s, that is, by accessing get_iterate(s) and comparing successive iterates. For the storage a StoreStateAction is used.

Fields

  • at_iteration::Int: an integer indicating at which iteration the stopping criterion last indicated to stop, which might also be before the solver started (0). Any negative value indicates that this was not yet the case;
  • last_change::Real: the last change recorded in this stopping criterion
  • inverse_retraction_method::AbstractInverseRetractionMethod: an inverse retraction $\operatorname{retr}^{-1}$ to use, see the section on retractions and their inverses. With this, the distance can be approximated by the inverse retraction and a norm on the tangent space, which can be used if neither the distance nor the logarithmic map are available on M.
  • storage::StoreStateAction: a storage to access the previous iterate
  • threshold: the threshold for the change to check (run under to stop)
  • outer_norm: if M is a manifold with components, this can be used to specify the norm that is used to compute the overall distance based on the element-wise distance. You can deactivate this by setting this value to missing.

Example

On an AbstractPowerManifold like $\mathcal M = \mathcal N^n$ any point $p = (p_1,…,p_n) ∈ \mathcal M$ is a vector of length $n$ of points $p_i ∈ \mathcal N$. Then, denoting the outer_norm by $r$, the distance of two points $p,q ∈ \mathcal M$ is given by

\mathrm{d}(p,q) = \Bigl( \sum_{k=1}^n \mathrm{d}(p_k,q_k)^r \Bigr)^{\frac{1}{r}},

where the sum turns into a maximum for the case $r=∞$. The outer_norm has no effect on manifolds that do not consist of components.

Constructor

StopWhenChangeLess(
     M::AbstractManifold,
     threshold::Float64;
     storage::StoreStateAction=StoreStateAction([:Iterate]),
     inverse_retraction_method::IRT=default_inverse_retraction_method(M),
     outer_norm::Union{Missing,Real}=missing,
)

initialize the stopping criterion to a threshold ε using the StoreStateAction a, which is initialized to just store :Iterate by default. You can also provide an inverse_retraction_method for the distance, or a manifold to use its default inverse retraction.

source
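For illustration, such a criterion might be set up on a concrete manifold as follows (a sketch assuming Manifolds.jl provides the Sphere):

```julia
using Manopt, Manifolds

M = Sphere(2)
# stop once the distance between successive iterates drops below 1e-9
sc = StopWhenChangeLess(M, 1e-9)
```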
Manopt.StopWhenCostLessType
StopWhenCostLess <: StoppingCriterion

store a threshold when to stop looking at the cost function of the optimization problem from within an AbstractManoptProblem, that is get_cost(p, get_iterate(o)).

Constructor

StopWhenCostLess(ε)

initialize the stopping criterion to a threshold ε.

source
Manopt.StopWhenCostNaNType
StopWhenCostNaN <: StoppingCriterion

stop when the cost function of the optimization problem from within an AbstractManoptProblem, that is get_cost(p, get_iterate(o)), becomes NaN.

Constructor

StopWhenCostNaN()

initialize the stopping criterion to NaN.

source
Manopt.StopWhenEntryChangeLessType
StopWhenEntryChangeLess

Evaluate whether a certain field's change is less than a certain threshold.

Fields

  • field: a symbol addressing the corresponding field in a certain subtype of AbstractManoptSolverState to track
  • distance: a function (problem, state, v1, v2) -> R that computes the distance between two possible values of the field
  • storage: a StoreStateAction to store the previous value of the field
  • threshold: the threshold to indicate to stop when the distance is below this value

Internal fields

  • at_iteration: store the iteration at which the stop indication happened

stores a threshold when to stop looking at the norm of the change of the optimization variable from within an AbstractManoptSolverState, that is get_iterate(o). For the storage a StoreStateAction is used.

Constructor

StopWhenEntryChangeLess(
     field::Symbol,
     distance,
     threshold;
     storage::StoreStateAction=StoreStateAction([field]),
)
source
Manopt.StopWhenGradientChangeLessType
StopWhenGradientChangeLess <: StoppingCriterion

A stopping criterion based on the change of the gradient.

Fields

  • at_iteration::Int: an integer indicating at which iteration the stopping criterion last indicated to stop, which might also be before the solver started (0). Any negative value indicates that this was not yet the case;
  • last_change::Real: the last change recorded in this stopping criterion
  • vector_transport_method::AbstractVectorTransportMethodP: a vector transport $\mathcal T_{⋅←⋅}$ to use, see the section on vector transports
  • storage::StoreStateAction: a storage to access the previous iterate
  • threshold: the threshold for the change to check (run under to stop)
  • outer_norm: if M is a manifold with components, this can be used to specify the norm that is used to compute the overall distance based on the element-wise distance. You can deactivate this by setting this value to missing.

Example

On an AbstractPowerManifold like $\mathcal M = \mathcal N^n$ any point $p = (p_1,…,p_n) ∈ \mathcal M$ is a vector of length $n$ of points $p_i ∈ \mathcal N$. Then, denoting the outer_norm by $r$, the norm of the difference of tangent vectors, like the last and current gradient $X,Y$, is given by

\lVert X-Y \rVert_{p} = \Bigl( \sum_{k=1}^n \lVert X_k-Y_k \rVert_{p_k}^r \Bigr)^{\frac{1}{r}},

where the sum turns into a maximum for the case $r=∞$. The outer_norm has no effect on manifolds that do not consist of components.

Constructor

StopWhenGradientChangeLess(
     M::AbstractManifold,
     ε::Float64;
     storage::StoreStateAction=StoreStateAction([:Iterate]),
     vector_transport_method::VTM=default_vector_transport_method(M),
     outer_norm::N=missing
)

Create a stopping criterion with threshold ε for the change of the gradient, that is, this criterion indicates to stop when the norm of the change of get_gradient between iterates is less than ε, where vector_transport_method denotes the vector transport $\mathcal T$ used.

source
Manopt.StopWhenGradientNormLessType
StopWhenGradientNormLess <: StoppingCriterion

A stopping criterion based on the current gradient norm.

Fields

  • norm: a function (M::AbstractManifold, p, X) -> ℝ that computes a norm of the gradient X in the tangent space at p on M. For manifolds with components provide (M::AbstractManifold, p, X, r) -> ℝ.
  • threshold: the threshold to indicate to stop when the distance is below this value
  • outer_norm: if M is a manifold with components, this can be used to specify the norm, that is used to compute the overall distance based on the element-wise distance.

Internal fields

  • last_change store the last change
  • at_iteration store the iteration at which the stop indication happened

Example

On an AbstractPowerManifold like $\mathcal M = \mathcal N^n$ any point $p = (p_1,…,p_n) ∈ \mathcal M$ is a vector of length $n$ of points $p_i ∈ \mathcal N$. Then, denoting the outer_norm by $r$, the norm of a tangent vector like the current gradient $X$ is given by

\lVert X \rVert_{p} = \Bigl( \sum_{k=1}^n \lVert X_k \rVert_{p_k}^r \Bigr)^{\frac{1}{r}},

where the sum turns into a maximum for the case $r=∞$. The outer_norm has no effect on manifolds that do not consist of components.

If you pass in your individual norm, this can be deactivated on such manifolds by passing missing to outer_norm.

Constructor

StopWhenGradientNormLess(ε; norm=ManifoldsBase.norm, outer_norm=missing)

Create a stopping criterion with threshold ε for the gradient, that is, this criterion indicates to stop when get_gradient returns a gradient vector of norm less than ε, where the norm to use can be specified in the norm= keyword.

source
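As a sketch of how such a criterion is typically passed to a solver; the toy cost, its gradient, and the start point here are illustrative assumptions, while gradient_descent and the stopping_criterion keyword are from Manopt.jl:

```julia
using Manopt, Manifolds

M = Sphere(2)
f(M, p) = p[1]^2                                  # a toy cost for illustration
grad_f(M, p) = project(M, p, [2p[1], 0.0, 0.0])   # its Riemannian gradient

# stop when the gradient norm falls below 1e-8, or after 500 iterations
q = gradient_descent(M, f, grad_f, [sqrt(0.5), 0.0, sqrt(0.5)];
    stopping_criterion=StopWhenGradientNormLess(1e-8) | StopAfterIteration(500),
)
```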
Manopt.StopWhenIterateNaNType
StopWhenIterateNaN <: StoppingCriterion

stop when the current iterate of the optimization problem, that is get_iterate(o) from within an AbstractManoptSolverState, contains a NaN.

Constructor

StopWhenIterateNaN()

initialize the stopping criterion to NaN.

source
Manopt.StopWhenSmallerOrEqualType
StopWhenSmallerOrEqual <: StoppingCriterion

A functor for a stopping criterion where the algorithm is stopped when a variable is smaller than or equal to its minimum value.

Fields

  • value stores the variable which has to fall under a threshold for the algorithm to stop
  • minValue stores the threshold where, if the value is smaller or equal to this threshold, the algorithm stops

Constructor

StopWhenSmallerOrEqual(value, minValue)

initialize the functor to indicate to stop after value is smaller than or equal to minValue.

source
Manopt.StopWhenStepsizeLessType
StopWhenStepsizeLess <: StoppingCriterion

stores a threshold when to stop looking at the last step size determined or found during the last iteration from within an AbstractManoptSolverState.

Constructor

StopWhenStepsizeLess(ε)

initialize the stopping criterion to a threshold ε.

source
Manopt.StopWhenSubgradientNormLessType
StopWhenSubgradientNormLess <: StoppingCriterion

A stopping criterion based on the current subgradient norm.

Constructor

StopWhenSubgradientNormLess(ε::Float64)

Create a stopping criterion with threshold ε for the subgradient, that is, this criterion indicates to stop when get_subgradient returns a subgradient vector of norm less than ε.

source

Functions for stopping criteria

There are a few functions to update, combine, and modify stopping criteria, especially to update internal values even for stopping criteria already being used within an AbstractManoptSolverState structure.

Base.:&Method
&(s1,s2)
 s1 & s2

Combine two StoppingCriterion within a StopWhenAll. If either s1 (or s2) is already a StopWhenAll, then s2 (or s1) is appended to the list of StoppingCriterion within s1 (or s2).

Example

a = StopAfterIteration(200) & StopWhenChangeLess(M, 1e-6)
 b = a & StopWhenGradientNormLess(1e-6)

Is the same as

a = StopWhenAll(StopAfterIteration(200), StopWhenChangeLess(M, 1e-6))
b = StopWhenAll(StopAfterIteration(200), StopWhenChangeLess(M, 1e-6), StopWhenGradientNormLess(1e-6))
source
Base.:|Method
|(s1,s2)
 s1 | s2

Combine two StoppingCriterion within a StopWhenAny. If either s1 (or s2) is already a StopWhenAny, then s2 (or s1) is appended to the list of StoppingCriterion within s1 (or s2).

Example

a = StopAfterIteration(200) | StopWhenChangeLess(M, 1e-6)
 b = a | StopWhenGradientNormLess(1e-6)

Is the same as

a = StopWhenAny(StopAfterIteration(200), StopWhenChangeLess(M, 1e-6))
b = StopWhenAny(StopAfterIteration(200), StopWhenChangeLess(M, 1e-6), StopWhenGradientNormLess(1e-6))
source
Manopt.get_active_stopping_criteriaMethod
get_active_stopping_criteria(c)

returns all active stopping criteria, if any, that are within a StoppingCriterion c and indicated a stop, that is, their reason is nonempty. To be precise, for a simple stopping criterion this returns either an empty array, if no stop is indicated, or the stopping criterion as the only element of an array. For a StoppingCriterionSet all internal (even nested) criteria that indicate to stop are returned.

source
Manopt.indicates_convergenceMethod
indicates_convergence(c::StoppingCriterion)

Return whether (true) or not (false) a StoppingCriterion does always mean that, when it indicates to stop, the solver has converged to a minimizer or critical point.

Note that this is independent of the actual state of the stopping criterion, whether some of them indicate to stop, but a purely type-based, static decision.

Examples

With s1=StopAfterIteration(20) and s2=StopWhenGradientNormLess(1e-7) the indicator yields

  • indicates_convergence(s1) is false
  • indicates_convergence(s2) is true
  • indicates_convergence(s1 | s2) is false, since this might also stop after 20 iterations
  • indicates_convergence(s1 & s2) is true, since s2 is fulfilled if this stops.
source
Manopt.set_parameter!Method
set_parameter!(c::StopAfter, :MaxTime, v::Period)

Update the time period after which an algorithm shall stop.

source
Manopt.set_parameter!Method
set_parameter!(c::StopAfterIteration, :MaxIteration, v::Int)

Update the number of iterations after which the algorithm should stop.

source
Manopt.set_parameter!Method
set_parameter!(c::StopWhenChangeLess, :MinIterateChange, v::Int)

Update the minimal change below which an algorithm shall stop.

source
Manopt.set_parameter!Method
set_parameter!(c::StopWhenCostLess, :MinCost, v)

Update the minimal cost below which the algorithm shall stop.

source
Manopt.set_parameter!Method
set_parameter!(c::StopWhenEntryChangeLess, :Threshold, v)

Update the threshold below which the algorithm shall stop.

source
Manopt.set_parameter!Method
set_parameter!(c::StopWhenGradientChangeLess, :MinGradientChange, v)

Update the minimal change below which an algorithm shall stop.

source
Manopt.set_parameter!Method
set_parameter!(c::StopWhenGradientNormLess, :MinGradNorm, v::Float64)

Update the minimal gradient norm below which the algorithm shall stop.

source
Manopt.set_parameter!Method
set_parameter!(c::StopWhenStepsizeLess, :MinStepsize, v)

Update the minimal step size below which the algorithm shall stop.

source
Manopt.set_parameter!Method
set_parameter!(c::StopWhenSubgradientNormLess, :MinSubgradNorm, v::Float64)

Update the minimal subgradient norm below which the algorithm shall stop.

source
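As a sketch, updating a criterion in place with one of the set_parameter! methods above might look like this:

```julia
using Manopt

sc = StopAfterIteration(200)
# tighten the iteration budget without rebuilding the criterion
set_parameter!(sc, :MaxIteration, 100)
```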
+b = StopWhenAny(StopAfterIteration(200), StopWhenChangeLess(M, 1e-6), StopWhenGradientNormLess(1e-6))
source
Manopt.get_active_stopping_criteriaMethod
get_active_stopping_criteria(c)

returns all active stopping criteria, if any, that are within a StoppingCriterion c, and indicated a stop, that is their reason is nonempty. To be precise for a simple stopping criterion, this returns either an empty array if no stop is indicated or the stopping criterion as the only element of an array. For a StoppingCriterionSet all internal (even nested) criteria that indicate to stop are returned.

source
Manopt.indicates_convergenceMethod
indicates_convergence(c::StoppingCriterion)

Return whether (true) or not (false) a StoppingCriterion does always mean that, when it indicates to stop, the solver has converged to a minimizer or critical point.

Note that this is independent of the actual state of the stopping criterion, whether some of them indicate to stop, but a purely type-based, static decision.

Examples

With s1=StopAfterIteration(20) and s2=StopWhenGradientNormLess(1e-7) the indicator yields

  • indicates_convergence(s1) is false
  • indicates_convergence(s2) is true
  • indicates_convergence(s1 | s2) is false, since this might also stop after 20 iterations
  • indicates_convergence(s1 & s2) is true, since s2 is fulfilled if this stops.
source
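The examples above can be sketched in code; the expected values follow directly from the bullet list:

```julia
using Manopt

s1 = StopAfterIteration(20)
s2 = StopWhenGradientNormLess(1e-7)

indicates_convergence(s1)      # false: hitting an iteration cap says nothing about convergence
indicates_convergence(s2)      # true: a small gradient norm indicates a critical point
indicates_convergence(s1 | s2) # false: the combination may stop due to s1 alone
indicates_convergence(s1 & s2) # true: both must hold on stop, in particular s2
```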
Manopt.set_parameter!Method
set_parameter!(c::StopAfter, :MaxTime, v::Period)

Update the time period after which an algorithm shall stop.

source
Manopt.set_parameter!Method
set_parameter!(c::StopAfterIteration, :MaxIteration, v::Int)

Update the number of iterations after which the algorithm should stop.

source
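As a short sketch of the pattern shared by all these methods, one can update a parameter of an existing criterion in place instead of constructing a new one:

```julia
using Manopt

# a criterion that stops after 100 iterations
c = StopAfterIteration(100)

# later, relax the limit to 500 iterations without rebuilding the criterion
set_parameter!(c, :MaxIteration, 500)
```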
Manopt.set_parameter!Method
set_parameter!(c::StopWhenChangeLess, :MinIterateChange, v::Int)

Update the minimal change below which an algorithm shall stop.

source
Manopt.set_parameter!Method
set_parameter!(c::StopWhenCostLess, :MinCost, v)

Update the minimal cost below which the algorithm shall stop.

source
Manopt.set_parameter!Method
set_parameter!(c::StopWhenEntryChangeLess, :Threshold, v)

Update the threshold below which the algorithm shall stop.

source
Manopt.set_parameter!Method
set_parameter!(c::StopWhenGradientChangeLess, :MinGradientChange, v)

Update the minimal change below which an algorithm shall stop.

source
Manopt.set_parameter!Method
set_parameter!(c::StopWhenGradientNormLess, :MinGradNorm, v::Float64)

Update the minimal gradient norm below which the algorithm shall stop.

source
Manopt.set_parameter!Method
set_parameter!(c::StopWhenStepsizeLess, :MinStepsize, v)

Update the minimal step size below which the algorithm shall stop.

source
Manopt.set_parameter!Method
set_parameter!(c::StopWhenSubgradientNormLess, :MinSubgradNorm, v::Float64)

Update the minimal subgradient norm below which the algorithm shall stop.

source
References · Manopt.jl

Literature

This lists all literature mentioned or referenced in the Manopt.jl documentation. Usually a small reference section at the end of each documentation page contains the corresponding references as well.

[ABG06]
P.-A. Absil, C. Baker and K. Gallivan. Trust-Region Methods on Riemannian Manifolds. Foundations of Computational Mathematics 7, 303–330 (2006).
[AMS08]
P.-A. Absil, R. Mahony and R. Sepulchre. Optimization Algorithms on Matrix Manifolds (Princeton University Press, 2008), available online at press.princeton.edu/chapters/absil/.
[AOT22]
S. Adachi, T. Okuno and A. Takeda. Riemannian Levenberg-Marquardt Method with Global and Local Convergence Properties. ArXiv Preprint (2022).
[ABBC20]
N. Agarwal, N. Boumal, B. Bullins and C. Cartis. Adaptive regularization with cubics on manifolds. Mathematical Programming (2020).
[ACOO20]
Y. T. Almeida, J. X. Cruz Neto, P. R. Oliveira and J. C. Oliveira Souza. A modified proximal point method for DC functions on Hadamard manifolds. Computational Optimization and Applications 76, 649–673 (2020).
[Bac14]
M. Bačák. Computing medians and means in Hadamard spaces. SIAM Journal on Optimization 24, 1542–1566 (2014), arXiv:1210.2145.
[Bea72]
E. M. Beale. A derivation of conjugate gradients. In: Numerical methods for nonlinear optimization, edited by F. A. Lootsma (Academic Press, London, 1972); pp. 39–43.
[BFSS23]
R. Bergmann, O. P. Ferreira, E. M. Santos and J. C. Souza. The difference of convex algorithm on Hadamard manifolds, arXiv preprint (2023).
[BG18]
R. Bergmann and P.-Y. Gousenbourger. A variational model for data fitting on manifolds by minimizing the acceleration of a Bézier curve. Frontiers in Applied Mathematics and Statistics 4 (2018), arXiv:1807.10090.
[BH19]
R. Bergmann and R. Herzog. Intrinsic formulation of KKT conditions and constraint qualifications on smooth manifolds. SIAM Journal on Optimization 29, 2423–2444 (2019), arXiv:1804.06214.
[BHJ24]
R. Bergmann, R. Herzog and H. Jasa. The Riemannian Convex Bundle Method, preprint (2024), arXiv:2402.13670.
[BHS+21]
R. Bergmann, R. Herzog, M. Silva Louzeiro, D. Tenbrinck and J. Vidal-Núñez. Fenchel duality theory and a primal-dual algorithm on Riemannian manifolds. Foundations of Computational Mathematics 21, 1465–1504 (2021), arXiv:1908.02022.
[BPS16]
R. Bergmann, J. Persch and G. Steidl. A parallel Douglas Rachford algorithm for minimizing ROF-like functionals on images with values in symmetric Hadamard manifolds. SIAM Journal on Imaging Sciences 9, 901–937 (2016), arXiv:1512.02814.
[Ber15]
D. P. Bertsekas. Convex Optimization Algorithms (Athena Scientific, 2015); p. 576.
[BIA10]
P. B. Borckmans, M. Ishteva and P.-A. Absil. A Modified Particle Swarm Optimization Algorithm for the Best Low Multilinear Rank Approximation of Higher-Order Tensors. In: 7th International Conference on Swarm Intelligence (Springer Berlin Heidelberg, 2010); pp. 13–23.
[Bou23]
[Car92]
M. P. do Carmo. Riemannian Geometry. Mathematics: Theory & Applications (Birkhäuser Boston, Inc., Boston, MA, 1992); p. xiv+300.
[CP11]
A. Chambolle and T. Pock. A first-order primal-dual algorithm for convex problems with applications to imaging. Journal of Mathematical Imaging and Vision 40, 120–145 (2011).
[CFFS10]
[CGT00]
A. R. Conn, N. I. Gould and P. L. Toint. Trust Region Methods (Society for Industrial and Applied Mathematics, 2000).
[DY99]
Y. H. Dai and Y. Yuan. A Nonlinear Conjugate Gradient Method with a Strong Global Convergence Property. SIAM Journal on Optimization 10, 177–182 (1999).
[DL21]
W. Diepeveen and J. Lellmann. An Inexact Semismooth Newton Method on Riemannian Manifolds with Application to Duality-Based Total Variation Denoising. SIAM Journal on Imaging Sciences 14, 1565–1600 (2021), arXiv:2102.10309.
[ETTZ96]
A. S. El-Bakry, R. A. Tapia, T. Tsuchiya and Y. Zhang. On the formulation and theory of the Newton interior-point method for nonlinear programming. Journal of Optimization Theory and Applications 89, 507–541 (1996).
[FO98]
O. Ferreira and P. R. Oliveira. Subgradient algorithm on Riemannian manifolds. Journal of Optimization Theory and Applications 97, 93–104 (1998).
[FO02]
O. Ferreira and P. R. Oliveira. Proximal point algorithm on Riemannian manifolds. Optimization. A Journal of Mathematical Programming and Operations Research 51, 257–270 (2002).
[Fle13]
P. T. Fletcher. Geodesic regression and the theory of least squares on Riemannian manifolds. International Journal of Computer Vision 105, 171–185 (2013).
[Fle87]
R. Fletcher. Practical Methods of Optimization. 2 Edition, A Wiley-Interscience Publication (John Wiley & Sons Ltd., 1987).
[FR64]
R. Fletcher and C. M. Reeves. Function minimization by conjugate gradients. The Computer Journal 7, 149–154 (1964).
[GS23]
[HZ06]
W. W. Hager and H. Zhang. A survey of nonlinear conjugate gradient methods. Pacific Journal of Optimization 2, 35–58 (2006).
[HZ05]
W. W. Hager and H. Zhang. A New Conjugate Gradient Method with Guaranteed Descent and an Efficient Line Search. SIAM Journal on Optimization 16, 170–192 (2005).
[Han23]
N. Hansen. The CMA Evolution Strategy: A Tutorial. ArXiv Preprint (2023).
[HS52]
M. Hestenes and E. Stiefel. Methods of conjugate gradients for solving linear systems. Journal of Research of the National Bureau of Standards 49, 409 (1952).
[HNP23]
N. Hoseini Monjezi, S. Nobakhtian and M. R. Pouryayevali. A proximal bundle algorithm for nonsmooth optimization on Riemannian manifolds. IMA Journal of Numerical Analysis 43, 293–325 (2023).
[Hua14]
W. Huang. Optimization algorithms on Riemannian manifolds with applications. Ph.D. Thesis, Florida State University (2014).
[HAG18]
W. Huang, P.-A. Absil and K. A. Gallivan. A Riemannian BFGS method without differentiated retraction for nonconvex optimization problems. SIAM Journal on Optimization 28, 470–495 (2018).
[HGA15]
W. Huang, K. A. Gallivan and P.-A. Absil. A Broyden class of quasi-Newton methods for Riemannian optimization. SIAM Journal on Optimization 25, 1660–1685 (2015).
[IP17]
B. Iannazzo and M. Porcelli. The Riemannian Barzilai–Borwein method with nonmonotone line search and the matrix geometric mean computation. IMA Journal of Numerical Analysis 38, 495–517 (2017).
[Kar77]
H. Karcher. Riemannian center of mass and mollifier smoothing. Communications on Pure and Applied Mathematics 30, 509–541 (1977).
[LY24]
Z. Lai and A. Yoshise. Riemannian Interior Point Methods for Constrained Optimization on Manifolds. Journal of Optimization Theory and Applications 201, 433–469 (2024), arXiv:2203.09762.
[LB19]
C. Liu and N. Boumal. Simple algorithms for optimization on Riemannian manifolds with constraints. Applied Mathematics & Optimization (2019), arXiv:1901.10000.
[LS91]
Y. Liu and C. Storey. Efficient generalized conjugate gradient algorithms, part 1: Theory. Journal of Optimization Theory and Applications 69, 129–137 (1991).
[Ngu23]
D. Nguyen. Operator-Valued Formulas for Riemannian Gradient and Hessian and Families of Tractable Metrics in Riemannian Optimization. Journal of Optimization Theory and Applications 198, 135–164 (2023), arXiv:2009.10159.
[NW06]
J. Nocedal and S. J. Wright. Numerical Optimization. 2 Edition (Springer, New York, 2006).
[Pee93]
R. Peeters. On a Riemannian version of the Levenberg-Marquardt algorithm. Serie Research Memoranda 0011 (VU University Amsterdam, Faculty of Economics, Business Administration and Econometrics, 1993).
[PR69]
E. Polak and G. Ribière. Note sur la convergence de méthodes de directions conjuguées. Revue française d’informatique et de recherche opérationnelle 3, 35–43 (1969).
[Pow77]
M. J. Powell. Restart procedures for the conjugate gradient method. Mathematical Programming 12, 241–254 (1977).
[SO15]
J. C. Souza and P. R. Oliveira. A proximal point algorithm for DC functions on Hadamard manifolds. Journal of Global Optimization 63, 797–810 (2015).
[WS22]
M. Weber and S. Sra. Riemannian Optimization via Frank-Wolfe Methods. Mathematical Programming 199, 525–556 (2022).
[ZS18]
H. Zhang and S. Sra. Towards Riemannian accelerated gradient methods, arXiv Preprint, 1806.02812 (2018).
diff --git a/dev/search_index.js b/dev/search_index.js index 2b20907359..9d9b46ee70 100644 --- a/dev/search_index.js +++ b/dev/search_index.js @@ -1,3 +1,3 @@ var documenterSearchIndex = {"docs": -[{"location":"notation/#Notation","page":"Notation","title":"Notation","text":"","category":"section"},{"location":"notation/","page":"Notation","title":"Notation","text":"In this package,the notation introduced in Manifolds.jl Notation is used with the following additional parts.","category":"page"},{"location":"notation/","page":"Notation","title":"Notation","text":"Symbol Description Also used Comment\noperatornameargmin argument of a function f where a local or global minimum is attained \nk the current iterate ì the goal is to unify this to k\n The Levi-Cevita connection ","category":"page"},{"location":"tutorials/AutomaticDifferentiation/#Using-Automatic-Differentiation-in-Manopt.jl","page":"Use automatic differentiation","title":"Using Automatic Differentiation in Manopt.jl","text":"","category":"section"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"Since Manifolds.jl 0.7, the support of automatic differentiation support has been extended.","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"This tutorial explains how to use Euclidean tools to derive a gradient for a real-valued function f mathcal M ℝ. Two methods are considered: an intrinsic variant and a variant employing the embedding. 
These gradients can then be used within any gradient based optimization algorithm in Manopt.jl.","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"While by default FiniteDifferences.jlare used, one can also use FiniteDiff.jl, ForwardDiff.jl, ReverseDiff.jl, or Zygote.jl.","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"This tutorial looks at a few possibilities to approximate or derive the gradient of a function fmathcal M ℝ on a Riemannian manifold, without computing it yourself. There are mainly two different philosophies:","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"Working intrinsically, that is staying on the manifold and in the tangent spaces, considering to approximate the gradient by forward differences.\nWorking in an embedding where all tools from functions on Euclidean spaces can be used, like finite differences or automatic differentiation, and then compute the corresponding Riemannian gradient from there.","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"First, load all necessary packages","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"using Manopt, Manifolds, Random, LinearAlgebra\nusing FiniteDifferences, ManifoldDiff\nRandom.seed!(42);","category":"page"},{"location":"tutorials/AutomaticDifferentiation/#1.-(Intrinsic)-forward-differences","page":"Use automatic differentiation","title":"1. 
(Intrinsic) forward differences","text":"","category":"section"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"A first idea is to generalize (multivariate) finite differences to Riemannian manifolds. Let X_1ldotsX_d T_pmathcal M denote an orthonormal basis of the tangent space T_pmathcal M at the point pmathcal M on the Riemannian manifold.","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"The notion of a directional derivative is generalized to a “direction” YT_pmathcal M. Let c -εε, ε0, be a curve with c(0) = p, dot c(0) = Y, for example c(t)= exp_p(tY). This yields","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":" Df(p)Y = left fracddt right_t=0 f(c(t)) = lim_t 0 frac1t(f(exp_p(tY))-f(p))","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"The differential Df(p)X is approximated by a finite difference scheme for an h0 as","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"DF(p)Y G_h(Y) = frac1h(f(exp_p(hY))-f(p))","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"Furthermore the gradient operatornamegradf is the Riesz representer of the differential:","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":" Df(p)Y = g_p(operatornamegradf(p) Y)qquad text for all Y T_pmathcal 
M","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"and since it is a tangent vector, we can write it in terms of a basis as","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":" operatornamegradf(p) = sum_i=1^d g_p(operatornamegradf(p)X_i)X_i\n = sum_i=1^d Df(p)X_iX_i","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"and perform the approximation from before to obtain","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":" operatornamegradf(p) sum_i=1^d G_h(X_i)X_i","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"for some suitable step size h. This comes at the cost of d+1 function evaluations and d exponential maps.","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"This is the first variant we can use. An advantage is that it is intrinsic in the sense that it does not require any embedding of the manifold.","category":"page"},{"location":"tutorials/AutomaticDifferentiation/#An-example:-the-Rayleigh-quotient","page":"Use automatic differentiation","title":"An example: the Rayleigh quotient","text":"","category":"section"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"The Rayleigh quotient is concerned with finding eigenvalues (and eigenvectors) of a symmetric matrix A ℝ^(n+1)(n+1). 
The optimization problem reads","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"F ℝ^n+1 ℝquad F(mathbf x) = fracmathbf x^mathrmTAmathbf xmathbf x^mathrmTmathbf x","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"Minimizing this function yields the smallest eigenvalue lambda_1 as a value and the corresponding minimizer mathbf x^* is a corresponding eigenvector.","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"Since the length of an eigenvector is irrelevant, there is an ambiguity in the cost function. It can be better phrased on the sphere $ 𝕊^n$ of unit vectors in ℝ^n+1,","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"operatorname*argmin_p 𝕊^n f(p) = operatorname*argmin_ p 𝕊^n p^mathrmTAp","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"We can compute the Riemannian gradient exactly as","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"operatornamegrad f(p) = 2(Ap - pp^mathrmTAp)","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"so we can compare it to the approximation by finite differences.","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"n = 200\nA = randn(n + 1, n + 1)\nA = Symmetric(A)\nM = Sphere(n);\n\nf1(p) = p' 
* A'p\ngradf1(p) = 2 * (A * p - p * p' * A * p)","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"gradf1 (generic function with 1 method)","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"Manifolds provides a finite difference scheme in tangent spaces, that you can introduce to use an existing framework (if the wrapper is implemented) form Euclidean space. Here we use FiniteDiff.jl.","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"r_backend = ManifoldDiff.TangentDiffBackend(\n ManifoldDiff.FiniteDifferencesBackend()\n)\ngradf1_FD(p) = ManifoldDiff.gradient(M, f1, p, r_backend)\n\np = zeros(n + 1)\np[1] = 1.0\nX1 = gradf1(p)\nX2 = gradf1_FD(p)\nnorm(M, p, X1 - X2)","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"1.018153081967174e-12","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"We obtain quite a good approximation of the gradient.","category":"page"},{"location":"tutorials/AutomaticDifferentiation/#EmbeddedGradient","page":"Use automatic differentiation","title":"2. 
Conversion of a Euclidean Gradient in the Embedding to a Riemannian Gradient of a (not Necessarily Isometrically) Embedded Manifold","text":"","category":"section"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"Let tilde f ℝ^m ℝ be a function on the embedding of an n-dimensional manifold mathcal M subset ℝ^mand let f mathcal M ℝ denote the restriction of tilde f to the manifold mathcal M.","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"Since we can use the pushforward of the embedding to also embed the tangent space T_pmathcal M, pmathcal M, we can similarly obtain the differential Df(p) T_pmathcal M ℝ by restricting the differential Dtilde f(p) to the tangent space.","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"If both T_pmathcal M and T_pℝ^m have the same inner product, or in other words the manifold is isometrically embedded in ℝ^m (like for example the sphere mathbb S^nsubsetℝ^m+1), then this restriction of the differential directly translates to a projection of the gradient","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"operatornamegradf(p) = operatornameProj_T_pmathcal M(operatornamegrad tilde f(p))","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"More generally take a change of the metric into account as","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"langle operatornameProj_T_pmathcal M(operatornamegrad tilde f(p)) X rangle\n= 
Df(p)X = g_p(operatornamegradf(p) X)","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"or in words: we have to change the Riesz representer of the (restricted/projected) differential of f (tilde f) to the one with respect to the Riemannian metric. This is done using change_representer.","category":"page"},{"location":"tutorials/AutomaticDifferentiation/#A-continued-example","page":"Use automatic differentiation","title":"A continued example","text":"","category":"section"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"We continue with the Rayleigh Quotient from before, now just starting with the definition of the Euclidean case in the embedding, the function F.","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"F(x) = x' * A * x / (x' * x);","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"The cost function is the same by restriction","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"f2(M, p) = F(p);","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"The gradient is now computed combining our gradient scheme with FiniteDifferences.","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"function grad_f2_AD(M, p)\n return Manifolds.gradient(\n M, F, p, Manifolds.RiemannianProjectionBackend(ManifoldDiff.FiniteDifferencesBackend())\n )\nend\nX3 = 
grad_f2_AD(M, p)\nnorm(M, p, X1 - X3)","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"1.742525831800539e-12","category":"page"},{"location":"tutorials/AutomaticDifferentiation/#An-example-for-a-non-isometrically-embedded-manifold","page":"Use automatic differentiation","title":"An example for a non-isometrically embedded manifold","text":"","category":"section"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"We consider an example on the manifold mathcal P(3) of symmetric positive definite matrices.","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"The following function computes (half) the distance squared (with respect to the linear affine metric) on the manifold mathcal P(3) to the identity matrix I_3. Denoting by I_3 the unit matrix, we consider the function","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":" G(q)\n = frac12d^2_mathcal P(3)(qI_3)\n = frac12lVert operatornameLog(q) rVert_F^2","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"where operatornameLog denotes the matrix logarithm and lVert cdot rVert_F is the Frobenius norm. 
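Since q is symmetric positive definite, the matrix logarithm acts on its eigenvalues, so this identity can be checked directly against the matrix logarithm from the LinearAlgebra standard library; the test matrix below is an illustrative assumption:

```julia
using LinearAlgebra

# G via the eigenvalues, as in the tutorial, versus ½‖Log(q)‖_F² via the
# matrix logarithm: for SPD q both equal half the sum of squared log-eigenvalues.
G(q) = sum(log.(eigvals(Symmetric(q))) .^ 2) / 2

q = [2.0 0.5 0.0; 0.5 1.5 0.2; 0.0 0.2 1.0]   # an SPD test matrix (assumption)
G_direct = norm(log(Symmetric(q)))^2 / 2       # ½ ‖Log(q)‖_F², Frobenius norm
isapprox(G(q), G_direct; atol=1e-12)
```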
This can be computed for symmetric positive definite matrices by summing the squares of the logarithms of the eigenvalues of q and dividing by two:","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"G(q) = sum(log.(eigvals(Symmetric(q))) .^ 2) / 2","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"G (generic function with 1 method)","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"We can also interpret this as a function on the space of matrices and apply the Euclidean finite differences machinery; in this way we can easily derive the Euclidean gradient. But when computing the Riemannian gradient, we have to change the representer (see again change_representer) after projecting onto the tangent space T_pmathcal P(n) at p.","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"Let’s first define a point and the manifold N=mathcal P(3).","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"rotM(α) = [1.0 0.0 0.0; 0.0 cos(α) sin(α); 0.0 -sin(α) cos(α)]\nq = rotM(π / 6) * [1.0 0.0 0.0; 0.0 2.0 0.0; 0.0 0.0 3.0] * transpose(rotM(π / 6))\nN = SymmetricPositiveDefinite(3)\nis_point(N, q)","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"true","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"We could first just compute the gradient using 
FiniteDifferences.jl, but this yields the Euclidean gradient:","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"FiniteDifferences.grad(central_fdm(5, 1), G, q)","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"([3.240417492806275e-14 -2.3531899864903462e-14 0.0; 0.0 0.3514812167654708 0.017000516835452926; 0.0 0.0 0.36129646973723023],)","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"Instead, we use the RiemannianProjectionBackend of Manifolds.jl, which in this case internally uses FiniteDifferences.jl to compute a Euclidean gradient but then uses the conversion explained before to derive the Riemannian gradient.","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"We define this here again as a function grad_G_FD that could be used in the Manopt.jl framework within a gradient based optimization.","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"function grad_G_FD(N, q)\n return Manifolds.gradient(\n N, G, q, ManifoldDiff.RiemannianProjectionBackend(ManifoldDiff.FiniteDifferencesBackend())\n )\nend\nG1 = grad_G_FD(N, q)","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"3×3 Matrix{Float64}:\n 3.24042e-14 -2.64734e-14 -5.09481e-15\n -2.64734e-14 1.86368 0.826856\n -5.09481e-15 0.826856 2.81845","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic 
differentiation","text":"Now, we can again compare this to the (known) solution of the gradient: the gradient of (half of) the distance squared G(q) = frac12d^2_mathcal P(3)(qI_3) is given by operatornamegrad G(q) = -operatornamelog_q I_3, where operatornamelog is the logarithmic map on the manifold.","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"G2 = -log(N, q, Matrix{Float64}(I, 3, 3))","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"3×3 Matrix{Float64}:\n -0.0 -0.0 -0.0\n -0.0 1.86368 0.826856\n -0.0 0.826856 2.81845","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"Both terms agree up to 1.8⋅10^-12:","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"norm(G1 - G2)\nisapprox(N, q, G1, G2; atol=2 * 1e-12)","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"true","category":"page"},{"location":"tutorials/AutomaticDifferentiation/#Summary","page":"Use automatic differentiation","title":"Summary","text":"","category":"section"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"This tutorial illustrates how to use tools from Euclidean spaces, finite differences or automatic differentiation, to compute gradients on Riemannian manifolds. 
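This comparison can also be reproduced with the LinearAlgebra standard library alone, assuming the closed form of the affine-invariant logarithmic map: -log_q(I_3) = q^{1/2} Log(q) q^{1/2}, which acts on the eigenvalues of q as λ ↦ λ log λ. This closed form is an assumption stated here for illustration, not taken from the tutorial:

```julia
using LinearAlgebra

# Sketch: grad G(q) = -log_q(I₃) under the (assumed) closed form
# q^{1/2} Log(q) q^{1/2}, computed via an eigendecomposition of q.
rotM(α) = [1.0 0.0 0.0; 0.0 cos(α) sin(α); 0.0 -sin(α) cos(α)]
q = rotM(π / 6) * [1.0 0.0 0.0; 0.0 2.0 0.0; 0.0 0.0 3.0] * transpose(rotM(π / 6))

λ, V = eigen(Symmetric(q))
grad_G = V * Diagonal(λ .* log.(λ)) * V'   # eigenvalues are mapped to λ log λ

grad_G[2, 2]   # matches the (2,2) entry of G1 and G2 in the tutorial
```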
The scheme allows using any differentiation framework within the embedding to derive a Riemannian gradient.","category":"page"},{"location":"tutorials/AutomaticDifferentiation/#Technical-details","page":"Use automatic differentiation","title":"Technical details","text":"","category":"section"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"This tutorial is cached. It was last run on the following package versions.","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"using Pkg\nPkg.status()","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"Status `~/work/Manopt.jl/Manopt.jl/tutorials/Project.toml`\n [6e4b80f9] BenchmarkTools v1.5.0\n⌅ [5ae59095] Colors v0.12.11\n [31c24e10] Distributions v0.25.113\n [26cc04aa] FiniteDifferences v0.12.32\n [7073ff75] IJulia v1.26.0\n [8ac3fa9e] LRUCache v1.6.1\n [af67fdf4] ManifoldDiff v0.3.13\n [1cead3c2] Manifolds v0.10.7\n [3362f125] ManifoldsBase v0.15.22\n [0fc0a36d] Manopt v0.5.3 `~/work/Manopt.jl/Manopt.jl`\n [91a5bcdd] Plots v1.40.9\n [731186ca] RecursiveArrayTools v3.27.4\nInfo Packages marked with ⌅ have new versions available but compatibility constraints restrict them from upgrading. 
To see why use `status --outdated`","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"using Dates\nnow()","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"2024-11-21T20:35:25.554","category":"page"},{"location":"solvers/proximal_point/#Proximal-point-method","page":"Proximal point method","title":"Proximal point method","text":"","category":"section"},{"location":"solvers/proximal_point/","page":"Proximal point method","title":"Proximal point method","text":"CurrentModule = Manopt","category":"page"},{"location":"solvers/proximal_point/","page":"Proximal point method","title":"Proximal point method","text":"proximal_point\nproximal_point!","category":"page"},{"location":"solvers/proximal_point/#Manopt.proximal_point","page":"Proximal point method","title":"Manopt.proximal_point","text":"proximal_point(M, prox_f, p=rand(M); kwargs...)\nproximal_point(M, mpmo, p=rand(M); kwargs...)\nproximal_point!(M, prox_f, p; kwargs...)\nproximal_point!(M, mpmo, p; kwargs...)\n\nPerform the proximal point algorithm from [FO02] which reads\n\np^(k+1) = operatornameprox_λ_kf(p^(k))\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nprox_f: a proximal map (M,λ,p) -> q or (M, q, λ, p) -> q for the summands of f (see evaluation)\n\nKeyword arguments\n\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\nf=nothing: a cost function f mathcal Mℝ to minimize. 
f is not required to run the algorithm, but it is used, for example, when recording the cost or when a stopping criterion requires a cost function.\nλ= k -> 1.0: a function returning the (square summable but not summable) sequence of λ_k\nstopping_criterion=StopAfterIteration(200)|StopWhenChangeLess(1e-12): a functor indicating that the stopping criterion is fulfilled\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/proximal_point/#Manopt.proximal_point!","page":"Proximal point method","title":"Manopt.proximal_point!","text":"proximal_point(M, prox_f, p=rand(M); kwargs...)\nproximal_point(M, mpmo, p=rand(M); kwargs...)\nproximal_point!(M, prox_f, p; kwargs...)\nproximal_point!(M, mpmo, p; kwargs...)\n\nPerform the proximal point algorithm from [FO02] which reads\n\np^(k+1) = operatornameprox_λ_kf(p^(k))\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nprox_f: a proximal map (M,λ,p) -> q or (M, q, λ, p) -> q for the summands of f (see evaluation)\n\nKeyword arguments\n\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\nf=nothing: a cost function f mathcal Mℝ to minimize. 
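For intuition, the iteration p^(k+1) = prox_{λ_k f}(p^(k)) can be sketched in the Euclidean special case, where for f(p) = ½‖p − a‖² the proximal map has the closed form prox_{λf}(p) = (p + λa)/(1 + λ). Everything below (the cost, the data a, the helper name) is an illustrative assumption and not Manopt.jl API:

```julia
# Euclidean proximal point sketch: iterate p ↦ prox_{λ_k f}(p) toward the
# minimizer a of f(p) = ½‖p − a‖²; with λ_k ≡ 1 each step halves the
# distance to a, so the iteration converges linearly.
function proximal_point_sketch(prox, p0, λ; iterations=60)
    p = copy(p0)
    for k in 1:iterations
        p = prox(λ(k), p)         # p^(k+1) = prox_{λ_k f}(p^(k))
    end
    return p
end

a = [1.0, -2.0]                               # minimizer of f (assumption)
prox_f(λk, p) = (p + λk * a) / (1 + λk)       # closed-form proximal map of f
p_star = proximal_point_sketch(prox_f, [5.0, 5.0], k -> 1.0)
maximum(abs.(p_star - a)) < 1e-12
```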
f is not required to run the algorithm, but it is used, for example, when recording the cost or when a stopping criterion requires a cost function.\nλ= k -> 1.0: a function returning the (square summable but not summable) sequence of λ_k\nstopping_criterion=StopAfterIteration(200)|StopWhenChangeLess(1e-12): a functor indicating that the stopping criterion is fulfilled\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/proximal_point/#State","page":"Proximal point method","title":"State","text":"","category":"section"},{"location":"solvers/proximal_point/","page":"Proximal point method","title":"Proximal point method","text":"ProximalPointState","category":"page"},{"location":"solvers/proximal_point/#Manopt.ProximalPointState","page":"Proximal point method","title":"Manopt.ProximalPointState","text":"ProximalPointState{P} <: AbstractGradientSolverState\n\nFields\n\np::P: a point on the manifold mathcal M storing the current iterate\nstop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled\nλ: a function for the values of λ_k per iteration (cycle) k\n\nConstructor\n\nProximalPointState(M::AbstractManifold; kwargs...)\n\nInitialize the proximal point method solver state, where\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\n\nKeyword arguments\n\nλ=k -> 1.0: a function to compute the λ_k, k mathcal N,\np=rand(M): a point on the manifold mathcal M to specify the initial value\nstopping_criterion=StopAfterIteration(100): a functor indicating that the stopping criterion is fulfilled\n\nSee also\n\nproximal_point\n\n\n\n\n\n","category":"type"},{"location":"solvers/proximal_point/","page":"Proximal point 
method","title":"Proximal point method","text":"O. Ferreira and P. R. Oliveira. Proximal point algorithm on Riemannian manifolds. Optimization. A Journal of Mathematical Programming and Operations Research 51, 257–270 (2002).\n\n\n\n","category":"page"},{"location":"solvers/conjugate_gradient_descent/#Conjugate-gradient-descent","page":"Conjugate gradient descent","title":"Conjugate gradient descent","text":"","category":"section"},{"location":"solvers/conjugate_gradient_descent/","page":"Conjugate gradient descent","title":"Conjugate gradient descent","text":"CurrentModule = Manopt","category":"page"},{"location":"solvers/conjugate_gradient_descent/","page":"Conjugate gradient descent","title":"Conjugate gradient descent","text":"conjugate_gradient_descent\nconjugate_gradient_descent!","category":"page"},{"location":"solvers/conjugate_gradient_descent/#Manopt.conjugate_gradient_descent","page":"Conjugate gradient descent","title":"Manopt.conjugate_gradient_descent","text":"conjugate_gradient_descent(M, f, grad_f, p=rand(M))\nconjugate_gradient_descent!(M, f, grad_f, p)\nconjugate_gradient_descent(M, gradient_objective, p)\nconjugate_gradient_descent!(M, gradient_objective, p; kwargs...)\n\nperform a conjugate gradient based descent\n\np_k+1 = operatornameretr_p_k bigl( s_kδ_k bigr)\n\nwhere operatornameretr denotes a retraction on the manifold M and one can employ different rules to update the descent direction δ_k based on the last direction δ_k-1 and both gradients operatornamegradf(x_k),operatornamegrad f(x_k-1). 
The Stepsize s_k may be determined by a Linesearch.\n\nAlternatively to f and grad_f you can provide the AbstractManifoldGradientObjective gradient_objective directly.\n\nAvailable update rules are SteepestDescentCoefficientRule, which yields a gradient_descent, ConjugateDescentCoefficient (the default), DaiYuanCoefficientRule, FletcherReevesCoefficient, HagerZhangCoefficient, HestenesStiefelCoefficient, LiuStoreyCoefficient, and PolakRibiereCoefficient. These can all be combined with a ConjugateGradientBealeRestartRule rule.\n\nThey all compute β_k such that this algorithm updates the search direction as\n\nδ_k=operatornamegradf(p_k) + β_k delta_k-1\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\ngrad_f: the (Riemannian) gradient operatornamegradf: \\mathcal M → T_{p}\\mathcal M of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place\np: a point on the manifold mathcal M\n\nKeyword arguments\n\ncoefficient::DirectionUpdateRule=ConjugateDescentCoefficient(): rule to compute the descent direction update coefficient β_k, as a functor, where the resulting function maps are (amp, cgs, k) -> β with amp an AbstractManoptProblem, cgs is the ConjugateGradientDescentState, and k is the current iterate.\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). 
Since usually the first argument is the manifold, the modified argument is the second.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstepsize=ArmijoLinesearch(): a functor inheriting from Stepsize to determine a step size\nstopping_criterion=StopAfterIteration(500)|StopWhenGradientNormLess(1e-8): a functor indicating that the stopping criterion is fulfilled\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\nIf you provide the ManifoldGradientObjective directly, the evaluation= keyword is ignored. The decorations are still applied to the objective.\n\nOutput\n\nThe obtained approximate minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/conjugate_gradient_descent/#Manopt.conjugate_gradient_descent!","page":"Conjugate gradient descent","title":"Manopt.conjugate_gradient_descent!","text":"conjugate_gradient_descent(M, f, grad_f, p=rand(M))\nconjugate_gradient_descent!(M, f, grad_f, p)\nconjugate_gradient_descent(M, gradient_objective, p)\nconjugate_gradient_descent!(M, gradient_objective, p; kwargs...)\n\nperform a conjugate gradient based descent\n\np_k+1 = operatornameretr_p_k bigl( s_kδ_k bigr)\n\nwhere operatornameretr denotes a retraction on the manifold M and one can employ different rules to update the descent direction δ_k based on the last direction δ_k-1 and both gradients operatornamegradf(x_k),operatornamegrad f(x_k-1). 
The Stepsize s_k may be determined by a Linesearch.\n\nAlternatively to f and grad_f you can provide the AbstractManifoldGradientObjective gradient_objective directly.\n\nAvailable update rules are SteepestDescentCoefficientRule, which yields a gradient_descent, ConjugateDescentCoefficient (the default), DaiYuanCoefficientRule, FletcherReevesCoefficient, HagerZhangCoefficient, HestenesStiefelCoefficient, LiuStoreyCoefficient, and PolakRibiereCoefficient. These can all be combined with a ConjugateGradientBealeRestartRule rule.\n\nThey all compute β_k such that this algorithm updates the search direction as\n\nδ_k=operatornamegradf(p_k) + β_k delta_k-1\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\ngrad_f: the (Riemannian) gradient operatornamegradf: \\mathcal M → T_{p}\\mathcal M of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place\np: a point on the manifold mathcal M\n\nKeyword arguments\n\ncoefficient::DirectionUpdateRule=ConjugateDescentCoefficient(): rule to compute the descent direction update coefficient β_k, as a functor, where the resulting function maps are (amp, cgs, k) -> β with amp an AbstractManoptProblem, cgs is the ConjugateGradientDescentState, and k is the current iterate.\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). 
Since usually the first argument is the manifold, the modified argument is the second.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstepsize=ArmijoLinesearch(): a functor inheriting from Stepsize to determine a step size\nstopping_criterion=StopAfterIteration(500)|StopWhenGradientNormLess(1e-8): a functor indicating that the stopping criterion is fulfilled\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\nIf you provide the ManifoldGradientObjective directly, the evaluation= keyword is ignored. The decorations are still applied to the objective.\n\nOutput\n\nThe obtained approximate minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/conjugate_gradient_descent/#State","page":"Conjugate gradient descent","title":"State","text":"","category":"section"},{"location":"solvers/conjugate_gradient_descent/","page":"Conjugate gradient descent","title":"Conjugate gradient descent","text":"ConjugateGradientDescentState","category":"page"},{"location":"solvers/conjugate_gradient_descent/#Manopt.ConjugateGradientDescentState","page":"Conjugate gradient descent","title":"Manopt.ConjugateGradientDescentState","text":"ConjugateGradientDescentState <: AbstractGradientSolverState\n\nSpecify options for a conjugate gradient descent algorithm that solves a DefaultManoptProblem.\n\nFields\n\np::P: a point on the manifold mathcal M storing the current iterate\nX::T: a tangent vector at the point p on the manifold mathcal M\nδ: the current descent direction, also a tangent vector\nβ: the current update coefficient, computed by coefficient\ncoefficient: function to determine the new β\nstepsize::Stepsize: a functor inheriting from Stepsize to determine a step 
size\nstop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled\nretraction_method::AbstractRetractionMethod: a retraction operatornameretr to use, see the section on retractions\nvector_transport_method::AbstractVectorTransportMethodP: a vector transport mathcal T_ to use, see the section on vector transports\n\nConstructor\n\nConjugateGradientDescentState(M::AbstractManifold; kwargs...)\n\nwhere the last five fields can be set by their names as keyword and the X can be set to a tangent vector type using the keyword initial_gradient which defaults to zero_vector(M,p), and δ is initialized to a copy of this vector.\n\nKeyword arguments\n\nThe fields above can also be set by keyword.\n\nConjugateGradientBealeRestartRule <: DirectionUpdateRule\n\nA functor (problem, state, k) -> β_k to compute the conjugate gradient update coefficient based on a restart idea of [Bea72], following [HZ06, page 12] adapted to manifolds.\n\nFields\n\ndirection_update::DirectionUpdateRule: the actual rule that is restarted\nthreshold::Real: a threshold for the restart check.\nvector_transport_method::AbstractVectorTransportMethodP: a vector transport mathcal T_ to use, see the section on vector transports\n\nConstructor\n\nConjugateGradientBealeRestartRule(\n direction_update::Union{DirectionUpdateRule,ManifoldDefaultsFactory};\n kwargs...\n)\nConjugateGradientBealeRestartRule(\n M::AbstractManifold=DefaultManifold(),\n direction_update::Union{DirectionUpdateRule,ManifoldDefaultsFactory};\n kwargs...\n)\n\nConstruct the Beale restart coefficient update rule adapted to manifolds.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M. If this is not provided, the DefaultManifold() from ManifoldsBase.jl is used.\ndirection_update: a DirectionUpdateRule or a corresponding ManifoldDefaultsFactory to produce such a rule.\n\nKeyword arguments\n\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\nthreshold=0.2\n\nSee also\n\nConjugateGradientBealeRestart, 
conjugate_gradient_descent\n\n\n\n\n\n","category":"type"},{"location":"solvers/conjugate_gradient_descent/#Manopt.DaiYuanCoefficientRule","page":"Conjugate gradient descent","title":"Manopt.DaiYuanCoefficientRule","text":"DaiYuanCoefficientRule <: DirectionUpdateRule\n\nA functor (problem, state, k) -> β_k to compute the conjugate gradient update coefficient based on [DY99] adapted to manifolds\n\nFields\n\nvector_transport_method::AbstractVectorTransportMethodP: a vector transport mathcal T_ to use, see the section on vector transports\n\nConstructor\n\nDaiYuanCoefficientRule(M::AbstractManifold; kwargs...)\n\nConstruct the Dai—Yuan coefficient update rule.\n\nKeyword arguments\n\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\nSee also\n\nDaiYuanCoefficient, conjugate_gradient_descent\n\n\n\n\n\n","category":"type"},{"location":"solvers/conjugate_gradient_descent/#Manopt.FletcherReevesCoefficientRule","page":"Conjugate gradient descent","title":"Manopt.FletcherReevesCoefficientRule","text":"FletcherReevesCoefficientRule <: DirectionUpdateRule\n\nA functor (problem, state, k) -> β_k to compute the conjugate gradient update coefficient based on [FR64] adapted to manifolds\n\nConstructor\n\nFletcherReevesCoefficientRule()\n\nConstruct the Fletcher—Reeves coefficient update rule.\n\nSee also\n\nFletcherReevesCoefficient, conjugate_gradient_descent\n\n\n\n\n\n","category":"type"},{"location":"solvers/conjugate_gradient_descent/#Manopt.HagerZhangCoefficientRule","page":"Conjugate gradient descent","title":"Manopt.HagerZhangCoefficientRule","text":"HagerZhangCoefficientRule <: DirectionUpdateRule\n\nA functor (problem, state, k) -> β_k to compute the conjugate gradient update coefficient based on [HZ05] adapted to manifolds\n\nFields\n\nvector_transport_method::AbstractVectorTransportMethodP: a vector transport mathcal T_ to use, see the section on vector 
transports\n\nConstructor\n\nHagerZhangCoefficientRule(M::AbstractManifold; kwargs...)\n\nConstruct the Hager–Zhang coefficient update rule based on [HZ05] adapted to manifolds.\n\nKeyword arguments\n\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\nSee also\n\nHagerZhangCoefficient, conjugate_gradient_descent\n\n\n\n\n\n","category":"type"},{"location":"solvers/conjugate_gradient_descent/#Manopt.HestenesStiefelCoefficientRule","page":"Conjugate gradient descent","title":"Manopt.HestenesStiefelCoefficientRule","text":"HestenesStiefelCoefficientRule <: DirectionUpdateRule\n\nA functor (problem, state, k) -> β_k to compute the conjugate gradient update coefficient based on [HS52] adapted to manifolds\n\nFields\n\nvector_transport_method::AbstractVectorTransportMethodP: a vector transport mathcal T_ to use, see the section on vector transports\n\nConstructor\n\nHestenesStiefelCoefficientRule(M::AbstractManifold; kwargs...)\n\nConstruct the Hestenes-Stiefel coefficient update rule based on [HS52] adapted to manifolds.\n\nKeyword arguments\n\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\nSee also\n\nHestenesStiefelCoefficient, conjugate_gradient_descent\n\n\n\n\n\n","category":"type"},{"location":"solvers/conjugate_gradient_descent/#Manopt.LiuStoreyCoefficientRule","page":"Conjugate gradient descent","title":"Manopt.LiuStoreyCoefficientRule","text":"LiuStoreyCoefficientRule <: DirectionUpdateRule\n\nA functor (problem, state, k) -> β_k to compute the conjugate gradient update coefficient based on [LS91] adapted to manifolds\n\nFields\n\nvector_transport_method::AbstractVectorTransportMethodP: a vector transport mathcal T_ to use, see the section on vector transports\n\nConstructor\n\nLiuStoreyCoefficientRule(M::AbstractManifold; kwargs...)\n\nConstruct the 
Liu–Storey coefficient update rule based on [LS91] adapted to manifolds.\n\nKeyword arguments\n\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\nSee also\n\nLiuStoreyCoefficient, conjugate_gradient_descent\n\n\n\n\n\n","category":"type"},{"location":"solvers/conjugate_gradient_descent/#Manopt.PolakRibiereCoefficientRule","page":"Conjugate gradient descent","title":"Manopt.PolakRibiereCoefficientRule","text":"PolakRibiereCoefficientRule <: DirectionUpdateRule\n\nA functor (problem, state, k) -> β_k to compute the conjugate gradient update coefficient based on [PR69] adapted to manifolds\n\nFields\n\nvector_transport_method::AbstractVectorTransportMethodP: a vector transport mathcal T_ to use, see the section on vector transports\n\nConstructor\n\nPolakRibiereCoefficientRule(M::AbstractManifold; kwargs...)\n\nConstruct the Polak–Ribière coefficient update rule.\n\nKeyword arguments\n\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\nSee also\n\nPolakRibiereCoefficient, conjugate_gradient_descent\n\n\n\n\n\n","category":"type"},{"location":"solvers/conjugate_gradient_descent/#Manopt.SteepestDescentCoefficientRule","page":"Conjugate gradient descent","title":"Manopt.SteepestDescentCoefficientRule","text":"SteepestDescentCoefficientRule <: DirectionUpdateRule\n\nA functor (problem, state, k) -> β_k to compute the conjugate gradient update coefficient to obtain the steepest direction, that is β_k=0.\n\nConstructor\n\nSteepestDescentCoefficientRule()\n\nConstruct the steepest descent coefficient update rule.\n\nSee also\n\nSteepestDescentCoefficient, conjugate_gradient_descent\n\n\n\n\n\n","category":"type"},{"location":"solvers/conjugate_gradient_descent/#sec-cgd-technical-details","page":"Conjugate gradient descent","title":"Technical 
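To see a coefficient rule in action without retraction or vector transport (both are the identity in ℝⁿ), here is a Euclidean sketch of the update δ_k = -grad f(p_k) + β_k δ_{k-1} with the Fletcher–Reeves coefficient β_k = ‖grad f(p_k)‖² / ‖grad f(p_{k-1})‖² on a quadratic with exact line search; the matrix, right-hand side, and helper name are illustrative assumptions:

```julia
using LinearAlgebra

# Euclidean CG sketch with the Fletcher–Reeves rule on f(p) = ½p'Ap − b'p,
# where grad f(p) = Ap − b; the exact step along δ is s = −(g'δ)/(δ'Aδ).
function cg_fletcher_reeves(A, b; iterations=size(A, 1))
    p = zeros(length(b))
    g = A * p - b                        # gradient at the start point
    δ = -g                               # initial direction: steepest descent
    for _ in 1:iterations
        s = -(g' * δ) / (δ' * A * δ)     # exact line search along δ
        p += s * δ
        g_new = A * p - b
        β = (g_new' * g_new) / (g' * g)  # Fletcher–Reeves coefficient β_k
        δ = -g_new + β * δ               # δ_k = −grad f(p_k) + β_k δ_{k−1}
        g = g_new
    end
    return p
end

A = [4.0 1.0; 1.0 3.0]; b = [1.0, 2.0]   # SPD system (assumption)
p = cg_fletcher_reeves(A, b)             # exact CG solves Ap = b in n = 2 steps
norm(A * p - b) < 1e-10
```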
details","text":"","category":"section"},{"location":"solvers/conjugate_gradient_descent/","page":"Conjugate gradient descent","title":"Conjugate gradient descent","text":"The conjugate_gradient_descent solver requires the following functions of a manifold to be available","category":"page"},{"location":"solvers/conjugate_gradient_descent/","page":"Conjugate gradient descent","title":"Conjugate gradient descent","text":"A retract!(M, q, p, X); it is recommended to set the default_retraction_method to a favourite retraction. If this default is set, a retraction_method= does not have to be specified.\nA vector_transport_to!(M, Y, p, X, q); it is recommended to set the default_vector_transport_method to a favourite vector transport. If this default is set, a vector_transport_method= or vector_transport_method_dual= (for mathcal N) does not have to be specified.\nBy default gradient descent uses ArmijoLinesearch which requires max_stepsize(M) to be set and an implementation of inner(M, p, X).\nBy default the stopping criterion uses the norm as well, to stop when the norm of the gradient is small, but if you implemented inner, the norm is provided already.\nBy default the tangent vector storing the gradient is initialized calling zero_vector(M,p).","category":"page"},{"location":"solvers/conjugate_gradient_descent/#Literature","page":"Conjugate gradient descent","title":"Literature","text":"","category":"section"},{"location":"solvers/conjugate_gradient_descent/","page":"Conjugate gradient descent","title":"Conjugate gradient descent","text":"E. M. Beale. A derivation of conjugate gradients. In: Numerical methods for nonlinear optimization, edited by F. A. Lootsma (Academic Press, London, 1972); pp. 39–43.\n\n\n\nY. H. Dai and Y. Yuan. A Nonlinear Conjugate Gradient Method with a Strong Global Convergence Property. SIAM Journal on Optimization 10, 177–182 (1999).\n\n\n\nR. Fletcher. Practical Methods of Optimization. 
2 Edition, A Wiley-Interscience Publication (John Wiley & Sons Ltd., 1987).\n\n\n\nR. Fletcher and C. M. Reeves. Function minimization by conjugate gradients. The Computer Journal 7, 149–154 (1964).\n\n\n\nW. W. Hager and H. Zhang. A survey of nonlinear conjugate gradient methods. Pacific Journal of Optimization 2, 35–58 (2006).\n\n\n\nW. W. Hager and H. Zhang. A New Conjugate Gradient Method with Guaranteed Descent and an Efficient Line Search. SIAM Journal on Optimization 16, 170–192 (2005).\n\n\n\nM. Hestenes and E. Stiefel. Methods of conjugate gradients for solving linear systems. Journal of Research of the National Bureau of Standards 49, 409 (1952).\n\n\n\nY. Liu and C. Storey. Efficient generalized conjugate gradient algorithms, part 1: Theory. Journal of Optimization Theory and Applications 69, 129–137 (1991).\n\n\n\nE. Polak and G. Ribière. Note sur la convergence de méthodes de directions conjuguées. Revue française d’informatique et de recherche opérationnelle 3, 35–43 (1969).\n\n\n\nM. J. Powell. Restart procedures for the conjugate gradient method. 
Mathematical Programming 12, 241–254 (1977).\n\n\n\n","category":"page"},{"location":"solvers/convex_bundle_method/#Convex-bundle-method","page":"Convex bundle method","title":"Convex bundle method","text":"","category":"section"},{"location":"solvers/convex_bundle_method/","page":"Convex bundle method","title":"Convex bundle method","text":"CurrentModule = Manopt","category":"page"},{"location":"solvers/convex_bundle_method/","page":"Convex bundle method","title":"Convex bundle method","text":"convex_bundle_method\nconvex_bundle_method!","category":"page"},{"location":"solvers/convex_bundle_method/#Manopt.convex_bundle_method","page":"Convex bundle method","title":"Manopt.convex_bundle_method","text":"convex_bundle_method(M, f, ∂f, p)\nconvex_bundle_method!(M, f, ∂f, p)\n\nperform a convex bundle method p^(k+1) = operatornameretr_p^(k)(-g_k) where\n\ng_k = sum_jin J_k λ_j^k mathrmP_p_kq_jX_q_j\n\nand p_k is the last serious iterate, X_q_j f(q_j), and the λ_j^k are solutions to the quadratic subproblem provided by the convex_bundle_method_subsolver.\n\nThough the subdifferential might be set-valued, the argument ∂f should always return one element from the subdifferential, though not necessarily deterministically.\n\nFor more details, see [BHJ24].\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\n∂f: the subdifferential ∂f of the cost f, implemented as a function (M, p) -> X that returns one (arbitrary) element of the subdifferential at p.\np: a point on the manifold mathcal M\n\nKeyword arguments\n\natol_λ=eps(): tolerance parameter for the convex coefficients in λ.\natol_errors=eps(): tolerance parameter for the linearization errors.\nbundle_cap=25: the maximal number of elements the bundle is allowed to remember.\nm=1e-3: the parameter to test the decrease of the cost: f(q_k+1) f(p_k) + m ξ.\ndiameter=50.0: estimate for the diameter of the level set of the objective function at the starting point.\ndomain=(M, p) -> isfinite(f(M, p)): a function that evaluates to true when the current candidate is in the domain of the objective f, and false otherwise.\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\nk_max=0: upper bound on the sectional curvature of the manifold.\nstepsize=default_stepsize(M, ConvexBundleMethodState): a functor inheriting from Stepsize to determine a step size\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nstopping_criterion=StopWhenLagrangeMultiplierLess(1e-8)|StopAfterIteration(5000): a functor indicating that the stopping criterion is fulfilled\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\nsub_state=convex_bundle_method_subsolver: a state to specify the sub solver to use. 
For a closed form solution, this indicates the type of function.\nsub_problem=AllocatingEvaluation: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nX=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal M\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/convex_bundle_method/#Manopt.convex_bundle_method!","page":"Convex bundle method","title":"Manopt.convex_bundle_method!","text":"convex_bundle_method(M, f, ∂f, p)\nconvex_bundle_method!(M, f, ∂f, p)\n\nperform a convex bundle method p^(k+1) = operatornameretr_p^(k)(-g_k) where\n\ng_k = sum_jin J_k λ_j^k mathrmP_p_kq_jX_q_j\n\nand p_k is the last serious iterate, X_q_j f(q_j), and the λ_j^k are solutions to the quadratic subproblem provided by the convex_bundle_method_subsolver.\n\nThough the subdifferential might be set-valued, the argument ∂f should always return one element from the subdifferential, though not necessarily deterministically.\n\nFor more details, see [BHJ24].\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\n∂f: the subdifferential ∂f of the cost f, implemented as a function (M, p) -> X that returns one (arbitrary) element of the subdifferential at p.\np: a point on the manifold mathcal M\n\nKeyword arguments\n\natol_λ=eps(): tolerance parameter for the convex coefficients in λ.\natol_errors=eps(): tolerance parameter for the linearization errors.\nbundle_cap=25: the maximal number of elements the bundle is allowed to remember.\nm=1e-3: the parameter to test the decrease of the cost: f(q_k+1) f(p_k) + m ξ.\ndiameter=50.0: estimate for the diameter of the level set of the objective function at the starting point.\ndomain=(M, p) -> isfinite(f(M, p)): a function that evaluates to true when the current candidate is in the domain of the objective f, and false otherwise.\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\nk_max=0: upper bound on the sectional curvature of the manifold.\nstepsize=default_stepsize(M, ConvexBundleMethodState): a functor inheriting from Stepsize to determine a step size\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nstopping_criterion=StopWhenLagrangeMultiplierLess(1e-8)|StopAfterIteration(5000): a functor indicating that the stopping criterion is fulfilled\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\nsub_state=convex_bundle_method_subsolver: a state to specify the sub solver to use. 
For a closed form solution, this indicates the type of function.\nsub_problem=AllocatingEvaluation: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nX=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal M\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/convex_bundle_method/#State","page":"Convex bundle method","title":"State","text":"","category":"section"},{"location":"solvers/convex_bundle_method/","page":"Convex bundle method","title":"Convex bundle method","text":"ConvexBundleMethodState","category":"page"},{"location":"solvers/convex_bundle_method/#Manopt.ConvexBundleMethodState","page":"Convex bundle method","title":"Manopt.ConvexBundleMethodState","text":"ConvexBundleMethodState <: AbstractManoptSolverState\n\nStores option values for a convex_bundle_method solver.\n\nFields\n\nThe following fields require a (real) number type R, as well as a point type P and a tangent vector type T.\n\natol_λ::R: tolerance parameter for the convex coefficients in λ\natol_errors::R: tolerance parameter for the linearization errors\nbundle<:AbstractVector{Tuple{<:P,<:T}}: bundle that collects each iterate with the computed subgradient at the iterate\nbundle_cap::Int: the maximal number of elements the bundle is allowed to remember\ndiameter::R: estimate for the diameter of the level set of the objective function at the starting point\ndomain: the domain of f, as a function (M, p) -> b that evaluates to true when the current candidate is in the domain of f, and false otherwise,\ng::T: descent direction\ninverse_retraction_method::AbstractInverseRetractionMethod: an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nk_max::R: upper bound on the sectional curvature of the manifold\nlinearization_errors<:AbstractVector{<:R}: linearization errors at the last serious step\nm::R: the parameter to test the decrease of the cost: f(q_k+1) f(p_k) + m ξ.\np::P: a point on the manifold mathcal M storing the current iterate\np_last_serious::P: last serious iterate\nretraction_method::AbstractRetractionMethod: a retraction operatornameretr to use, see the section on retractions\nstop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled\ntransported_subgradients: subgradients of the bundle that are transported to p_last_serious\nvector_transport_method::AbstractVectorTransportMethodP: a vector transport mathcal T_ to use, see the section on vector transports\nX::T: a tangent vector at the point p on the manifold mathcal M storing a subgradient at the current iterate\nstepsize::Stepsize: a functor inheriting from Stepsize to determine a step size\nε::R: convex combination of the linearization errors\nλ::AbstractVector{<:R}: convex coefficients from the solution of the subproblem\nξ: the stopping parameter given by ξ = -lvert g rvert^2 - ε\nsub_problem::Union{AbstractManoptProblem, F}: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state::Union{AbstractManoptSolverState, F}: a state to specify the sub solver to use. 
For a closed form solution, this indicates the type of function.\n\nConstructor\n\nConvexBundleMethodState(M::AbstractManifold, sub_problem, sub_state; kwargs...)\nConvexBundleMethodState(M::AbstractManifold, sub_problem=convex_bundle_method_subsolver; evaluation=AllocatingEvaluation(), kwargs...)\n\nGenerate the state for the convex_bundle_method on the manifold M.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nsub_problem: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state: a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.\n\nKeyword arguments\n\nMost of the following keyword arguments set default values for the fields mentioned before.\n\natol_λ=eps()\natol_errors=eps()\nbundle_cap=25\nm=1e-2\ndiameter=50.0\ndomain=(M, p) -> isfinite(f(M, p))\nk_max=0\np=rand(M): a point on the manifold mathcal M to specify the initial value\nstepsize=default_stepsize(M, ConvexBundleMethodState): a functor inheriting from Stepsize to determine a step size\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstopping_criterion=StopWhenLagrangeMultiplierLess(1e-8)|StopAfterIteration(5000): a functor indicating that the stopping criterion is fulfilled\nX=zero_vector(M, p): specify the type of tangent vector to use.\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\n\n\n\n\n","category":"type"},{"location":"solvers/convex_bundle_method/#Stopping-criteria","page":"Convex bundle method","title":"Stopping criteria","text":"","category":"section"},{"location":"solvers/convex_bundle_method/","page":"Convex 
bundle method","title":"Convex bundle method","text":"StopWhenLagrangeMultiplierLess","category":"page"},{"location":"solvers/convex_bundle_method/#Manopt.StopWhenLagrangeMultiplierLess","page":"Convex bundle method","title":"Manopt.StopWhenLagrangeMultiplierLess","text":"StopWhenLagrangeMultiplierLess <: StoppingCriterion\n\nStopping criterion for Lagrange multipliers.\n\nCurrently these are meant for the convex_bundle_method and proximal_bundle_method, where based on the Lagrange multipliers an approximate (sub)gradient g and an error estimate ε are computed.\n\nThe mode=:both requires that both ε and lvert g rvert are smaller than their tolerances for the convex_bundle_method, and that c and lvert d rvert are smaller than their tolerances for the proximal_bundle_method.\n\nThe mode=:estimate requires that, for the convex_bundle_method, -ξ = lvert g rvert^2 + ε is less than a given tolerance. For the proximal_bundle_method, the equation reads -ν = μ lvert d rvert^2 + c.\n\nConstructors\n\nStopWhenLagrangeMultiplierLess(tolerance=1e-6; mode::Symbol=:estimate, names=nothing)\n\nCreate the stopping criterion for one of the modes mentioned. Note that tolerance can be a single number for the :estimate case, but a vector of two values is required for the :both mode. 
Here the first entry specifies the tolerance for ε (c), the second the tolerance for lvert g rvert (lvert d rvert), respectively.\n\n\n\n\n\n","category":"type"},{"location":"solvers/convex_bundle_method/#Debug-functions","page":"Convex bundle method","title":"Debug functions","text":"","category":"section"},{"location":"solvers/convex_bundle_method/","page":"Convex bundle method","title":"Convex bundle method","text":"DebugWarnIfLagrangeMultiplierIncreases","category":"page"},{"location":"solvers/convex_bundle_method/#Manopt.DebugWarnIfLagrangeMultiplierIncreases","page":"Convex bundle method","title":"Manopt.DebugWarnIfLagrangeMultiplierIncreases","text":"DebugWarnIfLagrangeMultiplierIncreases <: DebugAction\n\nprint a warning if the Lagrange parameter based value -ξ of the bundle method increases.\n\nConstructor\n\nDebugWarnIfLagrangeMultiplierIncreases(warn=:Once; tol=1e2)\n\nInitialize the warning to warning level (:Once) and introduce a tolerance for the test of 1e2.\n\nThe warn level can be set to :Once to only warn the first time the cost increases, to :Always to report an increase every time it happens, and it can be set to :No to deactivate the warning, then this DebugAction is inactive. 
All other symbols are handled as if they were :Always.\n\n\n\n\n\n","category":"type"},{"location":"solvers/convex_bundle_method/#Helpers-and-internal-functions","page":"Convex bundle method","title":"Helpers and internal functions","text":"","category":"section"},{"location":"solvers/convex_bundle_method/","page":"Convex bundle method","title":"Convex bundle method","text":"convex_bundle_method_subsolver\nDomainBackTrackingStepsize","category":"page"},{"location":"solvers/convex_bundle_method/#Manopt.convex_bundle_method_subsolver","page":"Convex bundle method","title":"Manopt.convex_bundle_method_subsolver","text":"λ = convex_bundle_method_subsolver(M, p_last_serious, linearization_errors, transported_subgradients)\nconvex_bundle_method_subsolver!(M, λ, p_last_serious, linearization_errors, transported_subgradients)\n\nsolver for the subproblem of the convex bundle method at the last serious iterate p_k given the current linearization errors c_j^k, and transported subgradients mathrmP_p_kq_j X_q_j.\n\nThe computation can also be done in-place of λ.\n\nThe subproblem for the convex bundle method is\n\nbeginalign*\n operatorname*argmin_λ ℝ^lvert J_krvert\n frac12 BigllVert sum_j J_k λ_j mathrmP_p_kq_j X_q_j BigrrVert^2\n + sum_j J_k λ_j c_j^k\n \n texts tquad \n sum_j J_k λ_j = 1\n quad λ_j 0\n quad textfor all \n j J_k\nendalign*\n\nwhere J_k = j J_k-1 λ_j 0 cup k. See [BHJ24] for more details.\n\ntip: Tip\nA default subsolver based on RipQP.jl and QuadraticModels is available if these two packages are loaded.\n\n\n\n\n\n","category":"function"},{"location":"solvers/convex_bundle_method/#Manopt.DomainBackTrackingStepsize","page":"Convex bundle method","title":"Manopt.DomainBackTrackingStepsize","text":"DomainBackTrackingStepsize <: Stepsize\n\nImplement a backtracking scheme that reduces the step size as long as q = operatornameretr_p(X) yields a point closer to p than lVert X rVert_p, or as long as q is not in the domain. 
For the domain check, this step size requires a ConvexBundleMethodState.\n\n\n\n\n\n","category":"type"},{"location":"solvers/convex_bundle_method/#Literature","page":"Convex bundle method","title":"Literature","text":"","category":"section"},{"location":"solvers/convex_bundle_method/","page":"Convex bundle method","title":"Convex bundle method","text":"R. Bergmann, R. Herzog and H. Jasa. The Riemannian Convex Bundle Method, preprint (2024), arXiv:2402.13670.\n\n\n\n","category":"page"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"EditURL = \"https://github.com/JuliaManifolds/Manopt.jl/blob/master/Changelog.md\"","category":"page"},{"location":"changelog/#Changelog","page":"Changelog","title":"Changelog","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"All notable changes to the Julia package Manopt.jl will be documented in this file. The file was started with Version 0.4.","category":"page"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.","category":"page"},{"location":"changelog/#[0.5.4]-unreleased","page":"Changelog","title":"[0.5.4] - unreleased","text":"","category":"section"},{"location":"changelog/#Added","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"An automated detection of whether the tutorials are present; if they are not, no quarto run is done and an automated --exclude-tutorials option is added.","category":"page"},{"location":"changelog/#[0.5.3]-–-October-18,-2024","page":"Changelog","title":"[0.5.3] – October 18, 2024","text":"","category":"section"},{"location":"changelog/#Added-2","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"StopWhenChangeLess, StopWhenGradientChangeLess and 
StopWhenGradientLess can now use the new idea (ManifoldsBase.jl 0.15.18) of different outer norms on manifolds with components like power and product manifolds and all others that support this from the Manifolds.jl library, like Euclidean","category":"page"},{"location":"changelog/#Changed","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"stabilize max_stepsize to also work when injectivity_radius does not exist. It does, however, warn new users who activate tutorial mode.\nStart a ManoptTestSuite subpackage to store dummy types and common test helpers in.","category":"page"},{"location":"changelog/#[0.5.2]-–-October-5,-2024","page":"Changelog","title":"[0.5.2] – October 5, 2024","text":"","category":"section"},{"location":"changelog/#Added-3","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"three new symbols to make it easier to record the :Gradient, the :GradientNorm, and the :Stepsize.","category":"page"},{"location":"changelog/#Changed-2","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"fix a few typos in the documentation\nimproved the documentation for the initial guess of ArmijoLinesearchStepsize.","category":"page"},{"location":"changelog/#[0.5.1]-–-September-4,-2024","page":"Changelog","title":"[0.5.1] – September 4, 2024","text":"","category":"section"},{"location":"changelog/#Changed-3","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"slightly improves the test for the ExponentialFamilyProjection text on the about 
page.","category":"page"},{"location":"changelog/#Added-4","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"the proximal_point method.","category":"page"},{"location":"changelog/#[0.5.0]-–-August-29,-2024","page":"Changelog","title":"[0.5.0] – August 29, 2024","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"This breaking update is mainly concerned with improving a unified experience through all solvers and some usability improvements, such that for example the different gradient update rules are easier to specify.","category":"page"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"In general, we introduce a few factories that avoid having to pass the manifold to keyword arguments","category":"page"},{"location":"changelog/#Added-5","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"A ManifoldDefaultsFactory that postpones the creation/allocation of manifold-specific fields in for example direction updates, step sizes and stopping criteria. As a rule of thumb, internal structures, like a solver state, should store the final type. Any high-level interface, like the functions to start solvers, should accept such a factory in the appropriate places and call the internal _produce_type(factory, M), for example before passing something to the state.\na documentation_glossary.jl file containing a glossary of often used variables in fields, arguments, and keywords, to print them in a unified manner. 
The same for usual sections, tex, and math notation that is often used within the doc-strings.","category":"page"},{"location":"changelog/#Changed-4","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"Any Stepsize now has a Stepsize struct used internally, as the original structs were before. The newly exported terms aim to fit stepsize=... in naming and create a ManifoldDefaultsFactory instead, so that any stepsize can be created without explicitly specifying the manifold.\nConstantStepsize is no longer exported, use ConstantLength instead. The length parameter is now a positional argument following the (optional) manifold. Besides that ConstantLength works as before, just that omitting the manifold fills the one specified in the solver now.\nDecreasingStepsize is no longer exported, use DecreasingLength instead. DecreasingLength works as before, just that omitting the manifold fills the one specified in the solver now.\nArmijoLinesearch is now called ArmijoLinesearchStepsize. ArmijoLinesearch works as before, just that omitting the manifold fills the one specified in the solver now.\nWolfePowellLinesearch is now called WolfePowellLinesearchStepsize, its constant c_1 is now unified with Armijo and called sufficient_decrease, c_2 was renamed to sufficient_curvature. Besides that, WolfePowellLinesearch works as before, just that omitting the manifold fills the one specified in the solver now.\nWolfePowellBinaryLinesearch is now called WolfePowellBinaryLinesearchStepsize, its constant c_1 is now unified with Armijo and called sufficient_decrease, c_2 was renamed to sufficient_curvature. Besides that, WolfePowellBinaryLinesearch works as before, just that omitting the manifold fills the one specified in the solver now.\nNonmonotoneLinesearch is now called NonmonotoneLinesearchStepsize. 
NonmonotoneLinesearch works as before, just that omitting the manifold fills the one specified in the solver now.\nAdaptiveWNGradient is now called AdaptiveWNGradientStepsize. Its second positional argument, the gradient function, was only evaluated once for the gradient_bound default, so it has been replaced by the keyword X= accepting a tangent vector. The last positional argument p has also been moved to a keyword argument. Besides that, AdaptiveWNGradient works as before, just that omitting the manifold fills the one specified in the solver now.\nAny DirectionUpdateRule now has the Rule in its name, since the original name is used to create the ManifoldDefaultsFactory instead. The original constructor now no longer requires the manifold as a parameter; that is later done in the factory. The Rule is, however, also no longer exported.\nAverageGradient is now called AverageGradientRule. AverageGradient works as before, but the manifold as its first parameter is no longer necessary and p is now a keyword argument.\nThe IdentityUpdateRule now accepts a manifold optionally for consistency, and you can use Gradient() for short as well as its factory. Hence direction=Gradient() is now available.\nMomentumGradient is now called MomentumGradientRule. MomentumGradient works as before, but the manifold as its first parameter is no longer necessary and p is now a keyword argument.\nNesterov is now called NesterovRule. Nesterov works as before, but the manifold as its first parameter is no longer necessary and p is now a keyword argument.\nConjugateDescentCoefficient is now called ConjugateDescentCoefficientRule. ConjugateDescentCoefficient works as before, but can now use the factory in between.\nthe ConjugateGradientBealeRestart is now called ConjugateGradientBealeRestartRule. For ConjugateGradientBealeRestart the manifold is now an optional first parameter and is no longer passed via the manifold= keyword.\nDaiYuanCoefficient is now called DaiYuanCoefficientRule. 
For the DaiYuanCoefficient the manifold as its first parameter is no longer necessary and the vector transport has been unified/moved to the vector_transport_method= keyword.\nFletcherReevesCoefficient is now called FletcherReevesCoefficientRule. FletcherReevesCoefficient works as before, but can now use the factory in between.\nHagerZhangCoefficient is now called HagerZhangCoefficientRule. For the HagerZhangCoefficient the manifold as its first parameter is no longer necessary and the vector transport has been unified/moved to the vector_transport_method= keyword.\nHestenesStiefelCoefficient is now called HestenesStiefelCoefficientRule. For the HestenesStiefelCoefficient the manifold as its first parameter is no longer necessary and the vector transport has been unified/moved to the vector_transport_method= keyword.\nLiuStoreyCoefficient is now called LiuStoreyCoefficientRule. For the LiuStoreyCoefficient the manifold as its first parameter is no longer necessary and the vector transport has been unified/moved to the vector_transport_method= keyword.\nPolakRibiereCoefficient is now called PolakRibiereCoefficientRule. For the PolakRibiereCoefficient the manifold as its first parameter is no longer necessary and the vector transport has been unified/moved to the vector_transport_method= keyword.\nthe SteepestDirectionUpdateRule is now called SteepestDescentCoefficientRule. The SteepestDescentCoefficient is equivalent, but creates the new factory in the interim.\nAbstractGradientGroupProcessor is now called AbstractGradientGroupDirectionRule.\nthe StochasticGradient is now called StochasticGradientRule. 
The StochasticGradient is equivalent, but creates the new factory in the interim, so that the manifold is no longer necessary.\nthe AlternatingGradient is now called AlternatingGradientRule.\nThe AlternatingGradient is equivalent, but creates the new factory in the interim, so that the manifold is no longer necessary.\nquasi_Newton had a keyword scale_initial_operator= that was inconsistently declared (sometimes bool, sometimes real) and was unused. It is now called initial_scale=1.0 and scales the initial (diagonal, unit) matrix within the approximation of the Hessian additionally to the frac1lVert g_krVert scaling with the norm of the oldest gradient for the limited memory variant. For the full matrix variant the initial identity matrix is now scaled with this parameter.\nUnify doc strings and presentation of keyword arguments\ngeneral indexing, for example in a vector, uses i\nindex for inequality constraints is unified to i running from 1,...,m\nindex for equality constraints is unified to j running from 1,...,n\niterations now use k\nget_manopt_parameter has been renamed to get_parameter since it is internal, so internally that is clear; accessing it from outside hence reads Manopt.get_parameter anyway\nset_manopt_parameter! has been renamed to set_parameter! since it is internal, so internally that is clear; accessing it from outside hence reads Manopt.set_parameter!\nchanged the stabilize::Bool= keyword in quasi_Newton to the more flexible project!= keyword; this is also more in line with the other solvers. Internally the same is done within the QuasiNewtonLimitedMemoryDirectionUpdate. To adapt,\nthe previous stabilize=true is now set with (project!)=embed_project! in general, and if the manifold is represented by points in the embedding, like the sphere, (project!)=project! 
suffices\nthe new default is (project!)=copyto!, so by default no projection/stabilization is performed.\nthe positional argument p (usually the last or the third to last if subsolvers existed) has been moved to a keyword argument p= in all State constructors\nin NelderMeadState the population moved from positional to keyword argument as well,\nthe way to initialise sub solvers in the solver states has been unified. In the new variant\nthe sub_problem is always a positional argument; namely the last one\nif the sub_state is given as an optional positional argument after the problem, it has to be a manopt solver state\nyou can provide the new ClosedFormSolverState(e::AbstractEvaluationType) for the state to indicate that the sub_problem is a closed form solution (function call) and how it has to be called\nif you do not provide the sub_state as positional, the keyword evaluation= is used to generate the state ClosedFormSolverState.\nwhen previously p and eventually X were positional arguments, they are now moved to keyword arguments of the same name for start point and tangent vector.\nin detail\nAdaptiveRegularizationState(M, sub_problem [, sub_state]; kwargs...) replaces the (anyways unused) variant to only provide the objective; both X and p moved to keyword arguments.\nAugmentedLagrangianMethodState(M, objective, sub_problem; evaluation=...) was added\nAugmentedLagrangianMethodState(M, objective, sub_problem, sub_state; evaluation=...) now has p=rand(M) as keyword argument instead of being the second positional one\nExactPenaltyMethodState(M, sub_problem; evaluation=...) was added and ExactPenaltyMethodState(M, sub_problem, sub_state; evaluation=...) now has p=rand(M) as keyword argument instead of being the second positional one\nDifferenceOfConvexState(M, sub_problem; evaluation=...) was added and DifferenceOfConvexState(M, sub_problem, sub_state; evaluation=...) 
now has p=rand(M) as keyword argument instead of being the second positional one\nDifferenceOfConvexProximalState(M, sub_problem; evaluation=...) was added and DifferenceOfConvexProximalState(M, sub_problem, sub_state; evaluation=...) now has p=rand(M) as keyword argument instead of being the second positional one\nbumped Manifolds.jl to version 0.10; this mainly means that any algorithm working on a product manifold and requiring ArrayPartition now has to explicitly do using RecursiveArrayTools.","category":"page"},{"location":"changelog/#Fixed","page":"Changelog","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"the AverageGradientRule filled its internal vector of gradients wrongly – or mixed it up in parallel transport. This is now fixed.","category":"page"},{"location":"changelog/#Removed","page":"Changelog","title":"Removed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"the convex_bundle_method and its ConvexBundleMethodState no longer accept the keywords k_size, p_estimate, or ϱ; they are superseded by just providing k_max.\nthe truncated_conjugate_gradient_descent(M, f, grad_f, hess_f) now has the Hessian as a mandatory argument. To use the old variant, provide ApproxHessianFiniteDifference(M, copy(M, p), grad_f) to hess_f directly.\nall deprecated keyword arguments and a few function signatures were removed:\nget_equality_constraints, get_equality_constraints!, get_inequality_constraints, get_inequality_constraints! are removed.
Use their singular forms and set the index to : instead.\nStopWhenChangeLess(ε) is removed; use StopWhenChangeLess(M, ε) instead, for example to properly set the retraction used to determine the change\nIn the WolfePowellLinesearch and WolfeBinaryLinesearch the linesearch_stopsize= keyword is replaced by stop_when_stepsize_less=\nDebugChange and RecordChange had a manifold= and an invretr keyword that were replaced by the first positional argument M and inverse_retraction_method=, respectively\nin the NonlinearLeastSquaresObjective and LevenbergMarquardt the jacB= keyword is now called jacobian_tangent_basis=\nin particle_swarm the n= keyword is replaced by swarm_size=.\nupdate_stopping_criterion! has been removed and unified with set_parameter!. The code adaptations are\nto set a parameter of a stopping criterion, just replace update_stopping_criterion!(sc, :Val, v) with set_parameter!(sc, :Val, v)\nto update a stopping criterion in a solver state, replace the old update_stopping_criterion!(state, :Val, v), which was passed down to the stopping criterion, by the explicit pass down with set_parameter!(state, :StoppingCriterion, :Val, v)","category":"page"},{"location":"changelog/#[0.4.69]-–-August-3,-2024","page":"Changelog","title":"[0.4.69] – August 3, 2024","text":"","category":"section"},{"location":"changelog/#Changed-5","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"Improved performance of the Interior Point Newton Method.","category":"page"},{"location":"changelog/#[0.4.68]-–-August-2,-2024","page":"Changelog","title":"[0.4.68] – August 2, 2024","text":"","category":"section"},{"location":"changelog/#Added-6","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"an Interior Point Newton Method, the interior_point_newton\na conjugate_residual algorithm to solve a linear system on a tangent
space.\nArmijoLinesearch now allows for the additional_decrease_condition and additional_increase_condition keywords to specify further conditions for when to accept a decrease or increase of the stepsize.\nadd a DebugFeasibility to have a debug print about feasibility of points in constrained optimisation employing the new is_feasible function\nadd an InteriorPointCentralityCondition check that can be added for step candidates within the line search of interior_point_newton\nAdd several new functors\nthe LagrangianCost, LagrangianGradient, LagrangianHessian, which, based on a constrained objective, allow constructing the Hessian objective of its Lagrangian\nthe CondensedKKTVectorField and its CondensedKKTVectorFieldJacobian, which are used to solve a linear system within interior_point_newton\nthe KKTVectorField as well as its KKTVectorFieldJacobian and KKTVectorFieldAdjointJacobian\nthe KKTVectorFieldNormSq and its KKTVectorFieldNormSqGradient used within the Armijo line search of interior_point_newton\nNew stopping criteria\nA StopWhenRelativeResidualLess for the conjugate_residual\nA StopWhenKKTResidualLess for the interior_point_newton","category":"page"},{"location":"changelog/#[0.4.67]-–-July-25,-2024","page":"Changelog","title":"[0.4.67] – July 25, 2024","text":"","category":"section"},{"location":"changelog/#Added-7","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"max_stepsize methods for Hyperrectangle.","category":"page"},{"location":"changelog/#Fixed-2","page":"Changelog","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"a few typos in the documentation\nWolfePowellLinesearch no longer uses max_stepsize with an invalid point by default.","category":"page"},{"location":"changelog/#[0.4.66]-June-27,-2024","page":"Changelog","title":"[0.4.66] June 27,
2024","text":"","category":"section"},{"location":"changelog/#Changed-6","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"Remove functions estimate_sectional_curvature, ζ_1, ζ_2, close_point from convex_bundle_method\nRemove some unused fields and arguments such as p_estimate, ϱ, and α from ConvexBundleMethodState in favor of just k_max\nChange parameter R placement in ProximalBundleMethodState to fifth position","category":"page"},{"location":"changelog/#[0.4.65]-June-13,-2024","page":"Changelog","title":"[0.4.65] June 13, 2024","text":"","category":"section"},{"location":"changelog/#Changed-7","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"refactor stopping criteria to not store a sc.reason internally, but instead only generate the reason (and hence allocate a string) when actually asked for a reason.","category":"page"},{"location":"changelog/#[0.4.64]-June-4,-2024","page":"Changelog","title":"[0.4.64] June 4, 2024","text":"","category":"section"},{"location":"changelog/#Added-8","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"Remodel the constraints and their gradients into separate VectorGradientFunctions to reduce code duplication and encapsulate the inner model of these functions and their gradients\nIntroduce a ConstrainedManoptProblem to model different ranges for the gradients in the new VectorGradientFunctions beyond the default NestedPowerRepresentation\nintroduce a VectorHessianFunction to also model that one can provide the vector of Hessians to constraints\nintroduce a more flexible indexing beyond single indexing, to also include arbitrary ranges when accessing vector functions and their gradients and hence also for constraints and their
gradients.","category":"page"},{"location":"changelog/#Changed-8","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"Remodel ConstrainedManifoldObjective to store an AbstractManifoldObjective internally instead of directly f and grad_f, allowing also Hessian objectives therein and implementing access to this Hessian\nFixed a bug where Lanczos produced NaNs when started exactly in a minimizer, since we divide by the gradient norm.","category":"page"},{"location":"changelog/#Deprecated","page":"Changelog","title":"Deprecated","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"deprecate get_grad_equality_constraints(M, o, p), use get_grad_equality_constraint(M, o, p, :) from the more flexible indexing instead.","category":"page"},{"location":"changelog/#[0.4.63]-May-11,-2024","page":"Changelog","title":"[0.4.63] May 11, 2024","text":"","category":"section"},{"location":"changelog/#Added-9","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":":reinitialize_direction_update option for quasi-Newton behavior when the direction is not a descent one.
It is now the new default for QuasiNewtonState.\nQuasi-Newton direction update rules are now initialized upon start of the solver with the new internal function initialize_update!.","category":"page"},{"location":"changelog/#Fixed-3","page":"Changelog","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"ALM and EPM no longer keep a part of the quasi-Newton subsolver state between runs.","category":"page"},{"location":"changelog/#Changed-9","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"Quasi-Newton solvers: :reinitialize_direction_update is the new default behavior in case a non-descent direction is detected, instead of :step_towards_negative_gradient. :step_towards_negative_gradient is still available when explicitly set using the nondescent_direction_behavior keyword argument.","category":"page"},{"location":"changelog/#[0.4.62]-May-3,-2024","page":"Changelog","title":"[0.4.62] May 3, 2024","text":"","category":"section"},{"location":"changelog/#Changed-10","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"bumped dependency of ManifoldsBase.jl to 0.15.9 and imported their numerical verify functions.
This changes the throw_error keyword used internally to an error= with a symbol.","category":"page"},{"location":"changelog/#[0.4.61]-April-27,-2024","page":"Changelog","title":"[0.4.61] April 27, 2024","text":"","category":"section"},{"location":"changelog/#Added-10","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"Tests use Aqua.jl to spot problems in the code\nintroduce a feature-based list of solvers and reduce the details in the alphabetical list\nadds a PolyakStepsize\nadded a get_subgradient for AbstractManifoldGradientObjectives since their gradient is a special case of a subgradient.","category":"page"},{"location":"changelog/#Fixed-4","page":"Changelog","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"get_last_stepsize was defined in quite different ways that caused ambiguities. That is now internally a bit restructured and should work better. Internally this means that the interim dispatch on get_last_stepsize(problem, state, step, vars...) was removed. Now the only two left are get_last_stepsize(p, s, vars...) and the one directly checking get_last_stepsize(::Stepsize) for stored values.\nthe accidentally exported set_manopt_parameter! is no longer exported","category":"page"},{"location":"changelog/#Changed-11","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"get_manopt_parameter and set_manopt_parameter! have been revised and better documented; they now use more semantic symbols (with capital letters) instead of direct field access (lower letter symbols).
Since these are not exported, this is considered an internal, hence non-breaking change.\nsemantic symbols are now all nouns in upper case letters\n:active is changed to :Activity","category":"page"},{"location":"changelog/#[0.4.60]-April-10,-2024","page":"Changelog","title":"[0.4.60] April 10, 2024","text":"","category":"section"},{"location":"changelog/#Added-11","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"RecordWhenActive to allow records to be deactivated during runtime, symbol :WhenActive\nRecordSubsolver to record the result of a subsolver recording in the main solver, symbol :Subsolver\nRecordStoppingReason to record the reason a solver stopped\nmade the RecordFactory more flexible and quite similar to DebugFactory, such that it is now also easy to specify recordings at the end of solver runs. This can especially be used to record final states of sub solvers.","category":"page"},{"location":"changelog/#Changed-12","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"being a bit more strict with internal tools and made the factories for record non-exported, so this is the same as for debug.","category":"page"},{"location":"changelog/#Fixed-5","page":"Changelog","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"The name :Subsolver to generate DebugWhenActive was misleading; it is now called :WhenActive, referring to “print debug only when set active, that is by the parent (main) solver”.\nthe old version of specifying Symbol => RecordAction for later access was ambiguous, since","category":"page"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"it could also mean to store the action in the dictionary under that symbol.
Hence the order for access was switched to RecordAction => Symbol to resolve that ambiguity.","category":"page"},{"location":"changelog/#[0.4.59]-April-7,-2024","page":"Changelog","title":"[0.4.59] April 7, 2024","text":"","category":"section"},{"location":"changelog/#Added-12","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"A Riemannian variant of the CMA-ES (Covariance Matrix Adaptation Evolutionary Strategy) algorithm, cma_es.","category":"page"},{"location":"changelog/#Fixed-6","page":"Changelog","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"The constructor dispatch for StopWhenAny with Vector had an incorrect element type assertion, which was fixed.","category":"page"},{"location":"changelog/#[0.4.58]-March-18,-2024","page":"Changelog","title":"[0.4.58] March 18, 2024","text":"","category":"section"},{"location":"changelog/#Added-13","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"more advanced methods to add debug to the beginning of an algorithm, a step, or the end of the algorithm with DebugAction entries at :Start, :BeforeIteration, :Iteration, and :Stop, respectively.\nIntroduce a Pair-based format to add elements to these hooks, while all others are now added to :Iteration (no longer to :All)\n(planned) add an easy possibility to also record the initial stage and not only after the first iteration.","category":"page"},{"location":"changelog/#Changed-13","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"Changed the symbol for the :Step dictionary to be :Iteration, to unify this with the symbols used in recording, and removed the :All symbol. On the fine granular scale, all but :Start debugs are now reset on init.
Since these are merely internal entries in the debug dictionary, this is considered non-breaking.\nintroduce a StopWhenSwarmVelocityLess stopping criterion for particle_swarm replacing the current default of the swarm change, since this is a bit more efficient to compute","category":"page"},{"location":"changelog/#Fixed-7","page":"Changelog","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"fixed the outdated documentation of TruncatedConjugateGradientState, which now correctly states that p is no longer stored, but the algorithm runs on TpM.\nimplemented the missing get_iterate for TruncatedConjugateGradientState.","category":"page"},{"location":"changelog/#[0.4.57]-March-15,-2024","page":"Changelog","title":"[0.4.57] March 15, 2024","text":"","category":"section"},{"location":"changelog/#Changed-14","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"convex_bundle_method uses the sectional_curvature from ManifoldsBase.jl.\nconvex_bundle_method no longer has the unused k_min keyword argument.\nManifoldsBase.jl now is running on Documenter 1.3; Manopt.jl documentation now uses DocumenterInterLinks to refer to sections and functions from ManifoldsBase.jl","category":"page"},{"location":"changelog/#Fixed-8","page":"Changelog","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"fixes a typo that, when passing sub_kwargs to trust_regions, caused an error in the decoration of the sub objective.","category":"page"},{"location":"changelog/#[0.4.56]-March-4,-2024","page":"Changelog","title":"[0.4.56] March 4, 2024","text":"","category":"section"},{"location":"changelog/#Added-14","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"The option
:step_towards_negative_gradient for nondescent_direction_behavior in quasi-Newton solvers no longer emits a warning by default. This has been moved to a message that can be accessed/displayed with DebugMessages\nDebugMessages now has a second positional argument, specifying whether all messages, or just the first (:Once) should be displayed.","category":"page"},{"location":"changelog/#[0.4.55]-March-3,-2024","page":"Changelog","title":"[0.4.55] March 3, 2024","text":"","category":"section"},{"location":"changelog/#Added-15","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"Option nondescent_direction_behavior for quasi-Newton solvers. By default it checks for a non-descent direction, which may not be handled well by some stepsize selection algorithms.","category":"page"},{"location":"changelog/#Fixed-9","page":"Changelog","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"unified documentation, especially function signatures, further.\nfixed a few typos related to math formulae in the doc strings.","category":"page"},{"location":"changelog/#[0.4.54]-February-28,-2024","page":"Changelog","title":"[0.4.54] February 28, 2024","text":"","category":"section"},{"location":"changelog/#Added-16","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"convex_bundle_method optimization algorithm for non-smooth geodesically convex functions\nproximal_bundle_method optimization algorithm for non-smooth functions.\nStopWhenSubgradientNormLess and StopWhenLagrangeMultiplierLess stopping criteria.","category":"page"},{"location":"changelog/#Fixed-10","page":"Changelog","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"Doc strings now follow a vale.sh policy.
Though this is not fully working, this PR improves a lot of the doc strings concerning wording and spelling.","category":"page"},{"location":"changelog/#[0.4.53]-February-13,-2024","page":"Changelog","title":"[0.4.53] February 13, 2024","text":"","category":"section"},{"location":"changelog/#Fixed-11","page":"Changelog","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"fixes two storage action defaults that accidentally still tried to initialize a :Population (as modified back to :Iterate in 0.4.49).\nfix a few typos in the documentation and add a reference for the subgradient method.","category":"page"},{"location":"changelog/#[0.4.52]-February-5,-2024","page":"Changelog","title":"[0.4.52] February 5, 2024","text":"","category":"section"},{"location":"changelog/#Added-17","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"introduce an environment-persistent way of setting global values with the set_manopt_parameter!
function using Preferences.jl.\nintroduce such a value named :Mode to enable a \"Tutorial\" mode that shall often provide more warnings and information for people getting started with optimisation on manifolds","category":"page"},{"location":"changelog/#[0.4.51]-January-30,-2024","page":"Changelog","title":"[0.4.51] January 30, 2024","text":"","category":"section"},{"location":"changelog/#Added-18","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"A StopWhenSubgradientNormLess stopping criterion for subgradient-based optimization.\nAllow the message= of the DebugIfEntry debug action to contain a format element to print the field in the message as well.","category":"page"},{"location":"changelog/#[0.4.50]-January-26,-2024","page":"Changelog","title":"[0.4.50] January 26, 2024","text":"","category":"section"},{"location":"changelog/#Fixed-12","page":"Changelog","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"Fix Quasi Newton on complex manifolds.","category":"page"},{"location":"changelog/#[0.4.49]-January-18,-2024","page":"Changelog","title":"[0.4.49] January 18, 2024","text":"","category":"section"},{"location":"changelog/#Added-19","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"A StopWhenEntryChangeLess to be able to stop on arbitrarily small changes of specific fields\ngeneralises StopWhenGradientNormLess to accept arbitrary norm= functions\nrefactor the default in particle_swarm to no longer “misuse” the iteration change, but actually use the new :swarm entry","category":"page"},{"location":"changelog/#[0.4.48]-January-16,-2024","page":"Changelog","title":"[0.4.48] January 16,
2024","text":"","category":"section"},{"location":"changelog/#Fixed-13","page":"Changelog","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"fixes an imprecision in the interface of get_iterate that sometimes led to the swarm of particle_swarm being returned as the iterate.\nrefactor particle_swarm in naming and access functions to avoid this also in the future. To access the whole swarm, one now should use get_manopt_parameter(pss, :Population)","category":"page"},{"location":"changelog/#[0.4.47]-January-6,-2024","page":"Changelog","title":"[0.4.47] January 6, 2024","text":"","category":"section"},{"location":"changelog/#Fixed-14","page":"Changelog","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"fixed a bug where the retraction set in check_Hessian was not passed on to the optional inner check_gradient call, which could lead to unwanted side effects, see #342.","category":"page"},{"location":"changelog/#[0.4.46]-January-1,-2024","page":"Changelog","title":"[0.4.46] January 1, 2024","text":"","category":"section"},{"location":"changelog/#Changed-15","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"An error is thrown when a line search from LineSearches.jl reports search failure.\nChanged the default stopping criterion in the ALM algorithm to mitigate an issue occurring when the step size is very small.\nDefault memory length in the default ALM subsolver is now capped at the manifold dimension.\nReplaced CI testing on Julia 1.8 with testing on Julia 1.10.","category":"page"},{"location":"changelog/#Fixed-15","page":"Changelog","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"A bug in the LineSearches.jl extension leading to slower convergence.\nFixed a bug in L-BFGS related to memory
storage, which caused significantly slower convergence.","category":"page"},{"location":"changelog/#[0.4.45]-December-28,-2023","page":"Changelog","title":"[0.4.45] December 28, 2023","text":"","category":"section"},{"location":"changelog/#Added-20","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"Introduce sub_kwargs and sub_stopping_criterion for trust_regions as noticed in #336","category":"page"},{"location":"changelog/#Changed-16","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"WolfePowellLineSearch, ArmijoLineSearch step sizes now allocate less\nlinesearch_backtrack! is now available\nQuasi Newton Updates can work in-place of a direction vector as well.\nFaster safe_indices in L-BFGS.","category":"page"},{"location":"changelog/#[0.4.44]-December-12,-2023","page":"Changelog","title":"[0.4.44] December 12, 2023","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"Formally one could consider this version breaking, since a few functions have been moved that were used in example scripts in earlier versions (0.3.x). These examples are now available again within ManoptExamples.jl, and with their “reappearance” the corresponding costs, gradients, differentials, adjoint differentials, and proximal maps have been moved there as well. This is not considered breaking, since the functions were only used in the old, removed examples. Each and every moved function is still documented.
They have been partly renamed, and their documentation and testing has been extended.","category":"page"},{"location":"changelog/#Changed-17","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"Bumped and added dependencies on all 3 Project.toml files, the main one, the docs/, and the tutorials/ one.\nartificial_S2_lemniscate is available as ManoptExamples.Lemniscate and works on arbitrary manifolds now.\nartificial_S1_signal is available as ManoptExamples.artificial_S1_signal\nartificial_S1_slope_signal is available as ManoptExamples.artificial_S1_slope_signal\nartificial_S2_composite_bezier_curve is available as ManoptExamples.artificial_S2_composite_Bezier_curve\nartificial_S2_rotation_image is available as ManoptExamples.artificial_S2_rotation_image\nartificial_S2_whirl_image is available as ManoptExamples.artificial_S2_whirl_image\nartificial_S2_whirl_patch is available as ManoptExamples.artificial_S2_whirl_patch\nartificial_SAR_image is available as ManoptExamples.artificial_SAR_image\nartificial_SPD_image is available as ManoptExamples.artificial_SPD_image\nartificial_SPD_image2 is available as ManoptExamples.artificial_SPD_image\nadjoint_differential_forward_logs is available as ManoptExamples.adjoint_differential_forward_logs\nadjoint_differential_bezier_control is available as ManoptExamples.adjoint_differential_Bezier_control_points\nBezierSegment is available as ManoptExamples.BezierSegment\ncost_acceleration_bezier is available as ManoptExamples.acceleration_Bezier\ncost_L2_acceleration_bezier is available as ManoptExamples.L2_acceleration_Bezier\ncostIntrICTV12 is available as ManoptExamples.Intrinsic_infimal_convolution_TV12\ncostL2TV is available as ManoptExamples.L2_Total_Variation\ncostL2TV12 is available as ManoptExamples.L2_Total_Variation_1_2\ncostL2TV2 is available as ManoptExamples.L2_second_order_Total_Variation\ncostTV is available as
ManoptExamples.Total_Variation\ncostTV2 is available as ManoptExamples.second_order_Total_Variation\nde_casteljau is available as ManoptExamples.de_Casteljau\ndifferential_forward_logs is available as ManoptExamples.differential_forward_logs\ndifferential_bezier_control is available as ManoptExamples.differential_Bezier_control_points\nforward_logs is available as ManoptExamples.forward_logs\nget_bezier_degree is available as ManoptExamples.get_Bezier_degree\nget_bezier_degrees is available as ManoptExamples.get_Bezier_degrees\nget_Bezier_inner_points is available as ManoptExamples.get_Bezier_inner_points\nget_bezier_junction_tangent_vectors is available as ManoptExamples.get_Bezier_junction_tangent_vectors\nget_bezier_junctions is available as ManoptExamples.get_Bezier_junctions\nget_bezier_points is available as ManoptExamples.get_Bezier_points\nget_bezier_segments is available as ManoptExamples.get_Bezier_segments\ngrad_acceleration_bezier is available as ManoptExamples.grad_acceleration_Bezier\ngrad_L2_acceleration_bezier is available as ManoptExamples.grad_L2_acceleration_Bezier\ngrad_Intrinsic_infimal_convolution_TV12 is available as ManoptExamples.Intrinsic_infimal_convolution_TV12\ngrad_TV is available as ManoptExamples.grad_Total_Variation\nproject_collaborative_TV is available as ManoptExamples.project_collaborative_TV\nprox_parallel_TV is available as ManoptExamples.prox_parallel_TV\ngrad_TV2 is available as ManoptExamples.grad_second_order_Total_Variation\nprox_TV is available as ManoptExamples.prox_Total_Variation\nprox_TV2 is available as ManoptExamples.prox_second_order_Total_Variation","category":"page"},{"location":"changelog/#[0.4.43]-November-19,-2023","page":"Changelog","title":"[0.4.43] November 19,
2023","text":"","category":"section"},{"location":"changelog/#Added-21","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"vale.sh as a CI step to help keep the documentation consistent","category":"page"},{"location":"changelog/#[0.4.42]-November-6,-2023","page":"Changelog","title":"[0.4.42] November 6, 2023","text":"","category":"section"},{"location":"changelog/#Added-22","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"add Manopt.JuMP_Optimizer implementing JuMP's solver interface","category":"page"},{"location":"changelog/#[0.4.41]-November-2,-2023","page":"Changelog","title":"[0.4.41] November 2, 2023","text":"","category":"section"},{"location":"changelog/#Changed-18","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"trust_regions is now more flexible and the sub solver (Steihaug-Toint tCG by default) can now be exchanged.\nadaptive_regularization_with_cubics is now more flexible as well, where it previously was too tightly coupled to the Lanczos solver.\nUnified documentation notation and bumped dependencies to use DocumenterCitations 1.3","category":"page"},{"location":"changelog/#[0.4.40]-October-24,-2023","page":"Changelog","title":"[0.4.40] October 24, 2023","text":"","category":"section"},{"location":"changelog/#Added-23","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"add a --help argument to docs/make.jl to document all available command line arguments\nadd a --exclude-tutorials argument to docs/make.jl.
This way, when quarto is not available on a computer, the docs can still be built with the tutorials not being added to the menu, so that Documenter does not expect them to exist.","category":"page"},{"location":"changelog/#Changes","page":"Changelog","title":"Changes","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"Bump dependencies to ManifoldsBase.jl 0.15 and Manifolds.jl 0.9\nmove the ARC CG subsolver to the main package, since TangentSpace is now already available from ManifoldsBase.","category":"page"},{"location":"changelog/#[0.4.39]-October-9,-2023","page":"Changelog","title":"[0.4.39] October 9, 2023","text":"","category":"section"},{"location":"changelog/#Changes-2","page":"Changelog","title":"Changes","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"also use the pair of a retraction and the inverse retraction (see last update) to perform the relaxation within the Douglas-Rachford algorithm.","category":"page"},{"location":"changelog/#[0.4.38]-October-8,-2023","page":"Changelog","title":"[0.4.38] October 8, 2023","text":"","category":"section"},{"location":"changelog/#Changes-3","page":"Changelog","title":"Changes","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"avoid allocations when calling get_jacobian! within the Levenberg-Marquardt algorithm.","category":"page"},{"location":"changelog/#Fixed-16","page":"Changelog","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"Fix a lot of typos in the documentation","category":"page"},{"location":"changelog/#[0.4.37]-September-28,-2023","page":"Changelog","title":"[0.4.37] September 28, 2023","text":"","category":"section"},{"location":"changelog/#Changes-4","page":"Changelog","title":"Changes","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"add more of the Riemannian Levenberg-Marquardt algorithm's parameters as keywords, so they can be changed on call\ngeneralize the internal reflection of Douglas-Rachford, such that it also works with an arbitrary pair of a reflection and an inverse reflection.","category":"page"},{"location":"changelog/#[0.4.36]-September-20,-2023","page":"Changelog","title":"[0.4.36] September 20, 2023","text":"","category":"section"},{"location":"changelog/#Fixed-17","page":"Changelog","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"Fixed a bug that caused non-matrix points and vectors to fail when working with approximate","category":"page"},{"location":"changelog/#[0.4.35]-September-14,-2023","page":"Changelog","title":"[0.4.35] September 14, 2023","text":"","category":"section"},{"location":"changelog/#Added-24","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"The access to functions of the objective is now unified and encapsulated in proper get_ functions.","category":"page"},{"location":"changelog/#[0.4.34]-September-02,-2023","page":"Changelog","title":"[0.4.34] September 02, 
2023","text":"","category":"section"},{"location":"changelog/#Added-25","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"a ManifoldEuclideanGradientObjective to allow the cost, gradient, Hessian, and other first- or second-derivative based elements to be Euclidean and converted when needed.\na keyword objective_type=:Euclidean for all solvers that specifies that an Objective of the new type shall be created","category":"page"},{"location":"changelog/#[0.4.33]-August-24,-2023","page":"Changelog","title":"[0.4.33] August 24, 2023","text":"","category":"section"},{"location":"changelog/#Added-26","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"ConstantStepsize and DecreasingStepsize now have an additional field type::Symbol to indicate whether the step size should be constant relative to the gradient norm or absolutely constant.","category":"page"},{"location":"changelog/#[0.4.32]-August-23,-2023","page":"Changelog","title":"[0.4.32] August 23, 2023","text":"","category":"section"},{"location":"changelog/#Added-27","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"The adaptive regularization with cubics (ARC) solver.","category":"page"},{"location":"changelog/#[0.4.31]-August-14,-2023","page":"Changelog","title":"[0.4.31] August 14, 2023","text":"","category":"section"},{"location":"changelog/#Added-28","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"A :Subsolver keyword in the debug= keyword argument that activates the new DebugWhenActive to de/activate subsolver debug from the main solver's DebugEvery.","category":"page"},{"location":"changelog/#[0.4.30]-August-3,-2023","page":"Changelog","title":"[0.4.30] August 3, 
2023","text":"","category":"section"},{"location":"changelog/#Changed-19","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"References in the documentation are now rendered using DocumenterCitations.jl\nAsymptote export now also accepts a size in pixels instead of its default 4cm size, and rendering can be deactivated by setting it to nothing.","category":"page"},{"location":"changelog/#[0.4.29]-July-12,-2023","page":"Changelog","title":"[0.4.29] July 12, 2023","text":"","category":"section"},{"location":"changelog/#Fixed-18","page":"Changelog","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"fixed a bug where cyclic_proximal_point did not work with decorated objectives.","category":"page"},{"location":"changelog/#[0.4.28]-June-24,-2023","page":"Changelog","title":"[0.4.28] June 24, 2023","text":"","category":"section"},{"location":"changelog/#Changed-20","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"max_stepsize was specialized for FixedRankManifold to follow Matlab Manopt.","category":"page"},{"location":"changelog/#[0.4.27]-June-15,-2023","page":"Changelog","title":"[0.4.27] June 15, 2023","text":"","category":"section"},{"location":"changelog/#Added-29","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"The AdaptiveWNGrad stepsize is available as a new stepsize functor.","category":"page"},{"location":"changelog/#Fixed-19","page":"Changelog","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"Levenberg-Marquardt now possesses its parameters initial_residual_values and initial_jacobian_f also as keyword arguments, such that their default initialisations can be 
adapted, if necessary","category":"page"},{"location":"changelog/#[0.4.26]-June-11,-2023","page":"Changelog","title":"[0.4.26] June 11, 2023","text":"","category":"section"},{"location":"changelog/#Added-30","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"simplify usage of gradient descent as sub solver in the DoC solvers.\nadd a get_state function\ndocument indicates_convergence.","category":"page"},{"location":"changelog/#[0.4.25]-June-5,-2023","page":"Changelog","title":"[0.4.25] June 5, 2023","text":"","category":"section"},{"location":"changelog/#Fixed-20","page":"Changelog","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"Fixes an allocation bug in the difference of convex algorithm","category":"page"},{"location":"changelog/#[0.4.24]-June-4,-2023","page":"Changelog","title":"[0.4.24] June 4, 2023","text":"","category":"section"},{"location":"changelog/#Added-31","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"another workflow that deletes old PR renderings from the docs to keep them smaller in overall size.","category":"page"},{"location":"changelog/#Changes-5","page":"Changelog","title":"Changes","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"bump dependencies since the extension between Manifolds.jl and ManifoldsDiff.jl has been moved to Manifolds.jl","category":"page"},{"location":"changelog/#[0.4.23]-June-4,-2023","page":"Changelog","title":"[0.4.23] June 4, 2023","text":"","category":"section"},{"location":"changelog/#Added-32","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"More details on the Count and Cache 
tutorial","category":"page"},{"location":"changelog/#Changed-21","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"loosen constraints slightly","category":"page"},{"location":"changelog/#[0.4.22]-May-31,-2023","page":"Changelog","title":"[0.4.22] May 31, 2023","text":"","category":"section"},{"location":"changelog/#Added-33","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"A tutorial on how to implement a solver","category":"page"},{"location":"changelog/#[0.4.21]-May-22,-2023","page":"Changelog","title":"[0.4.21] May 22, 2023","text":"","category":"section"},{"location":"changelog/#Added-34","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"A ManifoldCacheObjective as a decorator for objectives to cache results of calls, using LRU Caches as a weak dependency. For now this works with cost and gradient evaluations\nA ManifoldCountObjective as a decorator for objectives to enable counting of calls to for example the cost and the gradient\nadds a return_objective keyword, that switches the return of a solver to a tuple (o, s), where o is the (possibly decorated) objective, and s is the “classical” solver return (state or point). 
This way the counted values can be accessed and the cache can be reused.\nchange solvers on the mid level (of the form solver(M, objective, p)) to also accept decorated objectives","category":"page"},{"location":"changelog/#Changed-22","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"Switch all Requires weak dependencies to actual weak dependencies starting in Julia 1.9","category":"page"},{"location":"changelog/#[0.4.20]-May-11,-2023","page":"Changelog","title":"[0.4.20] May 11, 2023","text":"","category":"section"},{"location":"changelog/#Changed-23","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"the default tolerances for the numerical check_ functions were loosened a bit, such that check_vector can also be changed in its tolerances.","category":"page"},{"location":"changelog/#[0.4.19]-May-7,-2023","page":"Changelog","title":"[0.4.19] May 7, 2023","text":"","category":"section"},{"location":"changelog/#Added-35","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"the sub solver for trust_regions is now customizable and can be exchanged.","category":"page"},{"location":"changelog/#Changed-24","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"slightly changed the definitions of the solver states for ALM and EPM to be type stable","category":"page"},{"location":"changelog/#[0.4.18]-May-4,-2023","page":"Changelog","title":"[0.4.18] May 4, 2023","text":"","category":"section"},{"location":"changelog/#Added-36","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"A function check_Hessian(M, f, grad_f, Hess_f) to numerically 
verify the (Riemannian) Hessian of a function f","category":"page"},{"location":"changelog/#[0.4.17]-April-28,-2023","page":"Changelog","title":"[0.4.17] April 28, 2023","text":"","category":"section"},{"location":"changelog/#Added-37","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"A new interface of the form alg(M, objective, p0) to allow reusing objectives without creating AbstractManoptSolverStates and calling solve!. This especially still allows for any decoration of the objective and/or the state using debug=, or record=.","category":"page"},{"location":"changelog/#Changed-25","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"All solvers now have the initial point p as an optional parameter, making it more accessible to first-time users; gradient_descent(M, f, grad_f) is equivalent to gradient_descent(M, f, grad_f, rand(M))","category":"page"},{"location":"changelog/#Fixed-21","page":"Changelog","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"Unified the framework to work on manifolds where points are represented by numbers for several solvers","category":"page"},{"location":"changelog/#[0.4.16]-April-18,-2023","page":"Changelog","title":"[0.4.16] April 18, 2023","text":"","category":"section"},{"location":"changelog/#Fixed-22","page":"Changelog","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"the inner products used in truncated_gradient_descent now also work thoroughly on complex matrix manifolds","category":"page"},{"location":"changelog/#[0.4.15]-April-13,-2023","page":"Changelog","title":"[0.4.15] April 13, 
2023","text":"","category":"section"},{"location":"changelog/#Changed-26","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"trust_regions(M, f, grad_f, hess_f, p) now has the Hessian hess_f as well as the start point p0 as optional parameters and approximates the Hessian otherwise.\ntrust_regions!(M, f, grad_f, hess_f, p) has the Hessian as an optional parameter and approximates it otherwise.","category":"page"},{"location":"changelog/#Removed-2","page":"Changelog","title":"Removed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"support for ManifoldsBase.jl 0.13.x, since with the definition of copy(M,p::Number), in 0.14.4, that one is used instead of defining it ourselves.","category":"page"},{"location":"changelog/#[0.4.14]-April-06,-2023","page":"Changelog","title":"[0.4.14] April 06, 2023","text":"","category":"section"},{"location":"changelog/#Changed-27","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"particle_swarm now uses much more in-place operations","category":"page"},{"location":"changelog/#Fixed-23","page":"Changelog","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"particle_swarm still used quite a few deepcopy(p) commands, which were replaced by copy(M, p)","category":"page"},{"location":"changelog/#[0.4.13]-April-09,-2023","page":"Changelog","title":"[0.4.13] April 09, 2023","text":"","category":"section"},{"location":"changelog/#Added-38","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"get_message to obtain messages from sub steps of a solver\nDebugMessages to display the new messages in debug\nsafeguards in Armijo line search and L-BFGS against numerical over- 
and underflow that report in messages","category":"page"},{"location":"changelog/#[0.4.12]-April-4,-2023","page":"Changelog","title":"[0.4.12] April 4, 2023","text":"","category":"section"},{"location":"changelog/#Added-39","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"Introduce the Difference of Convex Algorithm (DCA) difference_of_convex_algorithm(M, f, g, ∂h, p0)\nIntroduce the Difference of Convex Proximal Point Algorithm (DCPPA) difference_of_convex_proximal_point(M, prox_g, grad_h, p0)\nIntroduce a StopWhenGradientChangeLess stopping criterion","category":"page"},{"location":"changelog/#[0.4.11]-March-27,-2023","page":"Changelog","title":"[0.4.11] March 27, 2023","text":"","category":"section"},{"location":"changelog/#Changed-28","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"adapt tolerances in tests to the speed/accuracy optimized distance on the sphere in Manifolds.jl (part II)","category":"page"},{"location":"changelog/#[0.4.10]-March-26,-2023","page":"Changelog","title":"[0.4.10] March 26, 2023","text":"","category":"section"},{"location":"changelog/#Changed-29","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"adapt tolerances in tests to the speed/accuracy optimized distance on the sphere in Manifolds.jl","category":"page"},{"location":"changelog/#[0.4.9]-March-3,-2023","page":"Changelog","title":"[0.4.9] March 3, 2023","text":"","category":"section"},{"location":"changelog/#Added-40","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"introduce a wrapper that allows line searches from LineSearches.jl to be used within Manopt.jl, introduce the manoptjl.org/stable/extensions/ page to explain the 
details.","category":"page"},{"location":"changelog/#[0.4.8]-February-21,-2023","page":"Changelog","title":"[0.4.8] February 21, 2023","text":"","category":"section"},{"location":"changelog/#Added-41","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"a status_summary that displays the main parameters within several structures of Manopt, most prominently a solver state","category":"page"},{"location":"changelog/#Changed-30","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"Improved storage performance by introducing separate named tuples for points and vectors\nchanged the show methods of AbstractManoptSolverStates to display their status_summary\nMove tutorials to be rendered with Quarto into the documentation.","category":"page"},{"location":"changelog/#[0.4.7]-February-14,-2023","page":"Changelog","title":"[0.4.7] February 14, 2023","text":"","category":"section"},{"location":"changelog/#Changed-31","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"Bump [compat] entry of ManifoldDiff to also include 0.3","category":"page"},{"location":"changelog/#[0.4.6]-February-3,-2023","page":"Changelog","title":"[0.4.6] February 3, 2023","text":"","category":"section"},{"location":"changelog/#Fixed-24","page":"Changelog","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"Fixed a few stopping criteria that indicated to stop even before the algorithm started.","category":"page"},{"location":"changelog/#[0.4.5]-January-24,-2023","page":"Changelog","title":"[0.4.5] January 24, 
2023","text":"","category":"section"},{"location":"changelog/#Changed-32","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"the new default functions that include p are used where possible\na first step towards faster storage handling","category":"page"},{"location":"changelog/#[0.4.4]-January-20,-2023","page":"Changelog","title":"[0.4.4] January 20, 2023","text":"","category":"section"},{"location":"changelog/#Added-42","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"Introduce ConjugateGradientBealeRestart to allow CG restarts using Beale's rule","category":"page"},{"location":"changelog/#Fixed-25","page":"Changelog","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"fix a typo in HestenesStiefelCoefficient","category":"page"},{"location":"changelog/#[0.4.3]-January-17,-2023","page":"Changelog","title":"[0.4.3] January 17, 2023","text":"","category":"section"},{"location":"changelog/#Fixed-26","page":"Changelog","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"the CG coefficient β can now be complex\nfix a bug in grad_distance","category":"page"},{"location":"changelog/#[0.4.2]-January-16,-2023","page":"Changelog","title":"[0.4.2] January 16, 2023","text":"","category":"section"},{"location":"changelog/#Changed-33","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"the usage of inner in line search methods, such that they work well with complex manifolds as well","category":"page"},{"location":"changelog/#[0.4.1]-January-15,-2023","page":"Changelog","title":"[0.4.1] January 15, 
2023","text":"","category":"section"},{"location":"changelog/#Fixed-27","page":"Changelog","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"a max_stepsize per manifold to avoid leaving the injectivity radius, which it also defaults to","category":"page"},{"location":"changelog/#[0.4.0]-January-10,-2023","page":"Changelog","title":"[0.4.0] January 10, 2023","text":"","category":"section"},{"location":"changelog/#Added-43","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"Dependency on ManifoldDiff.jl and a start of moving actual derivatives, differentials, and gradients there.\nAbstractManifoldObjective to store the objective within the AbstractManoptProblem\nIntroduce a CostGrad structure to store a function that computes the cost and gradient within one function.\nstarted a changelog.md to thoroughly keep track of changes","category":"page"},{"location":"changelog/#Changed-34","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"AbstractManoptProblem replaces Problem\nthe problem now contains a\nAbstractManoptSolverState replaces Options\nrandom_point(M) is replaced by rand(M) from ManifoldsBase.jl\nrandom_tangent(M, p) is replaced by rand(M; vector_at=p)","category":"page"},{"location":"solvers/gradient_descent/#Gradient-descent","page":"Gradient Descent","title":"Gradient descent","text":"","category":"section"},{"location":"solvers/gradient_descent/","page":"Gradient Descent","title":"Gradient Descent","text":"CurrentModule = Manopt","category":"page"},{"location":"solvers/gradient_descent/","page":"Gradient Descent","title":"Gradient Descent","text":"gradient_descent\ngradient_descent!","category":"page"},{"location":"solvers/gradient_descent/#Manopt.gradient_descent","page":"Gradient 
Descent","title":"Manopt.gradient_descent","text":"gradient_descent(M, f, grad_f, p=rand(M); kwargs...)\ngradient_descent(M, gradient_objective, p=rand(M); kwargs...)\ngradient_descent!(M, f, grad_f, p; kwargs...)\ngradient_descent!(M, gradient_objective, p; kwargs...)\n\nperform the gradient descent algorithm\n\np_{k+1} = retr_{p_k}( -s_k grad f(p_k) ), k = 0, 1, …\n\nwhere s_k > 0 denotes a step size.\n\nThe algorithm can be performed in-place of p.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f: mathcal M → ℝ implemented as (M, p) -> v\ngrad_f: the (Riemannian) gradient operatornamegradf: \mathcal M → T_{p}\mathcal M of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place\np: a point on the manifold mathcal M\n\nAlternatively to f and grad_f you can provide the corresponding AbstractManifoldGradientObjective gradient_objective directly.\n\nKeyword arguments\n\ndirection=IdentityUpdateRule(): specify to perform a certain processing of the direction, for example Nesterov, MomentumGradient or AverageGradient.\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second. For example grad_f(M, p) allocates, but grad_f!(M, X, p) computes the result in-place of X.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstepsize=default_stepsize(M, GradientDescentState): a functor inheriting from Stepsize to determine a step size\nstopping_criterion=StopAfterIteration(200)|StopWhenGradientNormLess(1e-8): a functor indicating that the stopping criterion is fulfilled\nX=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal M storing the gradient at the current iterate\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nIf you provide the ManifoldGradientObjective directly, the evaluation= keyword is ignored. The decorations are still applied to the objective.\n\nIf you activate tutorial mode (cf. is_tutorial_mode), this solver provides additional debug warnings.\n\nOutput\n\nThe obtained approximate minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/gradient_descent/#Manopt.gradient_descent!","page":"Gradient Descent","title":"Manopt.gradient_descent!","text":"gradient_descent(M, f, grad_f, p=rand(M); kwargs...)\ngradient_descent(M, gradient_objective, p=rand(M); kwargs...)\ngradient_descent!(M, f, grad_f, p; kwargs...)\ngradient_descent!(M, gradient_objective, p; kwargs...)\n\nperform the gradient descent algorithm\n\np_{k+1} = retr_{p_k}( -s_k grad f(p_k) ), k = 0, 1, …\n\nwhere s_k > 0 denotes a step size.\n\nThe algorithm can be performed in-place of p.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f: mathcal M → ℝ implemented as (M, p) -> v\ngrad_f: the (Riemannian) gradient operatornamegradf: \mathcal M → T_{p}\mathcal M of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place\np: a point on the manifold mathcal M\n\nAlternatively to f and grad_f you can provide the corresponding AbstractManifoldGradientObjective gradient_objective directly.\n\nKeyword arguments\n\ndirection=IdentityUpdateRule(): specify to perform a certain processing of the direction, for example Nesterov, MomentumGradient or AverageGradient.\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second. For example grad_f(M, p) allocates, but grad_f!(M, X, p) computes the result in-place of X.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstepsize=default_stepsize(M, GradientDescentState): a functor inheriting from Stepsize to determine a step size\nstopping_criterion=StopAfterIteration(200)|StopWhenGradientNormLess(1e-8): a functor indicating that the stopping criterion is fulfilled\nX=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal M storing the gradient at the current iterate\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nIf you provide the ManifoldGradientObjective directly, the evaluation= keyword is ignored. The decorations are still applied to the objective.\n\nIf you activate tutorial mode (cf. is_tutorial_mode), this solver provides additional debug warnings.\n\nOutput\n\nThe obtained approximate minimizer p^*. 
To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/gradient_descent/#State","page":"Gradient Descent","title":"State","text":"","category":"section"},{"location":"solvers/gradient_descent/","page":"Gradient Descent","title":"Gradient Descent","text":"GradientDescentState","category":"page"},{"location":"solvers/gradient_descent/#Manopt.GradientDescentState","page":"Gradient Descent","title":"Manopt.GradientDescentState","text":"GradientDescentState{P,T} <: AbstractGradientSolverState\n\nDescribes the state of a gradient based descent algorithm.\n\nFields\n\np::P: a point on the manifold mathcal M storing the current iterate\nX::T: a tangent vector at the point p on the manifold mathcal M storing the gradient at the current iterate\nstop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled\nstepsize::Stepsize: a functor inheriting from Stepsize to determine a step size\ndirection::DirectionUpdateRule: a processor to handle the obtained gradient and compute a direction to “walk into”.\nretraction_method::AbstractRetractionMethod: a retraction operatornameretr to use, see the section on retractions\n\nConstructor\n\nGradientDescentState(M::AbstractManifold; kwargs...)\n\nInitialize the gradient descent solver state, where\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\n\nKeyword arguments\n\ndirection=IdentityUpdateRule()\np=rand(M): a point on the manifold mathcal M to specify the initial value\nstopping_criterion=StopAfterIteration(100): a functor indicating that the stopping criterion is fulfilled\nstepsize=default_stepsize(M, GradientDescentState; retraction_method=retraction_method): a functor inheriting from Stepsize to determine a step size\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nX=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal M to specify the representation of a tangent vector\n\nSee also\n\ngradient_descent\n\n\n\n\n\n","category":"type"},{"location":"solvers/gradient_descent/#Direction-update-rules","page":"Gradient Descent","title":"Direction update rules","text":"","category":"section"},{"location":"solvers/gradient_descent/","page":"Gradient Descent","title":"Gradient Descent","text":"A field of the options is the direction, a DirectionUpdateRule, which by default, IdentityUpdateRule, just evaluates the gradient but can be enhanced for example to","category":"page"},{"location":"solvers/gradient_descent/","page":"Gradient Descent","title":"Gradient Descent","text":"AverageGradient\nDirectionUpdateRule\nIdentityUpdateRule\nMomentumGradient\nNesterov","category":"page"},{"location":"solvers/gradient_descent/#Manopt.AverageGradient","page":"Gradient Descent","title":"Manopt.AverageGradient","text":"AverageGradient(; kwargs...)\nAverageGradient(M::AbstractManifold; kwargs...)\n\nAdd an average of gradients to a gradient processor. A set of previous directions (from the inner processor) and the last iterate are stored; the average is taken after vector transporting them to the current iterate's tangent space.\n\nInput\n\nM (optional)\n\nKeyword arguments\n\np=rand(M): a point on the manifold mathcal M to specify the initial value\ndirection=IdentityUpdateRule preprocess the actual gradient before adding momentum\ngradients=[zero_vector(M, p) for _ in 1:n] how to initialise the internal storage\nn=10 number of gradient evaluations to take the mean over\nX=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal M\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\ninfo: Info\nThis function generates a ManifoldDefaultsFactory for AverageGradientRule. 
For default values that depend on the manifold, this factory postpones the construction until the manifold (from, for example, a corresponding AbstractManoptSolverState) is available.\n\n\n\n\n\n","category":"function"},{"location":"solvers/gradient_descent/#Manopt.DirectionUpdateRule","page":"Gradient Descent","title":"Manopt.DirectionUpdateRule","text":"DirectionUpdateRule\n\nA general functor that handles direction update rules. Its fields are usually only a StoreStateAction, by default initialized to the fields required for the specific coefficient, but can also be replaced by a (common, global) individual one that provides these values.\n\n\n\n\n\n","category":"type"},{"location":"solvers/gradient_descent/#Manopt.IdentityUpdateRule","page":"Gradient Descent","title":"Manopt.IdentityUpdateRule","text":"IdentityUpdateRule <: DirectionUpdateRule\n\nThe default gradient direction update is the identity; usually it just evaluates the gradient.\n\nYou can also use Gradient() to create the corresponding factory, though this only delays this parameter-free instantiation to later.\n\n\n\n\n\n","category":"type"},{"location":"solvers/gradient_descent/#Manopt.MomentumGradient","page":"Gradient Descent","title":"Manopt.MomentumGradient","text":"MomentumGradient()\n\nAppend a momentum to a gradient processor, where the last direction and last iterate are stored and the new direction is composed as η_i = m*η_i-1 - s*d_i, where s*d_i is the current (inner) direction and η_i-1 is the vector transported last direction multiplied by the momentum m.\n\nInput\n\nM (optional)\n\nKeyword arguments\n\np=rand(M): a point on the manifold mathcal M\ndirection=IdentityUpdateRule: preprocess the actual gradient before adding momentum\nX=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal M\nmomentum=0.2: amount of momentum to use\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\ninfo: 
Info\nThis function generates a ManifoldDefaultsFactory for MomentumGradientRule. For default values that depend on the manifold, this factory postpones the construction until the manifold (from, for example, a corresponding AbstractManoptSolverState) is available.\n\n\n\n\n\n","category":"function"},{"location":"solvers/gradient_descent/#Manopt.Nesterov","page":"Gradient Descent","title":"Manopt.Nesterov","text":"Nesterov(; kwargs...)\nNesterov(M::AbstractManifold; kwargs...)\n\nAssume f is L-Lipschitz and μ-strongly convex. Given\n\na step size h_k ≤ 1/L (from the GradientDescentState)\na shrinkage parameter β_k\nand a current iterate p_k\nas well as the interim values γ_k and v_k from the previous iterate.\n\nThis computes a Nesterov-type update using the following steps, see [ZS18]\n\nCompute the positive root α_k ∈ (0,1) of α_k^2 = h_k((1-α_k)γ_k + α_k μ).\nSet γ̄_k+1 = (1-α_k)γ_k + α_k μ\ny_k = retr_p_k( (α_k γ_k)/(γ_k + α_k μ) retr^-1_p_k(v_k) )\nx_k+1 = retr_y_k(-h_k grad f(y_k))\nv_k+1 = retr_y_k( ((1-α_k)γ_k)/γ̄_k+1 retr^-1_y_k(v_k) - (α_k/γ̄_k+1) grad f(y_k) )\nγ_k+1 = 1/(1+β_k) γ̄_k+1\n\nThen the direction from p_k to p_k+1, that is d = retr^-1_p_k(p_k+1), is returned.\n\nInput\n\nM (optional)\n\nKeyword arguments\n\np=rand(M): a point on the manifold mathcal M to specify the initial value\nγ=0.001\nμ=0.9\nshrinkage = k -> 0.8\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\n\ninfo: Info\nThis function generates a ManifoldDefaultsFactory for NesterovRule. 
For default values that depend on the manifold, this factory postpones the construction until the manifold (from, for example, a corresponding AbstractManoptSolverState) is available.\n\n\n\n\n\n","category":"function"},{"location":"solvers/gradient_descent/","page":"Gradient Descent","title":"Gradient Descent","text":"which internally use the ManifoldDefaultsFactory and produce the internal elements","category":"page"},{"location":"solvers/gradient_descent/","page":"Gradient Descent","title":"Gradient Descent","text":"Manopt.AverageGradientRule\nManopt.ConjugateDescentCoefficientRule\nManopt.MomentumGradientRule\nManopt.NesterovRule","category":"page"},{"location":"solvers/gradient_descent/#Manopt.AverageGradientRule","page":"Gradient Descent","title":"Manopt.AverageGradientRule","text":"AverageGradientRule <: DirectionUpdateRule\n\nAdd an average of gradients to a gradient processor. A set of previous directions (from the inner processor) and the last iterate are stored. The average is taken after vector transporting them to the current iterate's tangent space.\n\nFields\n\ngradients: the last n gradient/direction updates\nlast_iterate: last iterate (needed to transport the gradients)\ndirection: internal DirectionUpdateRule to determine directions to apply the averaging to\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\nConstructors\n\nAverageGradientRule(\n    M::AbstractManifold;\n    p::P=rand(M),\n    n::Int=10,\n    direction::Union{<:DirectionUpdateRule,ManifoldDefaultsFactory}=IdentityUpdateRule(),\n    gradients = fill(zero_vector(M, p), n),\n    last_iterate = deepcopy(p),\n    vector_transport_method = default_vector_transport_method(M, typeof(p))\n)\n\nAdd average to a gradient problem, where\n\nn: determines the size of averaging\ndirection: is the internal DirectionUpdateRule to determine the gradients to store\ngradients: can be pre-filled with some 
history\nlast_iterate: stores the last iterate\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\n\n\n\n\n","category":"type"},{"location":"solvers/gradient_descent/#Manopt.ConjugateDescentCoefficientRule","page":"Gradient Descent","title":"Manopt.ConjugateDescentCoefficientRule","text":"ConjugateDescentCoefficientRule <: DirectionUpdateRule\n\nA functor (problem, state, k) -> β_k to compute the conjugate gradient update coefficient adapted to manifolds\n\nSee also conjugate_gradient_descent\n\nConstructor\n\nConjugateDescentCoefficientRule()\n\nConstruct the conjugate descent coefficient update rule; a new storage is created by default.\n\nSee also\n\nConjugateDescentCoefficient, conjugate_gradient_descent\n\n\n\n\n\n","category":"type"},{"location":"solvers/gradient_descent/#Manopt.MomentumGradientRule","page":"Gradient Descent","title":"Manopt.MomentumGradientRule","text":"MomentumGradientRule <: DirectionUpdateRule\n\nStore the necessary information to compute the MomentumGradient direction update.\n\nFields\n\np_old::P: a point on the manifold mathcal M\nmomentum::Real: factor for the momentum\ndirection: internal DirectionUpdateRule to determine directions to add the momentum to.\nvector_transport_method::AbstractVectorTransportMethod: a vector transport mathcal T_ to use, see the section on vector transports\nX_old::T: a tangent vector at the point p on the manifold mathcal M\n\nConstructors\n\nMomentumGradientRule(M::AbstractManifold; kwargs...)\n\nInitialize a momentum gradient rule with inner direction rule s, where p and X serve as memory for interim values.\n\nKeyword arguments\n\np=rand(M): a point on the manifold mathcal M\ns=IdentityUpdateRule()\nmomentum=0.2\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\nX=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal 
M\n\nSee also\n\nMomentumGradient\n\n\n\n\n\n","category":"type"},{"location":"solvers/gradient_descent/#Manopt.NesterovRule","page":"Gradient Descent","title":"Manopt.NesterovRule","text":"NesterovRule <: DirectionUpdateRule\n\nCompute a Nesterov-inspired direction update rule. See Nesterov for details.\n\nFields\n\nγ::Real, μ::Real: coefficients from the last iterate\nv::P: an interim point to compute the next gradient evaluation point y_k\nshrinkage: a function k -> ... to compute the shrinkage β_k per iterate k.\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\n\nConstructor\n\nNesterovRule(M::AbstractManifold; kwargs...)\n\nKeyword arguments\n\np=rand(M): a point on the manifold mathcal M to specify the initial value\nγ=0.001\nμ=0.9\nshrinkage = k -> 0.8\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\n\nSee also\n\nNesterov\n\n\n\n\n\n","category":"type"},{"location":"solvers/gradient_descent/#Debug-actions","page":"Gradient Descent","title":"Debug actions","text":"","category":"section"},{"location":"solvers/gradient_descent/","page":"Gradient Descent","title":"Gradient Descent","text":"DebugGradient\nDebugGradientNorm\nDebugStepsize","category":"page"},{"location":"solvers/gradient_descent/#Manopt.DebugGradient","page":"Gradient Descent","title":"Manopt.DebugGradient","text":"DebugGradient <: DebugAction\n\ndebug for the gradient evaluated at the current iterate\n\nConstructors\n\nDebugGradient(; long=false, prefix= , format= \"$prefix%s\", io=stdout)\n\ndisplay the short (false) or long (true) default text for the gradient, or set the prefix manually. 
Alternatively the complete format can be set.\n\n\n\n\n\n","category":"type"},{"location":"solvers/gradient_descent/#Manopt.DebugGradientNorm","page":"Gradient Descent","title":"Manopt.DebugGradientNorm","text":"DebugGradientNorm <: DebugAction\n\ndebug for the norm of the gradient evaluated at the current iterate.\n\nConstructors\n\nDebugGradientNorm([long=false,p=print])\n\ndisplay the short (false) or long (true) default text for the gradient norm.\n\nDebugGradientNorm(prefix[, p=print])\n\ndisplay a prefix in front of the gradient norm.\n\n\n\n\n\n","category":"type"},{"location":"solvers/gradient_descent/#Manopt.DebugStepsize","page":"Gradient Descent","title":"Manopt.DebugStepsize","text":"DebugStepsize <: DebugAction\n\ndebug for the current step size.\n\nConstructors\n\nDebugStepsize(;long=false,prefix=\"step size:\", format=\"$prefix%s\", io=stdout)\n\ndisplay a prefix in front of the step size.\n\n\n\n\n\n","category":"type"},{"location":"solvers/gradient_descent/#Record-actions","page":"Gradient Descent","title":"Record actions","text":"","category":"section"},{"location":"solvers/gradient_descent/","page":"Gradient Descent","title":"Gradient Descent","text":"RecordGradient\nRecordGradientNorm\nRecordStepsize","category":"page"},{"location":"solvers/gradient_descent/#Manopt.RecordGradient","page":"Gradient Descent","title":"Manopt.RecordGradient","text":"RecordGradient <: RecordAction\n\nrecord the gradient evaluated at the current iterate\n\nConstructors\n\nRecordGradient(ξ)\n\ninitialize the RecordAction to the corresponding type of the tangent vector.\n\n\n\n\n\n","category":"type"},{"location":"solvers/gradient_descent/#Manopt.RecordGradientNorm","page":"Gradient Descent","title":"Manopt.RecordGradientNorm","text":"RecordGradientNorm <: RecordAction\n\nrecord the norm of the current gradient\n\n\n\n\n\n","category":"type"},{"location":"solvers/gradient_descent/#Manopt.RecordStepsize","page":"Gradient 
Descent","title":"Manopt.RecordStepsize","text":"RecordStepsize <: RecordAction\n\nrecord the step size\n\n\n\n\n\n","category":"type"},{"location":"solvers/gradient_descent/#sec-gradient-descent-technical-details","page":"Gradient Descent","title":"Technical details","text":"","category":"section"},{"location":"solvers/gradient_descent/","page":"Gradient Descent","title":"Gradient Descent","text":"The gradient_descent solver requires the following functions of a manifold to be available","category":"page"},{"location":"solvers/gradient_descent/","page":"Gradient Descent","title":"Gradient Descent","text":"A retract!(M, q, p, X); it is recommended to set the default_retraction_method to a favourite retraction. If this default is set, a retraction_method= does not have to be specified.\nBy default gradient descent uses ArmijoLinesearch which requires max_stepsize(M) to be set and an implementation of inner(M, p, X).\nBy default the stopping criterion uses the norm as well, to stop when the norm of the gradient is small, but if you implemented inner, the norm is provided already.\nBy default the tangent vector storing the gradient is initialized calling zero_vector(M,p).","category":"page"},{"location":"solvers/gradient_descent/#Literature","page":"Gradient Descent","title":"Literature","text":"","category":"section"},{"location":"solvers/gradient_descent/","page":"Gradient Descent","title":"Gradient Descent","text":"D. G. Luenberger. The gradient projection method along geodesics. Management Science 18, 620–631 (1972).\n\n\n\nH. Zhang and S. Sra. 
Towards Riemannian accelerated gradient methods, arXiv Preprint, 1806.02812 (2018).\n\n\n\n","category":"page"},{"location":"solvers/#Available-solvers-in-Manopt.jl","page":"List of Solvers","title":"Available solvers in Manopt.jl","text":"","category":"section"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"CurrentModule = Manopt","category":"page"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"Optimisation problems can be classified with respect to several criteria. The following list of algorithms is grouped with respect to the “information” available about an optimisation problem","category":"page"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"argmin_p∈mathcal M f(p)","category":"page"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"Within each group short notes on advantages of the individual solvers, and required properties the cost f should have, are provided. 
In that list a 🏅 is used to indicate state-of-the-art solvers that usually perform best in their corresponding group, and 🫏 for a maybe not so fast, maybe not so state-of-the-art method that nevertheless gets the job done most reliably.","category":"page"},{"location":"solvers/#Derivative-free","page":"List of Solvers","title":"Derivative free","text":"","category":"section"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"For derivative-free methods only function evaluations of f are used.","category":"page"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"Nelder-Mead a simplex-based variant that uses d+1 points, where d is the dimension of the manifold.\nParticle Swarm 🫏 uses the evolution of a set of points, called swarm, to explore the domain of the cost and find a minimizer.\nCMA-ES uses a stochastic evolutionary strategy to perform minimization robust to local minima of the objective.","category":"page"},{"location":"solvers/#First-order","page":"List of Solvers","title":"First order","text":"","category":"section"},{"location":"solvers/#Gradient","page":"List of Solvers","title":"Gradient","text":"","category":"section"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"Gradient Descent uses the gradient of f to determine a descent direction. 
Here, the direction can also be changed to be Averaged, Momentum-based, or based on Nesterov's rule.\nConjugate Gradient Descent uses information from the previous descent direction to improve the current (gradient-based) one, including several such update rules.\nThe Quasi-Newton Method 🏅 uses gradient evaluations to approximate the Hessian, which is then used in a Newton-like scheme, where both a limited memory and a full Hessian approximation are available with several different update rules.\nSteihaug-Toint Truncated Conjugate-Gradient Method a solver for a constrained problem defined on a tangent space.","category":"page"},{"location":"solvers/#Subgradient","page":"List of Solvers","title":"Subgradient","text":"","category":"section"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"The following methods require the Riemannian subgradient of f to be available. While the subgradient might be set-valued, the function should provide one of the subgradients.","category":"page"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"The Subgradient Method takes the negative subgradient as a step direction and can be combined with a step size.\nThe Convex Bundle Method (CBM) uses a collection of subgradients from the previous iterates and iterate candidates to solve a local approximation to f in every iteration by solving a quadratic problem in the tangent space.\nThe Proximal Bundle Method works similarly to CBM, but solves a proximal map-based problem in every iteration.","category":"page"},{"location":"solvers/#Second-order","page":"List of Solvers","title":"Second order","text":"","category":"section"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"Adaptive Regularisation with Cubics 🏅 locally builds a cubic model to determine the next descent direction.\nThe Riemannian Trust-Regions Solver builds a quadratic model within a trust region to determine the next descent 
direction.","category":"page"},{"location":"solvers/#Splitting-based","page":"List of Solvers","title":"Splitting based","text":"","category":"section"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"For splitting methods, the algorithms are based on splitting the cost into different parts, usually into a sum of two or more summands. This is usually very well tailored for non-smooth objectives.","category":"page"},{"location":"solvers/#Smooth","page":"List of Solvers","title":"Smooth","text":"","category":"section"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"The following methods require that the splitting, for example into several summands, is smooth in the sense that for every summand of the cost, the gradient should still exist everywhere","category":"page"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"Levenberg-Marquardt minimizes the square norm of f: mathcal M → ℝ^d provided the gradients of the component functions, or in other words the Jacobian of f.\nStochastic Gradient Descent is based on a splitting of f into a sum of several components f_i whose gradients are provided. Steps are performed according to gradients of randomly selected components.\nThe Alternating Gradient Descent alternates gradient descent steps on the components of the product manifold. 
All these components should be smooth, since it is required that the gradient exists and is (locally) convex.","category":"page"},{"location":"solvers/#Nonsmooth","page":"List of Solvers","title":"Nonsmooth","text":"","category":"section"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"If the gradient does not exist everywhere, that is if the splitting yields summands that are nonsmooth, usually methods based on proximal maps are used.","category":"page"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"The Chambolle-Pock algorithm uses a splitting f(p) = F(p) + G(Λ(p)), where G is defined on a manifold mathcal N and the proximal map of its Fenchel dual is required. Both these functions can be non-smooth.\nThe Cyclic Proximal Point 🫏 uses proximal maps of the functions from splitting f into summands f_i\nDifference of Convex Algorithm (DCA) uses a splitting of the (non-convex) function f = g - h into a difference of two functions; for each of these it is required to have access to the gradient of g and the subgradient of h to state a sub problem to be solved in every iteration.\nDifference of Convex Proximal Point uses a splitting of the (non-convex) function f = g - h into a difference of two functions; provided the proximal map of g and the subgradient of h, the next iterate is computed. 
Compared to DCA, the corresponding sub problem is here written in a form that yields the proximal map.\nDouglas—Rachford uses a splitting f(p) = F(p) + G(p) and their proximal maps to compute a minimizer of f, which can be non-smooth.\nPrimal-dual Riemannian semismooth Newton Algorithm extends Chambolle-Pock and additionally requires the differentials of the proximal maps.\nThe Proximal Point uses the proximal map of f iteratively.","category":"page"},{"location":"solvers/#Constrained","page":"List of Solvers","title":"Constrained","text":"","category":"section"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"Constrained problems of the form","category":"page"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"argmin_p∈mathcal M f(p)\nsuch that g(p) ≤ 0, h(p) = 0","category":"page"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"For these you can use","category":"page"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"The Augmented Lagrangian Method (ALM), where both g and grad_g as well as h and grad_h are keyword arguments, and one of these pairs is mandatory.\nThe Exact Penalty Method (EPM) uses a penalty term instead of augmentation, but has the same interface as ALM.\nThe Interior Point Newton Method (IPM) rephrases the KKT system of a constrained problem into a Newton iteration that is performed in every iteration.\nFrank-Wolfe algorithm, where besides the gradient of f either a closed-form solution or a (maybe even automatically generated) sub problem solver for argmin_q∈C ⟨grad f(p_k), log_p_k q⟩ is required, where p_k is a fixed point on the manifold (changed in every iteration).","category":"page"},{"location":"solvers/#On-the-tangent-space","page":"List of Solvers","title":"On the tangent space","text":"","category":"section"},{"location":"solvers/","page":"List 
of Solvers","title":"List of Solvers","text":"Conjugate Residual a solver for a linear system mathcal AX + b = 0 on a tangent space.\nSteihaug-Toint Truncated Conjugate-Gradient Method a solver for a constrained problem defined on a tangent space.","category":"page"},{"location":"solvers/#Alphabetical-list-List-of-algorithms","page":"List of Solvers","title":"Alphabetical list of algorithms","text":"","category":"section"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"Solver Function State\nAdaptive Regularisation with Cubics adaptive_regularization_with_cubics AdaptiveRegularizationState\nAugmented Lagrangian Method augmented_Lagrangian_method AugmentedLagrangianMethodState\nChambolle-Pock ChambollePock ChambollePockState\nConjugate Gradient Descent conjugate_gradient_descent ConjugateGradientDescentState\nConjugate Residual conjugate_residual ConjugateResidualState\nConvex Bundle Method convex_bundle_method ConvexBundleMethodState\nCyclic Proximal Point cyclic_proximal_point CyclicProximalPointState\nDifference of Convex Algorithm difference_of_convex_algorithm DifferenceOfConvexState\nDifference of Convex Proximal Point difference_of_convex_proximal_point DifferenceOfConvexProximalState\nDouglas—Rachford DouglasRachford DouglasRachfordState\nExact Penalty Method exact_penalty_method ExactPenaltyMethodState\nFrank-Wolfe algorithm Frank_Wolfe_method FrankWolfeState\nGradient Descent gradient_descent GradientDescentState\nInterior Point Newton interior_point_Newton \nLevenberg-Marquardt LevenbergMarquardt LevenbergMarquardtState\nNelder-Mead NelderMead NelderMeadState\nParticle Swarm particle_swarm ParticleSwarmState\nPrimal-dual Riemannian semismooth Newton Algorithm primal_dual_semismooth_Newton PrimalDualSemismoothNewtonState\nProximal Bundle Method proximal_bundle_method ProximalBundleMethodState\nProximal Point proximal_point ProximalPointState\nQuasi-Newton Method quasi_Newton QuasiNewtonState\nSteihaug-Toint Truncated 
Conjugate-Gradient Method truncated_conjugate_gradient_descent TruncatedConjugateGradientState\nSubgradient Method subgradient_method SubGradientMethodState\nStochastic Gradient Descent stochastic_gradient_descent StochasticGradientDescentState\nRiemannian Trust-Regions trust_regions TrustRegionsState","category":"page"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"Note that the solvers (their AbstractManoptSolverState, to be precise) can also be decorated to enhance your algorithm by general additional properties, see debug output and recording values. This is done using the debug= and record= keywords in the function calls. Similarly, a cache= keyword is available in any of the function calls that wraps the AbstractManoptProblem in a cache for certain parts of the objective.","category":"page"},{"location":"solvers/#Technical-details","page":"List of Solvers","title":"Technical details","text":"","category":"section"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"The main function a solver calls is","category":"page"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"solve!(p::AbstractManoptProblem, s::AbstractManoptSolverState)","category":"page"},{"location":"solvers/#Manopt.solve!-Tuple{AbstractManoptProblem, AbstractManoptSolverState}","page":"List of Solvers","title":"Manopt.solve!","text":"solve!(p::AbstractManoptProblem, s::AbstractManoptSolverState)\n\nrun the solver implemented for the AbstractManoptProblem p and the AbstractManoptSolverState s, employing initialize_solver!, step_solver!, as well as the stop_solver! of the solver.\n\n\n\n\n\n","category":"method"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"which is a framework that you in general should not change or redefine. 
It uses the following methods, which also need to be implemented for your own algorithm, if you want to provide one.","category":"page"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"initialize_solver!\nstep_solver!\nget_solver_result\nget_solver_return\nstop_solver!(p::AbstractManoptProblem, s::AbstractManoptSolverState, Any)","category":"page"},{"location":"solvers/#Manopt.initialize_solver!","page":"List of Solvers","title":"Manopt.initialize_solver!","text":"initialize_solver!(amp::AbstractManoptProblem, ams::AbstractManoptSolverState)\n\nInitialize the solver for the optimization AbstractManoptProblem amp by initializing the necessary values in the AbstractManoptSolverState ams.\n\n\n\n\n\ninitialize_solver!(amp::AbstractManoptProblem, dss::DebugSolverState)\n\nExtend the initialization of the solver by a hook to run the DebugAction that was added to the :Start entry of the debug lists. All others are triggered (with iteration number 0) to trigger possible resets.\n\n\n\n\n\ninitialize_solver!(amp::AbstractManoptProblem, rss::RecordSolverState)\n\nExtend the initialization of the solver by a hook to run records that were added to the :Start entry.\n\n\n\n\n\n","category":"function"},{"location":"solvers/#Manopt.step_solver!","page":"List of Solvers","title":"Manopt.step_solver!","text":"step_solver!(amp::AbstractManoptProblem, ams::AbstractManoptSolverState, k)\n\nDo one iteration step (the kth) for an AbstractManoptProblem amp by modifying the values in the AbstractManoptSolverState ams.\n\n\n\n\n\nstep_solver!(amp::AbstractManoptProblem, dss::DebugSolverState, k)\n\nExtend the kth step of the solver by a hook to run debug prints that were added to the :BeforeIteration and :Iteration entries of the debug lists.\n\n\n\n\n\nstep_solver!(amp::AbstractManoptProblem, rss::RecordSolverState, k)\n\nExtend the kth step of the solver by a hook to run records that were added to the :Iteration 
entry.\n\n\n\n\n\n","category":"function"},{"location":"solvers/#Manopt.get_solver_result","page":"List of Solvers","title":"Manopt.get_solver_result","text":"get_solver_result(ams::AbstractManoptSolverState)\nget_solver_result(tos::Tuple{AbstractManifoldObjective,AbstractManoptSolverState})\nget_solver_result(o::AbstractManifoldObjective, s::AbstractManoptSolverState)\n\nReturn the final result after all iterations, which is stored within the AbstractManoptSolverState ams and was modified during the iterations.\n\nIn case the objective is passed as well, by default the objective is ignored and the solver result for the state is returned.\n\n\n\n\n\n","category":"function"},{"location":"solvers/#Manopt.get_solver_return","page":"List of Solvers","title":"Manopt.get_solver_return","text":"get_solver_return(s::AbstractManoptSolverState)\nget_solver_return(o::AbstractManifoldObjective, s::AbstractManoptSolverState)\n\ndetermine the result value of a call to a solver. By default this returns the same as get_solver_result.\n\nget_solver_return(s::ReturnSolverState)\nget_solver_return(o::AbstractManifoldObjective, s::ReturnSolverState)\n\nreturn the internally stored state of the ReturnSolverState instead of the minimizer. 
This means that when the state is decorated like this, the user still has to call get_solver_result on the internal state separately.\n\nget_solver_return(o::ReturnManifoldObjective, s::AbstractManoptSolverState)\n\nreturn both the objective and the state as a tuple.\n\n\n\n\n\n","category":"function"},{"location":"solvers/#Manopt.stop_solver!-Tuple{AbstractManoptProblem, AbstractManoptSolverState, Any}","page":"List of Solvers","title":"Manopt.stop_solver!","text":"stop_solver!(amp::AbstractManoptProblem, ams::AbstractManoptSolverState, k)\n\nDepending on the current AbstractManoptProblem amp, the current state of the solver stored in AbstractManoptSolverState ams, and the current iteration number k, this function determines whether to stop the solver, which by default means to call the internal StoppingCriterion ams.stop.\n\n\n\n\n\n","category":"method"},{"location":"solvers/#API-for-solvers","page":"List of Solvers","title":"API for solvers","text":"","category":"section"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"This is a short overview of the different types of high-level functions that are usually available for a solver. Assume the solver is called new_solver and requires a cost f and some first-order information df as well as a starting point p on M. Together, f and df form the objective, called obj.","category":"page"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"Then there are basically two different variants to call","category":"page"},{"location":"solvers/#The-easy-to-access-call","page":"List of Solvers","title":"The easy to access call","text":"","category":"section"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"new_solver(M, f, df, p=rand(M); kwargs...)\nnew_solver!(M, f, df, p; kwargs...)","category":"page"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"In the first variant, the start point is optional. 
Keyword arguments include the type of evaluation, decorators like debug= or record= as well as algorithm specific ones. If you provide an immutable point p, or the rand(M) point is immutable, like on the Circle(), this method should turn the point into a mutable one as well.","category":"page"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"The second variant works in place of p, so there p is mandatory.","category":"page"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"This first interface sets up the objective and passes all keywords on to the objective-based call.","category":"page"},{"location":"solvers/#Objective-based-calls-to-solvers","page":"List of Solvers","title":"Objective based calls to solvers","text":"","category":"section"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"new_solver(M, obj, p=rand(M); kwargs...)\nnew_solver!(M, obj, p; kwargs...)","category":"page"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"Here the objective would be created beforehand, for example to compare different solvers on the same objective, and for the first variant the start point is optional. Keyword arguments include decorators like debug= or record= as well as algorithm specific ones.","category":"page"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"This variant would generate the problem and the state and verify the validity of all provided keyword arguments that affect the state. 
Then it would call the iteration process.","category":"page"},{"location":"solvers/#Manual-calls","page":"List of Solvers","title":"Manual calls","text":"","category":"section"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"If you generate the corresponding problem and state as the previous step does, you can also use the third (lowest-level) variant and just call","category":"page"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"solve!(problem, state)","category":"page"},{"location":"solvers/#Closed-form-subsolvers","page":"List of Solvers","title":"Closed-form subsolvers","text":"","category":"section"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"If a subsolver solution is available in closed form, ClosedFormSubSolverState is used to indicate that.","category":"page"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"Manopt.ClosedFormSubSolverState","category":"page"},{"location":"solvers/#Manopt.ClosedFormSubSolverState","page":"List of Solvers","title":"Manopt.ClosedFormSubSolverState","text":"ClosedFormSubSolverState{E<:AbstractEvaluationType} <: AbstractManoptSolverState\n\nSubsolver state indicating that a closed-form solution is available with AbstractEvaluationType E.\n\nConstructor\n\nClosedFormSubSolverState(; evaluation=AllocatingEvaluation())\n\n\n\n\n\n","category":"type"},{"location":"extensions/#Extensions","page":"Extensions","title":"Extensions","text":"","category":"section"},{"location":"extensions/#LineSearches.jl","page":"Extensions","title":"LineSearches.jl","text":"","category":"section"},{"location":"extensions/","page":"Extensions","title":"Extensions","text":"Manopt can be used with line search algorithms implemented in LineSearches.jl. 
This can be illustrated by the following example of optimizing the Rosenbrock function constrained to the unit sphere.","category":"page"},{"location":"extensions/","page":"Extensions","title":"Extensions","text":"using Manopt, Manifolds, LineSearches\n\n# define objective function and its gradient\np = [1.0, 100.0]\nfunction rosenbrock(::AbstractManifold, x)\n val = zero(eltype(x))\n for i in 1:(length(x) - 1)\n val += (p[1] - x[i])^2 + p[2] * (x[i + 1] - x[i]^2)^2\n end\n return val\nend\nfunction rosenbrock_grad!(M::AbstractManifold, storage, x)\n storage .= 0.0\n for i in 1:(length(x) - 1)\n storage[i] += -2.0 * (p[1] - x[i]) - 4.0 * p[2] * (x[i + 1] - x[i]^2) * x[i]\n storage[i + 1] += 2.0 * p[2] * (x[i + 1] - x[i]^2)\n end\n project!(M, storage, x, storage)\n return storage\nend\n# define constraint\nn_dims = 5\nM = Manifolds.Sphere(n_dims)\n# set initial point\nx0 = vcat(zeros(n_dims - 1), 1.0)\n# use LineSearches.jl HagerZhang method with the Manopt.jl quasi_Newton solver\nls_hz = Manopt.LineSearchesStepsize(M, LineSearches.HagerZhang())\nx_opt = quasi_Newton(\n M,\n rosenbrock,\n rosenbrock_grad!,\n x0;\n stepsize=ls_hz,\n evaluation=InplaceEvaluation(),\n stopping_criterion=StopAfterIteration(1000) | StopWhenGradientNormLess(1e-6),\n return_state=true,\n)","category":"page"},{"location":"extensions/","page":"Extensions","title":"Extensions","text":"In general, this defines the following new stepsize","category":"page"},{"location":"extensions/","page":"Extensions","title":"Extensions","text":"Manopt.LineSearchesStepsize","category":"page"},{"location":"extensions/#Manopt.LineSearchesStepsize","page":"Extensions","title":"Manopt.LineSearchesStepsize","text":"LineSearchesStepsize <: Stepsize\n\nWrapper for line searches available in the LineSearches.jl library.\n\nConstructors\n\nLineSearchesStepsize(M::AbstractManifold, linesearch; kwargs...)\nLineSearchesStepsize(\n linesearch;\n retraction_method=ExponentialRetraction(),\n 
vector_transport_method=ParallelTransport(),\n)\n\nWrap linesearch (for example HagerZhang or MoreThuente). The initial step selection from LineSearches.jl is not yet supported and the value 1.0 is used.\n\nKeyword Arguments\n\nretraction_method=default_retraction_method(M, typeof(p)): a retraction to use, see the section on retractions\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport to use, see the section on vector transports\n\n\n\n\n\n","category":"type"},{"location":"extensions/#Manifolds.jl","page":"Extensions","title":"Manifolds.jl","text":"","category":"section"},{"location":"extensions/","page":"Extensions","title":"Extensions","text":"Loading Manifolds.jl introduces the following additional functions","category":"page"},{"location":"extensions/","page":"Extensions","title":"Extensions","text":"Manopt.max_stepsize(::FixedRankMatrices, ::Any)\nManopt.max_stepsize(::Hyperrectangle, ::Any)\nManopt.max_stepsize(::TangentBundle, ::Any)\nmid_point","category":"page"},{"location":"extensions/#Manopt.max_stepsize-Tuple{FixedRankMatrices, Any}","page":"Extensions","title":"Manopt.max_stepsize","text":"max_stepsize(M::FixedRankMatrices, p)\n\nReturn a reasonable guess of the maximum step size on FixedRankMatrices following the choice of the typical distance in Matlab Manopt, the dimension of M. 
See this note\n\n\n\n\n\n","category":"method"},{"location":"extensions/#Manopt.max_stepsize-Tuple{Hyperrectangle, Any}","page":"Extensions","title":"Manopt.max_stepsize","text":"max_stepsize(M::Hyperrectangle, p)\n\nThe default maximum stepsize for the Hyperrectangle manifold with corners is the maximum of the distances from p to each boundary.\n\n\n\n\n\n","category":"method"},{"location":"extensions/#Manopt.max_stepsize-Tuple{FiberBundle{𝔽, ManifoldsBase.TangentSpaceType, M} where {𝔽, M<:AbstractManifold{𝔽}}, Any}","page":"Extensions","title":"Manopt.max_stepsize","text":"max_stepsize(M::TangentBundle, p)\n\nThe tangent bundle has an injectivity radius of either infinity (for flat manifolds) or 0 (for non-flat manifolds). This makes a guess of what a reasonable maximum stepsize on a tangent bundle might be.\n\n\n\n\n\n","category":"method"},{"location":"extensions/#ManifoldsBase.mid_point","page":"Extensions","title":"ManifoldsBase.mid_point","text":"mid_point(M, p, q, x)\nmid_point!(M, y, p, q, x)\n\nCompute the mid point between p and q. If there is more than one mid point of (not necessarily minimizing) geodesics (for example on the sphere), the one nearest to x is returned (in place of y).\n\n\n\n\n\n","category":"function"},{"location":"extensions/","page":"Extensions","title":"Extensions","text":"Internally, Manopt.jl provides the following two additional functions to choose a Euclidean space when needed","category":"page"},{"location":"extensions/","page":"Extensions","title":"Extensions","text":"Manopt.Rn\nManopt.Rn_default","category":"page"},{"location":"extensions/#Manopt.Rn","page":"Extensions","title":"Manopt.Rn","text":"Rn(args; kwargs...)\nRn(s::Symbol=:Manifolds, args; kwargs...)\n\nA small internal helper function to choose a Euclidean space. 
By default, this uses the DefaultManifold unless you load a more advanced Euclidean space like Euclidean from Manifolds.jl\n\n\n\n\n\n","category":"function"},{"location":"extensions/#Manopt.Rn_default","page":"Extensions","title":"Manopt.Rn_default","text":"Rn_default()\n\nSpecify a default value to dispatch Rn on. This default is set to Manifolds, indicating that when this package is loaded, it is the preferred package to ask for a vector space.\n\nThe default within Manopt.jl is to use the DefaultManifold from ManifoldsBase.jl. If you load Manifolds.jl, this switches to using Euclidean.\n\n\n\n\n\n","category":"function"},{"location":"extensions/#JuMP.jl","page":"Extensions","title":"JuMP.jl","text":"","category":"section"},{"location":"extensions/","page":"Extensions","title":"Extensions","text":"Manopt can be used via the JuMP.jl interface. The manifold is provided in the @variable macro. Note that, so far, only variables (points on manifolds) that are arrays are supported; structs do not yet work. The algebraic expression of the objective function is specified in the @objective macro. 
The descent_state_type attribute specifies the solver.","category":"page"},{"location":"extensions/","page":"Extensions","title":"Extensions","text":"using JuMP, Manopt, Manifolds\nA = [1.0 0.0; 0.0 1.0] # target matrix for the objective (any 2×2 matrix works)\nmodel = Model(Manopt.Optimizer)\n# Change the solver with this option, `GradientDescentState` is the default\nset_attribute(model, "descent_state_type", GradientDescentState)\n@variable(model, U[1:2, 1:2] in Stiefel(2, 2), start = 1.0)\n@objective(model, Min, sum((A - U) .^ 2))\noptimize!(model)\nsolution_summary(model)","category":"page"},{"location":"extensions/#Interface-functions","page":"Extensions","title":"Interface functions","text":"","category":"section"},{"location":"extensions/","page":"Extensions","title":"Extensions","text":"Manopt.JuMP_ArrayShape\nManopt.JuMP_VectorizedManifold\nMOI.dimension(::Manopt.JuMP_VectorizedManifold)\nManopt.JuMP_Optimizer\nMOI.empty!(::Manopt.JuMP_Optimizer)\nMOI.supports(::Manopt.JuMP_Optimizer, ::MOI.RawOptimizerAttribute)\nMOI.get(::Manopt.JuMP_Optimizer, ::MOI.RawOptimizerAttribute)\nMOI.set(::Manopt.JuMP_Optimizer, ::MOI.RawOptimizerAttribute, ::Any)\nMOI.supports_incremental_interface(::Manopt.JuMP_Optimizer)\nMOI.copy_to(::Manopt.JuMP_Optimizer, ::MOI.ModelLike)\nMOI.supports_add_constrained_variables(::Manopt.JuMP_Optimizer, ::Type{<:Manopt.JuMP_VectorizedManifold})\nMOI.add_constrained_variables(::Manopt.JuMP_Optimizer, ::Manopt.JuMP_VectorizedManifold)\nMOI.is_valid(model::Manopt.JuMP_Optimizer, ::MOI.VariableIndex)\nMOI.get(model::Manopt.JuMP_Optimizer, ::MOI.NumberOfVariables)\nMOI.supports(::Manopt.JuMP_Optimizer, ::MOI.VariablePrimalStart, ::Type{MOI.VariableIndex})\nMOI.set(::Manopt.JuMP_Optimizer, ::MOI.VariablePrimalStart, ::MOI.VariableIndex, ::Union{Real,Nothing})\nMOI.set(::Manopt.JuMP_Optimizer, ::MOI.ObjectiveSense, ::MOI.OptimizationSense)\nMOI.set(::Manopt.JuMP_Optimizer, ::MOI.ObjectiveFunction{F}, ::F) where {F}\nMOI.supports(::Manopt.JuMP_Optimizer, 
::Union{MOI.ObjectiveSense,MOI.ObjectiveFunction})\nJuMP.build_variable(::Function, ::Any, ::Manopt.AbstractManifold)\nMOI.get(::Manopt.JuMP_Optimizer, ::MOI.ResultCount)\nMOI.get(::Manopt.JuMP_Optimizer, ::MOI.SolverName)\nMOI.get(::Manopt.JuMP_Optimizer, ::MOI.ObjectiveValue)\nMOI.get(::Manopt.JuMP_Optimizer, ::MOI.PrimalStatus)\nMOI.get(::Manopt.JuMP_Optimizer, ::MOI.DualStatus)\nMOI.get(::Manopt.JuMP_Optimizer, ::MOI.TerminationStatus)\nMOI.get(::Manopt.JuMP_Optimizer, ::MOI.SolverVersion)\nMOI.get(::Manopt.JuMP_Optimizer, ::MOI.ObjectiveSense)\nMOI.get(::Manopt.JuMP_Optimizer, ::MOI.VariablePrimal, ::MOI.VariableIndex)\nMOI.get(::Manopt.JuMP_Optimizer, ::MOI.RawStatusString)","category":"page"},{"location":"extensions/#Manopt.JuMP_ArrayShape","page":"Extensions","title":"Manopt.JuMP_ArrayShape","text":"struct ArrayShape{N} <: JuMP.AbstractShape\n\nShape of an Array{T,N} of size size.\n\n\n\n\n\n","category":"type"},{"location":"extensions/#Manopt.JuMP_VectorizedManifold","page":"Extensions","title":"Manopt.JuMP_VectorizedManifold","text":"struct VectorizedManifold{M} <: MOI.AbstractVectorSet\n manifold::M\nend\n\nRepresentation of points of a manifold as a vector in R^n, where n is MOI.dimension(VectorizedManifold(manifold)).\n\n\n\n\n\n","category":"type"},{"location":"extensions/#MathOptInterface.dimension-Tuple{ManoptJuMPExt.VectorizedManifold}","page":"Extensions","title":"MathOptInterface.dimension","text":"MOI.dimension(set::VectorizedManifold)\n\nReturn the representation size of points on the (vectorized in representation) manifold. As the MOI variables are real, this means if the representation_size yields (in product) n, this refers to the vectorized point / tangent vector from (a subset of) ℝ^n.\n\n\n\n\n\n","category":"method"},{"location":"extensions/#Manopt.JuMP_Optimizer","page":"Extensions","title":"Manopt.JuMP_Optimizer","text":"Manopt.JuMP_Optimizer()\n\nCreates a new optimizer object for the MathOptInterface (MOI). 
An alias Manopt.JuMP_Optimizer is defined for convenience.\n\nThe minimization of a function f(X) of an array X[1:n1,1:n2,...] over a manifold M starting at X0 can be modeled as follows:\n\nusing JuMP\nmodel = Model(Manopt.JuMP_Optimizer)\n@variable(model, X[i1=1:n1,i2=1:n2,...] in M, start = X0[i1,i2,...])\n@objective(model, Min, f(X))\n\nThe optimizer assumes that M has an Array shape described by ManifoldsBase.representation_size.\n\n\n\n\n\n","category":"type"},{"location":"extensions/#MathOptInterface.empty!-Tuple{ManoptJuMPExt.Optimizer}","page":"Extensions","title":"MathOptInterface.empty!","text":"MOI.empty!(model::ManoptJuMPExt.Optimizer)\n\nClear all model data from model but keep the options set.\n\n\n\n\n\n","category":"method"},{"location":"extensions/#MathOptInterface.supports-Tuple{ManoptJuMPExt.Optimizer, MathOptInterface.RawOptimizerAttribute}","page":"Extensions","title":"MathOptInterface.supports","text":"MOI.supports(::Optimizer, attr::MOI.RawOptimizerAttribute)\n\nReturn a Bool indicating whether attr.name is a valid option name for Manopt.\n\n\n\n\n\n","category":"method"},{"location":"extensions/#MathOptInterface.get-Tuple{ManoptJuMPExt.Optimizer, MathOptInterface.RawOptimizerAttribute}","page":"Extensions","title":"MathOptInterface.get","text":"MOI.get(model::Optimizer, attr::MOI.RawOptimizerAttribute)\n\nReturn last value set by MOI.set(model, attr, value).\n\n\n\n\n\n","category":"method"},{"location":"extensions/#MathOptInterface.set-Tuple{ManoptJuMPExt.Optimizer, MathOptInterface.RawOptimizerAttribute, Any}","page":"Extensions","title":"MathOptInterface.set","text":"MOI.set(model::Optimizer, attr::MOI.RawOptimizerAttribute, value)\n\nSet the value for the keyword argument attr.name to give for the constructor 
model.options[DESCENT_STATE_TYPE].\n\n\n\n\n\n","category":"method"},{"location":"extensions/#MathOptInterface.supports_incremental_interface-Tuple{ManoptJuMPExt.Optimizer}","page":"Extensions","title":"MathOptInterface.supports_incremental_interface","text":"MOI.supports_incremental_interface(::JuMP_Optimizer)\n\nReturn true indicating that Manopt.JuMP_Optimizer implements MOI.add_constrained_variables and MOI.set for MOI.ObjectiveFunction so it can be used with JuMP.direct_model and does not require a MOI.Utilities.CachingOptimizer. See MOI.supports_incremental_interface.\n\n\n\n\n\n","category":"method"},{"location":"extensions/#MathOptInterface.copy_to-Tuple{ManoptJuMPExt.Optimizer, MathOptInterface.ModelLike}","page":"Extensions","title":"MathOptInterface.copy_to","text":"MOI.copy_to(dest::Optimizer, src::MOI.ModelLike)\n\nBecause supports_incremental_interface(dest) is true, this simply uses MOI.Utilities.default_copy_to and copies the variables with MOI.add_constrained_variables and the objective sense with MOI.set.\n\n\n\n\n\n","category":"method"},{"location":"extensions/#MathOptInterface.supports_add_constrained_variables-Tuple{ManoptJuMPExt.Optimizer, Type{<:ManoptJuMPExt.VectorizedManifold}}","page":"Extensions","title":"MathOptInterface.supports_add_constrained_variables","text":"MOI.supports_add_constrained_variables(::JuMP_Optimizer, ::Type{<:VectorizedManifold})\n\nReturn true indicating that Manopt.JuMP_Optimizer supports optimization on variables constrained to belong to a vectorized manifold Manopt.JuMP_VectorizedManifold.\n\n\n\n\n\n","category":"method"},{"location":"extensions/#MathOptInterface.add_constrained_variables-Tuple{ManoptJuMPExt.Optimizer, ManoptJuMPExt.VectorizedManifold}","page":"Extensions","title":"MathOptInterface.add_constrained_variables","text":"MOI.add_constrained_variables(model::Optimizer, set::VectorizedManifold)\n\nAdd MOI.dimension(set) variables constrained in set and return the list of variable indices that can be 
used to reference them, as well as a constraint index for the constraint enforcing the membership of the variables in the Manopt.JuMP_VectorizedManifold set.\n\n\n\n\n\n","category":"method"},{"location":"extensions/#MathOptInterface.is_valid-Tuple{ManoptJuMPExt.Optimizer, MathOptInterface.VariableIndex}","page":"Extensions","title":"MathOptInterface.is_valid","text":"MOI.is_valid(model::Optimizer, vi::MOI.VariableIndex)\n\nReturn whether vi is a valid variable index.\n\n\n\n\n\n","category":"method"},{"location":"extensions/#MathOptInterface.get-Tuple{ManoptJuMPExt.Optimizer, MathOptInterface.NumberOfVariables}","page":"Extensions","title":"MathOptInterface.get","text":"MOI.get(model::Optimizer, ::MOI.NumberOfVariables)\n\nReturn the number of variables added in the model; this corresponds to the MOI.dimension of the Manopt.JuMP_VectorizedManifold.\n\n\n\n\n\n","category":"method"},{"location":"extensions/#MathOptInterface.supports-Tuple{ManoptJuMPExt.Optimizer, MathOptInterface.VariablePrimalStart, Type{MathOptInterface.VariableIndex}}","page":"Extensions","title":"MathOptInterface.supports","text":"MOI.supports(::Manopt.JuMP_Optimizer, ::MOI.VariablePrimalStart, ::Type{MOI.VariableIndex})\n\nReturn true indicating that Manopt.JuMP_Optimizer supports starting values for the variables.\n\n\n\n\n\n","category":"method"},{"location":"extensions/#MathOptInterface.set-Tuple{ManoptJuMPExt.Optimizer, MathOptInterface.VariablePrimalStart, MathOptInterface.VariableIndex, Union{Nothing, Real}}","page":"Extensions","title":"MathOptInterface.set","text":"function MOI.set(\n model::Optimizer,\n ::MOI.VariablePrimalStart,\n vi::MOI.VariableIndex,\n value::Union{Real,Nothing},\n)\n\nSet the starting value of the variable of index vi to value. Note that if value is nothing, it unsets any previously set starting value, and MOI.optimize! will then fall back to a default start 
unless another starting value is set.\n\n\n\n\n\n","category":"method"},{"location":"extensions/#MathOptInterface.set-Tuple{ManoptJuMPExt.Optimizer, MathOptInterface.ObjectiveSense, MathOptInterface.OptimizationSense}","page":"Extensions","title":"MathOptInterface.set","text":"MOI.set(model::Optimizer, ::MOI.ObjectiveSense, sense::MOI.OptimizationSense)\n\nModify the objective sense to either MOI.MAX_SENSE, MOI.MIN_SENSE or MOI.FEASIBILITY_SENSE.\n\n\n\n\n\n","category":"method"},{"location":"extensions/#MathOptInterface.set-Union{Tuple{F}, Tuple{ManoptJuMPExt.Optimizer, MathOptInterface.ObjectiveFunction{F}, F}} where F","page":"Extensions","title":"MathOptInterface.set","text":"MOI.set(model::Optimizer, ::MOI.ObjectiveFunction{F}, func::F) where {F}\n\nSet the objective function as func for model.\n\n\n\n\n\n","category":"method"},{"location":"extensions/#MathOptInterface.supports-Tuple{ManoptJuMPExt.Optimizer, Union{MathOptInterface.ObjectiveSense, MathOptInterface.ObjectiveFunction}}","page":"Extensions","title":"MathOptInterface.supports","text":"MOI.supports(::Optimizer, ::Union{MOI.ObjectiveSense,MOI.ObjectiveFunction})\n\nReturn true indicating that Optimizer supports setting the objective sense (that is, min, max, or feasibility) and the objective function.\n\n\n\n\n\n","category":"method"},{"location":"extensions/#JuMP.build_variable-Tuple{Function, Any, AbstractManifold}","page":"Extensions","title":"JuMP.build_variable","text":"JuMP.build_variable(::Function, func, m::ManifoldsBase.AbstractManifold)\n\nBuild a JuMP.VariablesConstrainedOnCreation object containing variables and the Manopt.JuMP_VectorizedManifold to which they should belong, as well as the shape that can be used to go from the vectorized MOI representation to the shape of the manifold, that is, Manopt.JuMP_ArrayShape.\n\n\n\n\n\n","category":"method"},{"location":"extensions/#MathOptInterface.get-Tuple{ManoptJuMPExt.Optimizer, 
MathOptInterface.ResultCount}","page":"Extensions","title":"MathOptInterface.get","text":"MOI.get(model::Optimizer, ::MOI.ResultCount)\n\nReturn 0 if optimize! hasn't been called yet and 1 otherwise, indicating that one solution is available.\n\n\n\n\n\n","category":"method"},{"location":"extensions/#MathOptInterface.get-Tuple{ManoptJuMPExt.Optimizer, MathOptInterface.SolverName}","page":"Extensions","title":"MathOptInterface.get","text":"MOI.get(::Optimizer, ::MOI.SolverName)\n\nReturn the name of the Optimizer with the value of the descent_state_type option.\n\n\n\n\n\n","category":"method"},{"location":"extensions/#MathOptInterface.get-Tuple{ManoptJuMPExt.Optimizer, MathOptInterface.ObjectiveValue}","page":"Extensions","title":"MathOptInterface.get","text":"MOI.get(model::Optimizer, attr::MOI.ObjectiveValue)\n\nReturn the value of the objective function evaluated at the solution.\n\n\n\n\n\n","category":"method"},{"location":"extensions/#MathOptInterface.get-Tuple{ManoptJuMPExt.Optimizer, MathOptInterface.PrimalStatus}","page":"Extensions","title":"MathOptInterface.get","text":"MOI.get(model::Optimizer, ::MOI.PrimalStatus)\n\nReturn MOI.NO_SOLUTION if optimize! hasn't been called yet and MOI.FEASIBLE_POINT otherwise, indicating that a solution is available to query with MOI.VariablePrimal.\n\n\n\n\n\n","category":"method"},{"location":"extensions/#MathOptInterface.get-Tuple{ManoptJuMPExt.Optimizer, MathOptInterface.DualStatus}","page":"Extensions","title":"MathOptInterface.get","text":"MOI.get(::Optimizer, ::MOI.DualStatus)\n\nReturn MOI.NO_SOLUTION, indicating that there is no dual solution available.\n\n\n\n\n\n","category":"method"},{"location":"extensions/#MathOptInterface.get-Tuple{ManoptJuMPExt.Optimizer, MathOptInterface.TerminationStatus}","page":"Extensions","title":"MathOptInterface.get","text":"MOI.get(model::Optimizer, ::MOI.TerminationStatus)\n\nReturn MOI.OPTIMIZE_NOT_CALLED if optimize! 
hasn't been called yet and MOI.LOCALLY_SOLVED otherwise, indicating that the solver has solved the problem to local optimality; see the value of MOI.RawStatusString for more details on why the solver stopped.\n\n\n\n\n\n","category":"method"},{"location":"extensions/#MathOptInterface.get-Tuple{ManoptJuMPExt.Optimizer, MathOptInterface.SolverVersion}","page":"Extensions","title":"MathOptInterface.get","text":"MOI.get(::Optimizer, ::MOI.SolverVersion)\n\nReturn the version of the Manopt solver; it corresponds to the version of Manopt.jl.\n\n\n\n\n\n","category":"method"},{"location":"extensions/#MathOptInterface.get-Tuple{ManoptJuMPExt.Optimizer, MathOptInterface.ObjectiveSense}","page":"Extensions","title":"MathOptInterface.get","text":"MOI.get(model::Optimizer, ::MOI.ObjectiveSense)\n\nReturn the objective sense; defaults to MOI.FEASIBILITY_SENSE if no sense has been set.\n\n\n\n\n\n","category":"method"},{"location":"extensions/#MathOptInterface.get-Tuple{ManoptJuMPExt.Optimizer, MathOptInterface.VariablePrimal, MathOptInterface.VariableIndex}","page":"Extensions","title":"MathOptInterface.get","text":"MOI.get(model::Optimizer, attr::MOI.VariablePrimal, vi::MOI.VariableIndex)\n\nReturn the value of the solution for the variable of index vi.\n\n\n\n\n\n","category":"method"},{"location":"extensions/#MathOptInterface.get-Tuple{ManoptJuMPExt.Optimizer, MathOptInterface.RawStatusString}","page":"Extensions","title":"MathOptInterface.get","text":"MOI.get(model::Optimizer, ::MOI.RawStatusString)\n\nReturn a String containing Manopt.get_reason without the ending newline character.\n\n\n\n\n\n","category":"method"},{"location":"tutorials/ImplementOwnManifold/#Optimize-on-your-own-manifold","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"","category":"section"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"Ronny 
Bergmann","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"CurrentModule = Manopt","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"When you have used a few solvers from Manopt.jl for example like in the opening tutorial 🏔️ Get started: optimize! and also familiarized yourself with how to work with manifolds in general at 🚀 Get Started with Manifolds.jl, you might come across the point that you want to implement a manifold yourself and use it within Manopt.jl. A challenge might be figuring out which functions are necessary, since implementing the complete interface of ManifoldsBase.jl might not be necessary.","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"This tutorial aims to help you through these steps to implement the necessary parts of a manifold to get started with the solver you have in mind.","category":"page"},{"location":"tutorials/ImplementOwnManifold/#An-example-problem","page":"Optimize on your own manifold","title":"An example problem","text":"","category":"section"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"We get started by loading the packages we need.","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"using LinearAlgebra, Manifolds, ManifoldsBase, Random\nusing Manopt\nRandom.seed!(42)","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"We also define the same manifold as in the implementing a manifold 
tutorial.","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"\"\"\"\n ScaledSphere <: AbstractManifold{ℝ}\n\nDefine a sphere of fixed radius\n\n# Fields\n\n* `dimension` dimension of the sphere\n* `radius` the radius of the sphere\n\n# Constructor\n\n ScaledSphere(dimension,radius)\n\nInitialize the manifold to a certain `dimension` and `radius`,\nwhich by default is set to `1.0`\n\"\"\"\nstruct ScaledSphere <: AbstractManifold{ℝ}\n dimension::Int\n radius::Float64\nend","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"We would like to compute a mean and/or median similar to 🏔️ Get started: optimize!. For a given set of points q_1, …, q_n we want to compute [Kar77]","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"argmin_{p ∈ M} (1/(2n)) ∑_{i=1}^{n} d_M^2(p, q_i)","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"on the ScaledSphere we just defined. 
We define a few parameters first","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"d = 5 # dimension of the sphere - embedded in R^{d+1}\nr = 2.0 # radius of the sphere\nN = 100 # data set size\n\nM = ScaledSphere(d,r)","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"ScaledSphere(5, 2.0)","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"If we generate a few points","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"# generate 100 points around the north pole\npts = [ [zeros(d)..., M.radius] .+ 0.5.*([rand(d)...,0.5] .- 0.5) for _=1:N]\n# project them onto the r-sphere\npts = [ r/norm(p) .* p for p in pts]","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"Then, before starting with optimization, we need the distance on the manifold, to define the cost function, as well as the logarithmic map to define the gradient. For both, we here use the “lazy” approach of using the Sphere as a fallback. Finally, we have to provide information about how points and tangent vectors are stored on the manifold by implementing the representation_size function, which is often required when allocating memory. 
While we could implement these from scratch, with the Sphere fallback they read:","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"import ManifoldsBase: distance, log, representation_size\nfunction distance(M::ScaledSphere, p, q)\n return M.radius * distance(Sphere(M.dimension), p ./ M.radius, q ./ M.radius)\nend\nfunction log(M::ScaledSphere, p, q)\n return M.radius * log(Sphere(M.dimension), p ./ M.radius, q ./ M.radius)\nend\nrepresentation_size(M::ScaledSphere) = (M.dimension+1,)","category":"page"},{"location":"tutorials/ImplementOwnManifold/#Define-the-cost-and-gradient","page":"Optimize on your own manifold","title":"Define the cost and gradient","text":"","category":"section"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"f(M, q) = sum(distance(M, q, p)^2 for p in pts)\ngrad_f(M,q) = sum( - log(M, q, p) for p in pts)","category":"page"},{"location":"tutorials/ImplementOwnManifold/#Defining-the-necessary-functions-to-run-a-solver","page":"Optimize on your own manifold","title":"Defining the necessary functions to run a solver","text":"","category":"section"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"The documentation usually lists the necessary functions in a section “Technical Details” close to the end of the documentation of a solver, for our case that is The gradient descent’s Technical Details.","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"These list all details, but we can proceed step by step here if we are a bit careful.","category":"page"},{"location":"tutorials/ImplementOwnManifold/#A-retraction","page":"Optimize on your own manifold","title":"A 
retraction","text":"","category":"section"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"We first implement a retraction. Informally, given a current point and a direction to “walk into”, we need a function that performs that walk. Since we take an easy one that just projects onto the sphere, we use the ProjectionRetraction type. To be precise, we have to implement the in-place variant retract_project!","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"import ManifoldsBase: retract_project!\nfunction retract_project!(M::ScaledSphere, q, p, X, t::Number)\n q .= p .+ t .* X\n q .*= M.radius / norm(q)\n return q\nend","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"retract_project! (generic function with 19 methods)","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"The other two technical remarks refer to the step size and the stopping criterion, so if we set these to something simpler, we should already be able to do a first run.","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"We have to specify","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"that we want to use the new retraction,\na simple step size and stopping criterion","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"We start with a certain point of 
cost","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"p0 = [zeros(d)...,1.0]\nf(M,p0)","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"444.60374551157634","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"Then we can run our first solver, where we have to overwrite a few defaults, which would use functions we do not (yet) have. Let’s discuss these in the next steps.","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"q1 = gradient_descent(M, f, grad_f, p0;\n retraction_method = ProjectionRetraction(), # state that we use the retraction from above\n stepsize = DecreasingLength(M; length=1.0), # A simple step size\n stopping_criterion = StopAfterIteration(10), # A simple stopping criterion\n X = zeros(d+1), # how we define/represent tangent vectors\n)\nf(M,q1)","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"162.4000287847332","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"We at least see that the function value decreased.","category":"page"},{"location":"tutorials/ImplementOwnManifold/#Norm-and-maximal-step-size","page":"Optimize on your own manifold","title":"Norm and maximal step size","text":"","category":"section"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"To use more advanced stopping criteria and step sizes we first need an inner(M, p, X, Y). 
We also need a max_stepsize(M) to avoid having too large steps on positively curved manifolds like our scaled sphere in this example.","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"import ManifoldsBase: inner\nimport Manopt: max_stepsize\ninner(M::ScaledSphere, p, X,Y) = dot(X,Y) # inherited from the embedding\n # set the maximal allowed stepsize to the injectivity radius.\nManopt.max_stepsize(M::ScaledSphere) = M.radius*π","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"Then we can use the default step size (ArmijoLinesearch) and the default stopping criterion, which checks for a small gradient norm.","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"q2 = gradient_descent(M, f, grad_f, p0;\n retraction_method = ProjectionRetraction(), # as before\n X = zeros(d+1), # as before\n)\nf(M, q2)","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"9.772830131357034","category":"page"},{"location":"tutorials/ImplementOwnManifold/#Making-life-easier:-default-retraction-and-zero-vector","page":"Optimize on your own manifold","title":"Making life easier: default retraction and zero vector","text":"","category":"section"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"To initialize tangent vector memory, the function zero_vector(M,p) is called. 
Similarly, the most-used retraction is returned by default_retraction_method.","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"We can use both here to make subsequent calls to the solver less verbose. We define","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"import ManifoldsBase: zero_vector, default_retraction_method\nzero_vector(M::ScaledSphere, p) = zeros(M.dimension+1)\ndefault_retraction_method(M::ScaledSphere) = ProjectionRetraction()","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"default_retraction_method (generic function with 19 methods)","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"and now we can even just call","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"q3 = gradient_descent(M, f, grad_f, p0)\nf(M, q3)","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"9.772830131357034","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"But we now, for example, also automatically get the possibility to obtain debug information like","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"gradient_descent(M, f, grad_f, p0; debug = [:Iteration, :Cost, :Stepsize, 25, :GradientNorm, :Stop, 
\"\\n\"]);","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"Initial f(x): 444.603746\n# 25 f(x): 9.772833s:0.018299583806109226|grad f(p)|:0.020516914880881486\n# 50 f(x): 9.772830s:0.018299583806109226|grad f(p)|:0.00013449321419330018\nThe algorithm reached approximately critical point after 72 iterations; the gradient norm (9.20733514568335e-9) is less than 1.0e-8.","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"see How to Print Debug Output for more details.","category":"page"},{"location":"tutorials/ImplementOwnManifold/#Technical-details","page":"Optimize on your own manifold","title":"Technical details","text":"","category":"section"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"This tutorial is cached. It was last run on the following package versions.","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"using Pkg\nPkg.status()","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"Status `~/work/Manopt.jl/Manopt.jl/tutorials/Project.toml`\n [6e4b80f9] BenchmarkTools v1.5.0\n⌅ [5ae59095] Colors v0.12.11\n [31c24e10] Distributions v0.25.113\n [26cc04aa] FiniteDifferences v0.12.32\n [7073ff75] IJulia v1.26.0\n [8ac3fa9e] LRUCache v1.6.1\n [af67fdf4] ManifoldDiff v0.3.13\n [1cead3c2] Manifolds v0.10.7\n [3362f125] ManifoldsBase v0.15.22\n [0fc0a36d] Manopt v0.5.3 `..`\n [91a5bcdd] Plots v1.40.9\n [731186ca] RecursiveArrayTools v3.27.4\nInfo Packages marked with ⌅ have new versions available but compatibility constraints restrict them from upgrading. 
To see why use `status --outdated`","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"using Dates\nnow()","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"2024-11-21T20:38:39.906","category":"page"},{"location":"tutorials/ImplementOwnManifold/#Literature","page":"Optimize on your own manifold","title":"Literature","text":"","category":"section"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"H. Karcher. Riemannian center of mass and mollifier smoothing. Communications on Pure and Applied Mathematics 30, 509–541 (1977).\n\n\n\n","category":"page"},{"location":"solvers/subgradient/#sec-subgradient-method","page":"Subgradient method","title":"Subgradient method","text":"","category":"section"},{"location":"solvers/subgradient/","page":"Subgradient method","title":"Subgradient method","text":"subgradient_method\nsubgradient_method!","category":"page"},{"location":"solvers/subgradient/#Manopt.subgradient_method","page":"Subgradient method","title":"Manopt.subgradient_method","text":"subgradient_method(M, f, ∂f, p=rand(M); kwargs...)\nsubgradient_method(M, sgo, p=rand(M); kwargs...)\nsubgradient_method!(M, f, ∂f, p; kwargs...)\nsubgradient_method!(M, sgo, p; kwargs...)\n\nperform a subgradient method p^(k+1) = operatornameretrbigl(p^(k) s^(k)f(p^(k))bigr), where operatornameretr is a retraction, s^(k) is a step size.\n\nThough the subgradient might be set valued, the argument ∂f should always return one element from the subgradient, but not necessarily deterministic. 
For more details see [FO98].\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\n∂f: the (sub)gradient f mathcal M Tmathcal M of f\np: a point on the manifold mathcal M\n\nalternatively to f and ∂f a ManifoldSubgradientObjective sgo can be provided.\n\nKeyword arguments\n\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstepsize=default_stepsize(M, SubGradientMethodState): a functor inheriting from Stepsize to determine a step size\nstopping_criterion=StopAfterIteration(5000): a functor indicating that the stopping criterion is fulfilled\nX=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal Mto specify the representation of a tangent vector\n\nand the ones that are passed to decorate_state! 
for decorators.\n\nOutput\n\nthe obtained (approximate) minimizer p^*, see get_solver_return for details\n\n\n\n\n\n","category":"function"},{"location":"solvers/subgradient/#Manopt.subgradient_method!","page":"Subgradient method","title":"Manopt.subgradient_method!","text":"subgradient_method(M, f, ∂f, p=rand(M); kwargs...)\nsubgradient_method(M, sgo, p=rand(M); kwargs...)\nsubgradient_method!(M, f, ∂f, p; kwargs...)\nsubgradient_method!(M, sgo, p; kwargs...)\n\nperform a subgradient method p^(k+1) = operatornameretrbigl(p^(k) s^(k)f(p^(k))bigr), where operatornameretr is a retraction, s^(k) is a step size.\n\nThough the subgradient might be set valued, the argument ∂f should always return one element from the subgradient, but not necessarily deterministic. For more details see [FO98].\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\n∂f: the (sub)gradient f mathcal M Tmathcal M of f\np: a point on the manifold mathcal M\n\nalternatively to f and ∂f a ManifoldSubgradientObjective sgo can be provided.\n\nKeyword arguments\n\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). 
Since usually the first argument is the manifold, the modified argument is the second.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstepsize=default_stepsize(M, SubGradientMethodState): a functor inheriting from Stepsize to determine a step size\nstopping_criterion=StopAfterIteration(5000): a functor indicating that the stopping criterion is fulfilled\nX=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal Mto specify the representation of a tangent vector\n\nand the ones that are passed to decorate_state! for decorators.\n\nOutput\n\nthe obtained (approximate) minimizer p^*, see get_solver_return for details\n\n\n\n\n\n","category":"function"},{"location":"solvers/subgradient/#State","page":"Subgradient method","title":"State","text":"","category":"section"},{"location":"solvers/subgradient/","page":"Subgradient method","title":"Subgradient method","text":"SubGradientMethodState","category":"page"},{"location":"solvers/subgradient/#Manopt.SubGradientMethodState","page":"Subgradient method","title":"Manopt.SubGradientMethodState","text":"SubGradientMethodState <: AbstractManoptSolverState\n\nstores option values for a subgradient_method solver\n\nFields\n\np::P: a point on the manifold mathcal Mstoring the current iterate\np_star: optimal value\nretraction_method::AbstractRetractionMethod: a retraction operatornameretr to use, see the section on retractions\nstepsize::Stepsize: a functor inheriting from Stepsize to determine a step size\nstop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled\nX: the current element from the possible subgradients at p that was last evaluated.\n\nConstructor\n\nSubGradientMethodState(M::AbstractManifold; kwargs...)\n\nInitialise the Subgradient method state\n\nKeyword arguments\n\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on 
retractions\np=rand(M): a point on the manifold mathcal Mto specify the initial value\nstepsize=default_stepsize(M, SubGradientMethodState): a functor inheriting from Stepsize to determine a step size\nstopping_criterion=StopAfterIteration(5000): a functor indicating that the stopping criterion is fulfilled\nX=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal Mto specify the representation of a tangent vector\n\n\n\n\n\n","category":"type"},{"location":"solvers/subgradient/","page":"Subgradient method","title":"Subgradient method","text":"For DebugActions and RecordActions to record (sub)gradient, its norm and the step sizes, see the gradient descent actions.","category":"page"},{"location":"solvers/subgradient/#sec-sgm-technical-details","page":"Subgradient method","title":"Technical details","text":"","category":"section"},{"location":"solvers/subgradient/","page":"Subgradient method","title":"Subgradient method","text":"The subgradient_method solver requires the following functions of a manifold to be available","category":"page"},{"location":"solvers/subgradient/","page":"Subgradient method","title":"Subgradient method","text":"A retract!(M, q, p, X); it is recommended to set the default_retraction_method to a favourite retraction. If this default is set, a retraction_method= does not have to be specified.","category":"page"},{"location":"solvers/subgradient/#Literature","page":"Subgradient method","title":"Literature","text":"","category":"section"},{"location":"solvers/subgradient/","page":"Subgradient method","title":"Subgradient method","text":"O. Ferreira and P. R. Oliveira. Subgradient algorithm on Riemannian manifolds. 
Journal of Optimization Theory and Applications 97, 93–104 (1998).\n\n\n\n","category":"page"},{"location":"solvers/augmented_Lagrangian_method/#Augmented-Lagrangian-method","page":"Augmented Lagrangian Method","title":"Augmented Lagrangian method","text":"","category":"section"},{"location":"solvers/augmented_Lagrangian_method/","page":"Augmented Lagrangian Method","title":"Augmented Lagrangian Method","text":"CurrentModule = Manopt","category":"page"},{"location":"solvers/augmented_Lagrangian_method/","page":"Augmented Lagrangian Method","title":"Augmented Lagrangian Method","text":" augmented_Lagrangian_method\n augmented_Lagrangian_method!","category":"page"},{"location":"solvers/augmented_Lagrangian_method/#Manopt.augmented_Lagrangian_method","page":"Augmented Lagrangian Method","title":"Manopt.augmented_Lagrangian_method","text":"augmented_Lagrangian_method(M, f, grad_f, p=rand(M); kwargs...)\naugmented_Lagrangian_method(M, cmo::ConstrainedManifoldObjective, p=rand(M); kwargs...)\naugmented_Lagrangian_method!(M, f, grad_f, p; kwargs...)\naugmented_Lagrangian_method!(M, cmo::ConstrainedManifoldObjective, p; kwargs...)\n\nperform the augmented Lagrangian method (ALM) [LB19]. This method can work in-place of p.\n\nThe aim of the ALM is to find the solution of the constrained optimisation task\n\nbeginaligned\nmin_p mathcal M f(p)\ntextsubject toquadg_i(p) 0 quad text for i= 1 m\nquad h_j(p)=0 quad text for j=1n\nendaligned\n\nwhere M is a Riemannian manifold, and f, g_i_i=1^n and h_j_j=1^m are twice continuously differentiable functions from M to ℝ. 
In every step k of the algorithm, the AugmentedLagrangianCost mathcal L_ρ^(k)(p μ^(k) λ^(k)) is minimized on \\mathcal M, where μ^(k) ℝ^n and λ^(k) ℝ^m are the current iterates of the Lagrange multipliers and ρ^(k) is the current penalty parameter.\n\nThe Lagrange multipliers are then updated by\n\nλ_j^(k+1) =operatornameclip_λ_minλ_max (λ_j^(k) + ρ^(k) h_j(p^(k+1))) textfor all j=1p\n\nand\n\nμ_i^(k+1) =operatornameclip_0μ_max (μ_i^(k) + ρ^(k) g_i(p^(k+1))) text for all i=1m\n\nwhere λ_textmin λ_textmax and μ_textmax are the multiplier boundaries.\n\nNext, the accuracy tolerance ϵ is updated as\n\nϵ^(k)=maxϵ_min θ_ϵ ϵ^(k-1)\n\nwhere ϵ_textmin is the lowest value ϵ is allowed to become and θ_ϵ (01) is constant scaling factor.\n\nLast, the penalty parameter ρ is updated as follows: with\n\nσ^(k)=max_j=1p i=1m h_j(p^(k)) max_i=1mg_i(p^(k)) -fracμ_i^(k-1)ρ^(k-1) \n\nρ is updated as\n\nρ^(k) = begincases\nρ^(k-1)θ_ρ textif σ^(k)leq θ_ρ σ^(k-1) \nρ^(k-1) textelse\nendcases\n\nwhere θ_ρ (01) is a constant scaling factor.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\ngrad_f: the (Riemannian) gradient operatornamegradf: \\mathcal M → T_{p}\\mathcal M of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place\n\nOptional (if not called with the ConstrainedManifoldObjective cmo)\n\ng=nothing: the inequality constraints\nh=nothing: the equality constraints\ngrad_g=nothing: the gradient of the inequality constraints\ngrad_h=nothing: the gradient of the equality constraints\n\nNote that one of the pairs (g, grad_g) or (h, grad_h) has to be provided. 
Otherwise the problem is not constrained and a better solver would be for example quasi_Newton.\n\nKeyword Arguments\n\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\nϵ=1e-3: the accuracy tolerance\nϵ_min=1e-6: the lower bound for the accuracy tolerance\nϵ_exponent=1/100: exponent of the ϵ update factor; also 1/number of iterations until maximal accuracy is needed to end algorithm naturally\nequality_constraints=nothing: the number n of equality constraints.\nIf not provided, a call to the gradient of g is performed to estimate these.\ngradient_range=nothing: specify how both gradients of the constraints are represented\ngradient_equality_range=gradient_range: specify how gradients of the equality constraints are represented, see VectorGradientFunction.\ngradient_inequality_range=gradient_range: specify how gradients of the inequality constraints are represented, see VectorGradientFunction.\ninequality_constraints=nothing: the number m of inequality constraints. 
If not provided, a call to the gradient of g is performed to estimate these.\nλ=ones(size(h(M,x),1)): the Lagrange multiplier with respect to the equality constraints\nλ_max=20.0: an upper bound for the Lagrange multiplier belonging to the equality constraints\nλ_min=- λ_max: a lower bound for the Lagrange multiplier belonging to the equality constraints\nμ=ones(size(g(M,x),1)): the Lagrange multiplier with respect to the inequality constraints\nμ_max=20.0: an upper bound for the Lagrange multiplier belonging to the inequality constraints\nρ=1.0: the penalty parameter\nτ=0.8: factor for the improvement of the evaluation of the penalty parameter\nθ_ρ=0.3: the scaling factor of the penalty parameter\nθ_ϵ=(ϵ_min / ϵ)^(ϵ_exponent): the scaling factor of the exactness\nsub_cost=AugmentedLagrangianCost(cmo, ρ, μ, λ): use the augmented Lagrangian cost, based on the ConstrainedManifoldObjective built from the functions provided. This is used to define the sub_problem= keyword and hence has no effect if you set sub_problem directly.\nsub_grad=AugmentedLagrangianGrad(cmo, ρ, μ, λ): use the augmented Lagrangian gradient, based on the ConstrainedManifoldObjective built from the functions provided. This is used to define the sub_problem= keyword and hence has no effect if you set sub_problem directly.\nsub_kwargs=(;): a named tuple of keyword arguments that are passed to decorate_objective! of the sub solver's objective, the decorate_state! of the sub solver's state, and the sub state constructor itself.\nstopping_criterion=StopAfterIteration(300) | ( StopWhenSmallerOrEqual(:ϵ, ϵ_min) & StopWhenChangeLess(1e-10) ): a functor indicating that the stopping criterion is fulfilled\nsub_problem=DefaultManoptProblem(M, sub_objective): specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state=QuasiNewtonState: a state to specify the sub solver to use. 
For a closed form solution, this indicates the type of function. As the quasi-Newton method, the QuasiNewtonLimitedMemoryDirectionUpdate with InverseBFGS is used.\nsub_stopping_criterion::StoppingCriterion=StopAfterIteration(300) | StopWhenGradientNormLess(ϵ) | StopWhenStepsizeLess(1e-8): the stopping criterion for the sub solver.\n\nFor the ranges of the constraints' gradient, other power manifold tangent space representations, mainly the ArrayPowerRepresentation, can be used if the gradients can be computed more efficiently in that representation.\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/augmented_Lagrangian_method/#Manopt.augmented_Lagrangian_method!","page":"Augmented Lagrangian Method","title":"Manopt.augmented_Lagrangian_method!","text":"augmented_Lagrangian_method(M, f, grad_f, p=rand(M); kwargs...)\naugmented_Lagrangian_method(M, cmo::ConstrainedManifoldObjective, p=rand(M); kwargs...)\naugmented_Lagrangian_method!(M, f, grad_f, p; kwargs...)\naugmented_Lagrangian_method!(M, cmo::ConstrainedManifoldObjective, p; kwargs...)\n\nperform the augmented Lagrangian method (ALM) [LB19]. This method can work in-place of p.\n\nThe aim of the ALM is to find the solution of the constrained optimisation task\n\nbeginaligned\nmin_p mathcal M f(p)\ntextsubject toquadg_i(p) 0 quad text for i= 1 m\nquad h_j(p)=0 quad text for j=1n\nendaligned\n\nwhere M is a Riemannian manifold, and f, g_i_i=1^n and h_j_j=1^m are twice continuously differentiable functions from M to ℝ. 
In every step k of the algorithm, the AugmentedLagrangianCost mathcal L_ρ^(k)(p μ^(k) λ^(k)) is minimized on \\mathcal M, where μ^(k) ℝ^n and λ^(k) ℝ^m are the current iterates of the Lagrange multipliers and ρ^(k) is the current penalty parameter.\n\nThe Lagrange multipliers are then updated by\n\nλ_j^(k+1) =operatornameclip_λ_minλ_max (λ_j^(k) + ρ^(k) h_j(p^(k+1))) textfor all j=1p\n\nand\n\nμ_i^(k+1) =operatornameclip_0μ_max (μ_i^(k) + ρ^(k) g_i(p^(k+1))) text for all i=1m\n\nwhere λ_textmin λ_textmax and μ_textmax are the multiplier boundaries.\n\nNext, the accuracy tolerance ϵ is updated as\n\nϵ^(k)=maxϵ_min θ_ϵ ϵ^(k-1)\n\nwhere ϵ_textmin is the lowest value ϵ is allowed to become and θ_ϵ (01) is constant scaling factor.\n\nLast, the penalty parameter ρ is updated as follows: with\n\nσ^(k)=max_j=1p i=1m h_j(p^(k)) max_i=1mg_i(p^(k)) -fracμ_i^(k-1)ρ^(k-1) \n\nρ is updated as\n\nρ^(k) = begincases\nρ^(k-1)θ_ρ textif σ^(k)leq θ_ρ σ^(k-1) \nρ^(k-1) textelse\nendcases\n\nwhere θ_ρ (01) is a constant scaling factor.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\ngrad_f: the (Riemannian) gradient operatornamegradf: \\mathcal M → T_{p}\\mathcal M of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place\n\nOptional (if not called with the ConstrainedManifoldObjective cmo)\n\ng=nothing: the inequality constraints\nh=nothing: the equality constraints\ngrad_g=nothing: the gradient of the inequality constraints\ngrad_h=nothing: the gradient of the equality constraints\n\nNote that one of the pairs (g, grad_g) or (h, grad_h) has to be provided. 
Otherwise the problem is not constrained and a better solver would be for example quasi_Newton.\n\nKeyword Arguments\n\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\nϵ=1e-3: the accuracy tolerance\nϵ_min=1e-6: the lower bound for the accuracy tolerance\nϵ_exponent=1/100: exponent of the ϵ update factor; also 1/number of iterations until maximal accuracy is needed to end algorithm naturally\nequality_constraints=nothing: the number n of equality constraints.\nIf not provided, a call to the gradient of g is performed to estimate these.\ngradient_range=nothing: specify how both gradients of the constraints are represented\ngradient_equality_range=gradient_range: specify how gradients of the equality constraints are represented, see VectorGradientFunction.\ngradient_inequality_range=gradient_range: specify how gradients of the inequality constraints are represented, see VectorGradientFunction.\ninequality_constraints=nothing: the number m of inequality constraints. 
If not provided, a call to the gradient of g is performed to estimate these.\nλ=ones(size(h(M,x),1)): the Lagrange multiplier with respect to the equality constraints\nλ_max=20.0: an upper bound for the Lagrange multiplier belonging to the equality constraints\nλ_min=- λ_max: a lower bound for the Lagrange multiplier belonging to the equality constraints\nμ=ones(size(g(M,x),1)): the Lagrange multiplier with respect to the inequality constraints\nμ_max=20.0: an upper bound for the Lagrange multiplier belonging to the inequality constraints\nρ=1.0: the penalty parameter\nτ=0.8: factor for the improvement of the evaluation of the penalty parameter\nθ_ρ=0.3: the scaling factor of the penalty parameter\nθ_ϵ=(ϵ_min / ϵ)^(ϵ_exponent): the scaling factor of the exactness\nsub_cost=AugmentedLagrangianCost(cmo, ρ, μ, λ): use the augmented Lagrangian cost, based on the ConstrainedManifoldObjective built from the functions provided. This is used to define the sub_problem= keyword and hence has no effect if you set sub_problem directly.\nsub_grad=AugmentedLagrangianGrad(cmo, ρ, μ, λ): use the augmented Lagrangian gradient, based on the ConstrainedManifoldObjective built from the functions provided. This is used to define the sub_problem= keyword and hence has no effect if you set sub_problem directly.\nsub_kwargs=(;): a named tuple of keyword arguments that are passed to decorate_objective! of the sub solver's objective, the decorate_state! of the sub solver's state, and the sub state constructor itself.\nstopping_criterion=StopAfterIteration(300) | ( StopWhenSmallerOrEqual(:ϵ, ϵ_min) & StopWhenChangeLess(1e-10) ): a functor indicating that the stopping criterion is fulfilled\nsub_problem=DefaultManoptProblem(M, sub_objective): specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state=QuasiNewtonState: a state to specify the sub solver to use. 
For a closed form solution, this indicates the type of function. As the quasi-Newton method, the QuasiNewtonLimitedMemoryDirectionUpdate with InverseBFGS is used.\nsub_stopping_criterion::StoppingCriterion=StopAfterIteration(300) | StopWhenGradientNormLess(ϵ) | StopWhenStepsizeLess(1e-8): the stopping criterion for the sub solver.\n\nFor the ranges of the constraints' gradient, other power manifold tangent space representations, mainly the ArrayPowerRepresentation, can be used if the gradients can be computed more efficiently in that representation.\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/augmented_Lagrangian_method/#State","page":"Augmented Lagrangian Method","title":"State","text":"","category":"section"},{"location":"solvers/augmented_Lagrangian_method/","page":"Augmented Lagrangian Method","title":"Augmented Lagrangian Method","text":"AugmentedLagrangianMethodState","category":"page"},{"location":"solvers/augmented_Lagrangian_method/#Manopt.AugmentedLagrangianMethodState","page":"Augmented Lagrangian Method","title":"Manopt.AugmentedLagrangianMethodState","text":"AugmentedLagrangianMethodState{P,T} <: AbstractManoptSolverState\n\nDescribes the augmented Lagrangian method, with\n\nFields\n\nA default value is given in brackets if a parameter can be left out in initialization.\n\nϵ: the accuracy tolerance\nϵ_min: the lower bound for the accuracy tolerance\nλ: the Lagrange multiplier with respect to the equality constraints\nλ_max: an upper bound for the Lagrange multiplier belonging to the equality constraints\nλ_min: a lower bound for the Lagrange multiplier belonging to the equality constraints\np::P: a point on the manifold mathcal M storing the current iterate\npenalty: 
evaluation of the current penalty term, initialized to Inf.\nμ: the Lagrange multiplier with respect to the inequality constraints\nμ_max: an upper bound for the Lagrange multiplier belonging to the inequality constraints\nρ: the penalty parameter\nsub_problem::Union{AbstractManoptProblem, F}: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state::Union{AbstractManoptSolverState, F}: a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.\nτ: factor for the improvement of the evaluation of the penalty parameter\nθ_ρ: the scaling factor of the penalty parameter\nθ_ϵ: the scaling factor of the accuracy tolerance\nstop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled\n\nConstructor\n\nAugmentedLagrangianMethodState(M::AbstractManifold, co::ConstrainedManifoldObjective,\n sub_problem, sub_state; kwargs...\n)\n\nconstruct augmented Lagrangian method options, where the manifold M and the ConstrainedManifoldObjective co are used for manifold- or objective specific defaults.\n\nAugmentedLagrangianMethodState(M::AbstractManifold, co::ConstrainedManifoldObjective,\n sub_problem; evaluation=AllocatingEvaluation(), kwargs...\n)\n\nconstruct augmented Lagrangian method options, where the manifold M and the ConstrainedManifoldObjective co are used for manifold- or objective specific defaults, and sub_problem is a closed form solution with evaluation as type of evaluation.\n\nKeyword arguments\n\nThe following keyword arguments are available to initialise the corresponding fields\n\nϵ=1e-3\nϵ_min=1e-6\nλ=ones(n): n is the number of equality constraints in the ConstrainedManifoldObjective co.\nλ_max=20.0\nλ_min=- λ_max\nμ=ones(m): m is the number of inequality constraints in the ConstrainedManifoldObjective co.\nμ_max=20.0\np=rand(M): a point on the manifold mathcal M to specify the initial 
value\nρ=1.0\nτ=0.8\nθ_ρ=0.3\nθ_ϵ=(ϵ_min/ϵ)^(ϵ_exponent)\nstopping_criterion=StopAfterIteration(300) | ( StopWhenSmallerOrEqual(:ϵ, ϵ_min) & StopWhenChangeLess(1e-10) ).\n\nSee also\n\naugmented_Lagrangian_method\n\n\n\n\n\n","category":"type"},{"location":"solvers/augmented_Lagrangian_method/#Helping-functions","page":"Augmented Lagrangian Method","title":"Helping functions","text":"","category":"section"},{"location":"solvers/augmented_Lagrangian_method/","page":"Augmented Lagrangian Method","title":"Augmented Lagrangian Method","text":"AugmentedLagrangianCost\nAugmentedLagrangianGrad","category":"page"},{"location":"solvers/augmented_Lagrangian_method/#Manopt.AugmentedLagrangianCost","page":"Augmented Lagrangian Method","title":"Manopt.AugmentedLagrangianCost","text":"AugmentedLagrangianCost{CO,R,T}\n\nStores the parameters ρ ℝ, μ ℝ^m, λ ℝ^n of the augmented Lagrangian associated to the ConstrainedManifoldObjective co.\n\nThis struct is also a functor (M,p) -> v that can be used as a cost function within a solver, based on the internal ConstrainedManifoldObjective it computes\n\nmathcal L_ρ(p μ λ)\n= f(p) + fracρ2 biggl(\n sum_j=1^n Bigl( h_j(p) + fracλ_jρ Bigr)^2\n +\n sum_i=1^m maxBigl 0 fracμ_iρ + g_i(p) Bigr^2\nBigr)\n\nFields\n\nco::CO, ρ::R, μ::T, λ::T as mentioned in the formula, where R should be the number type used and T the vector type.\n\nConstructor\n\nAugmentedLagrangianCost(co, ρ, μ, λ)\n\n\n\n\n\n","category":"type"},{"location":"solvers/augmented_Lagrangian_method/#Manopt.AugmentedLagrangianGrad","page":"Augmented Lagrangian Method","title":"Manopt.AugmentedLagrangianGrad","text":"AugmentedLagrangianGrad{CO,R,T} <: AbstractConstrainedFunctor{T}\n\nStores the parameters ρ ℝ, μ ℝ^m, λ ℝ^n of the augmented Lagrangian associated to the ConstrainedManifoldObjective co.\n\nThis struct is also a functor in both formats\n\n(M, p) -> X to compute the gradient in allocating fashion.\n(M, X, p) to compute 
the gradient in in-place fashion.\n\nAdditionally, this gradient does accept a positional last argument to specify the range for the internal gradient call of the constrained objective.\n\nIt is based on the internal ConstrainedManifoldObjective and computes the gradient grad L_ρ(p, μ, λ), see also AugmentedLagrangianCost.\n\nFields\n\nco::CO, ρ::R, μ::T, λ::T as mentioned in the formula, where R should be the number type used and T the vector type.\n\nConstructor\n\nAugmentedLagrangianGrad(co, ρ, μ, λ)\n\n\n\n\n\n","category":"type"},{"location":"solvers/augmented_Lagrangian_method/#sec-agd-technical-details","page":"Augmented Lagrangian Method","title":"Technical details","text":"","category":"section"},{"location":"solvers/augmented_Lagrangian_method/","page":"Augmented Lagrangian Method","title":"Augmented Lagrangian Method","text":"The augmented_Lagrangian_method solver requires the following functions of a manifold to be available","category":"page"},{"location":"solvers/augmented_Lagrangian_method/","page":"Augmented Lagrangian Method","title":"Augmented Lagrangian Method","text":"A copyto!(M, q, p) and copy(M,p) for points.\nEverything the subsolver requires, which by default is the quasi_Newton method\nA zero_vector(M,p).","category":"page"},{"location":"solvers/augmented_Lagrangian_method/#Literature","page":"Augmented Lagrangian Method","title":"Literature","text":"","category":"section"},{"location":"solvers/augmented_Lagrangian_method/","page":"Augmented Lagrangian Method","title":"Augmented Lagrangian Method","text":"C. Liu and N. Boumal. Simple algorithms for optimization on Riemannian manifolds with constraints. 
Applied Mathematics & Optimization (2019), arXiv:1901.10000.\n\n\n\n","category":"page"},{"location":"solvers/cma_es/#Covariance-matrix-adaptation-evolutionary-strategy","page":"CMA-ES","title":"Covariance matrix adaptation evolutionary strategy","text":"","category":"section"},{"location":"solvers/cma_es/","page":"CMA-ES","title":"CMA-ES","text":"CurrentModule = Manopt","category":"page"},{"location":"solvers/cma_es/","page":"CMA-ES","title":"CMA-ES","text":"The CMA-ES algorithm has been implemented based on [Han23] with basic Riemannian adaptations, related to the transport of the covariance matrix and its update vectors. Other attempts at adapting CMA-ES to Riemannian optimization include [CFFS10]. The algorithm is suitable for global optimization.","category":"page"},{"location":"solvers/cma_es/","page":"CMA-ES","title":"CMA-ES","text":"Covariance matrix transport between consecutive mean points is handled by the eigenvector_transport! function, which is based on the idea of transport of matrix eigenvectors.","category":"page"},{"location":"solvers/cma_es/","page":"CMA-ES","title":"CMA-ES","text":"cma_es","category":"page"},{"location":"solvers/cma_es/#Manopt.cma_es","page":"CMA-ES","title":"Manopt.cma_es","text":"cma_es(M, f, p_m=rand(M); σ::Real=1.0, kwargs...)\n\nPerform covariance matrix adaptation evolutionary strategy search for global gradient-free randomized optimization. It is suitable for complicated non-convex functions. 
It can be reasonably expected to find the global minimum within 3σ distance from p_m.\n\nThe implementation is based on [Han23] with basic adaptations to the Riemannian setting.\n\nInput\n\nM: a manifold mathcal M\nf: a cost function f mathcal Mℝ to find a minimizer p^* for\n\nKeyword arguments\n\np_m=rand(M): an initial point p\nσ=1.0: initial standard deviation\nλ=4 + Int(floor(3 * log(manifold_dimension(M)))): population size (can be increased for a more thorough global search but decreasing is not recommended)\ntol_fun=1e-12: tolerance for the StopWhenPopulationCostConcentrated, similar to absolute difference between function values at subsequent points\ntol_x=1e-12: tolerance for the StopWhenPopulationStronglyConcentrated, similar to absolute difference between subsequent points but actually computed from distribution parameters.\nstopping_criterion=default_cma_es_stopping_criterion(M, λ; tol_fun=tol_fun, tol_x=tol_x): a functor indicating that the stopping criterion is fulfilled\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\nbasis (DefaultOrthonormalBasis()) basis used to represent the covariance matrix coefficients\nrng=default_rng(): random number generator for generating new points on M\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. 
To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/cma_es/#State","page":"CMA-ES","title":"State","text":"","category":"section"},{"location":"solvers/cma_es/","page":"CMA-ES","title":"CMA-ES","text":"CMAESState","category":"page"},{"location":"solvers/cma_es/#Manopt.CMAESState","page":"CMA-ES","title":"Manopt.CMAESState","text":"CMAESState{P,T} <: AbstractManoptSolverState\n\nState of covariance matrix adaptation evolution strategy.\n\nFields\n\np::P: a point on the manifold mathcal M storing the best point found so far\np_obj objective value at p\nμ parent number\nλ population size\nμ_eff variance effective selection mass for the mean\nc_1 learning rate for the rank-one update\nc_c decay rate for cumulation path for the rank-one update\nc_μ learning rate for the rank-μ update\nc_σ decay rate for the cumulation path for the step-size control\nc_m learning rate for the mean\nd_σ damping parameter for step-size update\npopulation population of the current generation\nys_c coordinates of random vectors for the current generation\ncovariance_matrix coordinates of the covariance matrix\ncovariance_matrix_eigen eigen decomposition of covariance_matrix\ncovariance_matrix_cond condition number of covariance_matrix, updated after eigen decomposition\nbest_fitness_current_gen best fitness value of individuals in the current generation\nmedian_fitness_current_gen median fitness value of individuals in the current generation\nworst_fitness_current_gen worst fitness value of individuals in the current generation\np_m point around which the search for new candidates is done\nσ step size\np_σ coordinates of a vector in T_p_mmathcal M\np_c coordinates of a vector in T_p_mmathcal M\ndeviations standard deviations of coordinate RNG\nbuffer buffer for random number generation and wmean_y_c of length n_coords\ne_mv_norm expected value of norm of the 
n_coords-variable standard normal distribution\nrecombination_weights recombination weights used for updating the covariance matrix\nretraction_method::AbstractRetractionMethod: a retraction operatornameretr to use, see the section on retractions\nstop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled\nvector_transport_method::AbstractVectorTransportMethod: a vector transport mathcal T_ to use, see the section on vector transports\nbasis a real coefficient basis for the covariance matrix\nrng RNG for generating new points\n\nConstructor\n\nCMAESState(\n M::AbstractManifold,\n p_m::P,\n μ::Int,\n λ::Int,\n μ_eff::TParams,\n c_1::TParams,\n c_c::TParams,\n c_μ::TParams,\n c_σ::TParams,\n c_m::TParams,\n d_σ::TParams,\n stop::TStopping,\n covariance_matrix::Matrix{TParams},\n σ::TParams,\n recombination_weights::Vector{TParams};\n retraction_method::TRetraction=default_retraction_method(M, typeof(p_m)),\n vector_transport_method::TVTM=default_vector_transport_method(M, typeof(p_m)),\n basis::TB=DefaultOrthonormalBasis(),\n rng::TRng=default_rng(),\n) where {\n P,\n TParams<:Real,\n TStopping<:StoppingCriterion,\n TRetraction<:AbstractRetractionMethod,\n TVTM<:AbstractVectorTransportMethod,\n TB<:AbstractBasis,\n TRng<:AbstractRNG,\n}\n\nSee also\n\ncma_es\n\n\n\n\n\n","category":"type"},{"location":"solvers/cma_es/#Stopping-criteria","page":"CMA-ES","title":"Stopping criteria","text":"","category":"section"},{"location":"solvers/cma_es/","page":"CMA-ES","title":"CMA-ES","text":"StopWhenBestCostInGenerationConstant\nStopWhenCovarianceIllConditioned\nStopWhenEvolutionStagnates\nStopWhenPopulationCostConcentrated\nStopWhenPopulationDiverges\nStopWhenPopulationStronglyConcentrated","category":"page"},{"location":"solvers/cma_es/#Manopt.StopWhenBestCostInGenerationConstant","page":"CMA-ES","title":"Manopt.StopWhenBestCostInGenerationConstant","text":"StopWhenBestCostInGenerationConstant <: StoppingCriterion\n\nStop if the range of the best 
objective function values of the last iteration_range generations is zero. This corresponds to the EqualFunValues condition from [Han23].\n\nSee also StopWhenPopulationCostConcentrated.\n\n\n\n\n\n","category":"type"},{"location":"solvers/cma_es/#Manopt.StopWhenCovarianceIllConditioned","page":"CMA-ES","title":"Manopt.StopWhenCovarianceIllConditioned","text":"StopWhenCovarianceIllConditioned <: StoppingCriterion\n\nStop CMA-ES if the condition number of the covariance matrix exceeds the threshold. This corresponds to the ConditionCov condition from [Han23].\n\n\n\n\n\n","category":"type"},{"location":"solvers/cma_es/#Manopt.StopWhenEvolutionStagnates","page":"CMA-ES","title":"Manopt.StopWhenEvolutionStagnates","text":"StopWhenEvolutionStagnates{TParam<:Real} <: StoppingCriterion\n\nThe best and median fitness in each iteration is tracked over the last 20% but at least min_size and no more than max_size iterations. The solver is stopped if in both histories the median of the most recent fraction of values is not better than the median of the oldest fraction.\n\n\n\n\n\n","category":"type"},{"location":"solvers/cma_es/#Manopt.StopWhenPopulationCostConcentrated","page":"CMA-ES","title":"Manopt.StopWhenPopulationCostConcentrated","text":"StopWhenPopulationCostConcentrated{TParam<:Real} <: StoppingCriterion\n\nStop if the range of the best objective function value in the last max_size generations and all function values in the current generation is below tol. This corresponds to the TolFun condition from [Han23].\n\nConstructor\n\nStopWhenPopulationCostConcentrated(tol::Real, max_size::Int)\n\n\n\n\n\n","category":"type"},{"location":"solvers/cma_es/#Manopt.StopWhenPopulationDiverges","page":"CMA-ES","title":"Manopt.StopWhenPopulationDiverges","text":"StopWhenPopulationDiverges{TParam<:Real} <: StoppingCriterion\n\nStop if σ times the maximum deviation increased by more than tol. This usually indicates a far too small σ, or divergent behavior. 
This corresponds to the TolXUp condition from [Han23].\n\n\n\n\n\n","category":"type"},{"location":"solvers/cma_es/#Manopt.StopWhenPopulationStronglyConcentrated","page":"CMA-ES","title":"Manopt.StopWhenPopulationStronglyConcentrated","text":"StopWhenPopulationStronglyConcentrated{TParam<:Real} <: StoppingCriterion\n\nStop if the standard deviation in all coordinates is smaller than tol and the norm of σ * p_c is smaller than tol. This corresponds to the TolX condition from [Han23].\n\nFields\n\ntol the tolerance to verify against\nat_iteration an internal field to indicate at which iteration i geq 0 the tolerance was met.\n\nConstructor\n\nStopWhenPopulationStronglyConcentrated(tol::Real)\n\n\n\n\n\n","category":"type"},{"location":"solvers/cma_es/#sec-cma-es-technical-details","page":"CMA-ES","title":"Technical details","text":"","category":"section"},{"location":"solvers/cma_es/","page":"CMA-ES","title":"CMA-ES","text":"The cma_es solver requires the following functions of a manifold to be available","category":"page"},{"location":"solvers/cma_es/","page":"CMA-ES","title":"CMA-ES","text":"A retract!(M, q, p, X); it is recommended to set the default_retraction_method to a favourite retraction. If this default is set, a retraction_method= does not have to be specified.\nA vector_transport_to!(M, Y, p, X, q); it is recommended to set the default_vector_transport_method to a favourite vector transport. 
If this default is set, a vector_transport_method= does not have to be specified.\nA copyto!(M, q, p) and copy(M,p) for points and similarly copy(M, p, X) for tangent vectors.\nget_coordinates!(M, Y, p, X, b) and get_vector!(M, X, p, c, b) with respect to the AbstractBasis b provided, which is DefaultOrthonormalBasis by default from the basis= keyword.\nAn is_flat(M).","category":"page"},{"location":"solvers/cma_es/#Internal-helpers","page":"CMA-ES","title":"Internal helpers","text":"","category":"section"},{"location":"solvers/cma_es/","page":"CMA-ES","title":"CMA-ES","text":"You may add new methods to eigenvector_transport! if you know a more optimized implementation for your manifold.","category":"page"},{"location":"solvers/cma_es/","page":"CMA-ES","title":"CMA-ES","text":"Manopt.eigenvector_transport!","category":"page"},{"location":"solvers/cma_es/#Manopt.eigenvector_transport!","page":"CMA-ES","title":"Manopt.eigenvector_transport!","text":"eigenvector_transport!(\n M::AbstractManifold,\n matrix_eigen::Eigen,\n p,\n q,\n basis::AbstractBasis,\n vtm::AbstractVectorTransportMethod,\n)\n\nTransport the matrix with matrix_eigen eigen decomposition when expanded in basis from point p to point q on M. Update matrix_eigen in-place.\n\n(p, matrix_eigen) belongs to the fiber bundle of B = mathcal M SPD(n), where n is the (real) dimension of M. The function corresponds to the Ehresmann connection defined by vector transport vtm of eigenvectors of matrix_eigen.\n\n\n\n\n\n","category":"function"},{"location":"solvers/cma_es/#Literature","page":"CMA-ES","title":"Literature","text":"","category":"section"},{"location":"solvers/cma_es/","page":"CMA-ES","title":"CMA-ES","text":"S. Colutto, F. Frühauf, M. Fuchs and O. Scherzer. The CMA-ES on Riemannian Manifolds to Reconstruct Shapes in 3-D Voxel Images. IEEE Transactions on Evolutionary Computation 14, 227–245 (2010).\n\n\n\nN. Hansen. The CMA Evolution Strategy: A Tutorial. 
ArXiv Preprint (2023).\n\n\n\n","category":"page"},{"location":"plans/record/#sec-record","page":"Recording values","title":"Record values","text":"","category":"section"},{"location":"plans/record/","page":"Recording values","title":"Recording values","text":"CurrentModule = Manopt","category":"page"},{"location":"plans/record/","page":"Recording values","title":"Recording values","text":"To record values during the iterations of a solver run, there are in general two possibilities. On the one hand, the high-level interfaces provide a record= keyword that accepts several different inputs. For more details see How to record.","category":"page"},{"location":"plans/record/#subsec-record-states","page":"Recording values","title":"Record Actions & the solver state decorator","text":"","category":"section"},{"location":"plans/record/","page":"Recording values","title":"Recording values","text":"Modules = [Manopt]\nPages = [\"plans/record.jl\"]\nOrder = [:type]","category":"page"},{"location":"plans/record/#Manopt.RecordAction","page":"Recording values","title":"Manopt.RecordAction","text":"RecordAction\n\nA RecordAction is a small functor to record values. The usual call is given by\n\n(amp::AbstractManoptProblem, ams::AbstractManoptSolverState, k) -> s\n\nthat performs the record for the current problem and solver combination, and where k is the current iteration.\n\nBy convention k=0 is interpreted as \"for initialization only\": only initialize internal values, but do not trigger any record. Note that the record is also called from within stop_solver!, which returns true afterwards.\n\nAny negative value is interpreted as a “reset”, and should hence delete all stored recordings, for example when reusing a RecordAction. The start of a solver calls the :Iteration and :Stop dictionary entries with -1, to reset those recordings.\n\nBy default any RecordAction is assumed to record its values in a field recorded_values, a Vector of recorded values. 
See get_record(ra).\n\n\n\n\n\n","category":"type"},{"location":"plans/record/#Manopt.RecordChange","page":"Recording values","title":"Manopt.RecordChange","text":"RecordChange <: RecordAction\n\nRecord the amount of change of the iterate (see get_iterate(s) of the AbstractManoptSolverState) during the last iteration.\n\nFields\n\nstorage : a StoreStateAction to store (at least) the last iterate to use this as the last value (to compute the change) serving as a potential cache shared with other components of the solver.\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nrecorded_values : to store the recorded values\n\nConstructor\n\nRecordChange(M=DefaultManifold();\n inverse_retraction_method = default_inverse_retraction_method(M),\n storage = StoreStateAction(M; store_points=Tuple{:Iterate})\n)\n\nwith the previous fields as keywords. For the DefaultManifold only the field storage is used. 
Providing the actual manifold moves the default storage to the efficient point storage.\n\n\n\n\n\n","category":"type"},{"location":"plans/record/#Manopt.RecordCost","page":"Recording values","title":"Manopt.RecordCost","text":"RecordCost <: RecordAction\n\nRecord the current cost function value, see get_cost.\n\nFields\n\nrecorded_values : to store the recorded values\n\nConstructor\n\nRecordCost()\n\n\n\n\n\n","category":"type"},{"location":"plans/record/#Manopt.RecordEntry","page":"Recording values","title":"Manopt.RecordEntry","text":"RecordEntry{T} <: RecordAction\n\nrecord a certain field's entry of type T during the iterates\n\nFields\n\nrecorded_values : the recorded values\nfield : the Symbol the entry can be accessed with within the AbstractManoptSolverState\n\nConstructor\n\nRecordEntry(::T, f::Symbol)\nRecordEntry(T::DataType, f::Symbol)\n\nInitialize the record action to record the state field f, and initialize the recorded_values to be a vector of element type T.\n\nExamples\n\nRecordEntry(rand(M), :q) to record the points from M stored in some states s.q\nRecordEntry(SVDMPoint, :p) to record the field s.p which takes values of type SVDMPoint.\n\n\n\n\n\n","category":"type"},{"location":"plans/record/#Manopt.RecordEntryChange","page":"Recording values","title":"Manopt.RecordEntryChange","text":"RecordEntryChange{T} <: RecordAction\n\nrecord a certain entry's change during the iterates\n\nAdditional fields\n\nrecorded_values : the recorded values\nfield : the Symbol the field can be accessed with within the AbstractManoptSolverState\ndistance : function (p,o,x1,x2) to compute the change/distance between two values of the entry\nstorage : a StoreStateAction to store (at least) getproperty(o, d.field)\n\nConstructor\n\nRecordEntryChange(f::Symbol, d, a::StoreStateAction=StoreStateAction([f]))\n\n\n\n\n\n","category":"type"},{"location":"plans/record/#Manopt.RecordEvery","page":"Recording values","title":"Manopt.RecordEvery","text":"RecordEvery <: RecordAction\n\nrecord 
only every kth iteration. Otherwise (optionally, but activated by default) just update internal tracking values.\n\nThis method does not perform any record itself but relies on its children's methods\n\n\n\n\n\n","category":"type"},{"location":"plans/record/#Manopt.RecordGroup","page":"Recording values","title":"Manopt.RecordGroup","text":"RecordGroup <: RecordAction\n\ngroup a set of RecordActions into one action, where the internal RecordActions act independently, but the results can be collected in a grouped fashion, one tuple per call of this group. The entries can be later addressed either by index or by semantic Symbols\n\nConstructors\n\nRecordGroup(g::Array{<:RecordAction, 1})\n\nconstruct a group consisting of an Array of RecordActions g,\n\nRecordGroup(g, symbols)\n\nExamples\n\ng1 = RecordGroup([RecordIteration(), RecordCost()])\n\nA RecordGroup to record the current iteration and the cost. The cost can then be accessed using get_record(r,2) or r[2].\n\ng2 = RecordGroup([RecordIteration(), RecordCost()], Dict(:Cost => 2))\n\nA RecordGroup to record the current iteration and the cost, which can then be accessed using get_record(r, :Cost) or r[:Cost].\n\ng3 = RecordGroup([RecordIteration(), RecordCost() => :Cost])\n\nA RecordGroup identical to the previous constructor, just a little easier to use. 
To access all recordings of the second entry of this last g3 you can do either g3[2] or g3[:Cost]; the first entry can only be accessed by g3[1], since no symbol was given here.\n\n\n\n\n\n","category":"type"},{"location":"plans/record/#Manopt.RecordIterate","page":"Recording values","title":"Manopt.RecordIterate","text":"RecordIterate <: RecordAction\n\nrecord the iterate\n\nConstructors\n\nRecordIterate(x0)\n\ninitialize the iterate record array to the type of x0, which indicates the kind of iterate\n\nRecordIterate(P)\n\ninitialize the iterate record array to the data type P.\n\n\n\n\n\n","category":"type"},{"location":"plans/record/#Manopt.RecordIteration","page":"Recording values","title":"Manopt.RecordIteration","text":"RecordIteration <: RecordAction\n\nrecord the current iteration\n\n\n\n\n\n","category":"type"},{"location":"plans/record/#Manopt.RecordSolverState","page":"Recording values","title":"Manopt.RecordSolverState","text":"RecordSolverState <: AbstractManoptSolverState\n\nappend to any AbstractManoptSolverState the decorator with record capability. Internally, a dictionary is kept that stores a RecordAction for several concurrent modes using a Symbol as reference. The default mode is :Iteration, which is used to store information that is recorded during the iterations. 
RecordActions might be added to :Start or :Stop to record values at the beginning or for the stopping time point, respectively.\n\nThe original options can still be accessed using the get_state function.\n\nFields\n\noptions the options that are extended by record information\nrecordDictionary a Dict{Symbol,RecordAction} to keep track of all different recorded values\n\nConstructors\n\nRecordSolverState(o,dR)\n\nconstruct a record-decorated AbstractManoptSolverState, where dR can be\n\na RecordAction, then it is stored within the dictionary at :Iteration\nan Array of RecordActions, then it is stored as a recordDictionary.\na Dict{Symbol,RecordAction}.\n\n\n\n\n\n","category":"type"},{"location":"plans/record/#Manopt.RecordStoppingReason","page":"Recording values","title":"Manopt.RecordStoppingReason","text":"RecordStoppingReason <: RecordAction\n\nRecord the reason the solver stopped, see get_reason.\n\n\n\n\n\n","category":"type"},{"location":"plans/record/#Manopt.RecordSubsolver","page":"Recording values","title":"Manopt.RecordSubsolver","text":"RecordSubsolver <: RecordAction\n\nRecord the current subsolver's recording by calling get_record on the sub state with the symbols provided.\n\nFields\n\nrecords: an array to store the recorded values\nsymbols: arguments for get_record. 
Defaults to just one symbol :Iteration, but could be set to also record the :Stop action.\n\nConstructor\n\nRecordSubsolver(; record=[:Iteration,], record_type=eltype([]))\n\n\n\n\n\n","category":"type"},{"location":"plans/record/#Manopt.RecordTime","page":"Recording values","title":"Manopt.RecordTime","text":"RecordTime <: RecordAction\n\nrecord the time elapsed during the current iteration.\n\nThe three possible modes are\n\n:cumulative record times without resetting the timer\n:iterative record times with resetting the timer\n:total record a time only at the end of an algorithm (see stop_solver!)\n\nThe default is :cumulative, and any non-listed symbol defaults to using this mode.\n\nConstructor\n\nRecordTime(; mode::Symbol=:cumulative)\n\n\n\n\n\n","category":"type"},{"location":"plans/record/#Manopt.RecordWhenActive","page":"Recording values","title":"Manopt.RecordWhenActive","text":"RecordWhenActive <: RecordAction\n\nrecord action that only records if the active boolean is set to true. This can be set from outside and is for example triggered by RecordEvery on recordings of the subsolver. 
While this might not be strictly necessary for subsolvers, recording values that are never accessible is not that useful.\n\nFields\n\nactive: a boolean that can be (de)activated from outside to turn recording on or off\nalways_update: whether or not to call the inner records with nonpositive iterates (init/reset)\n\nConstructor\n\nRecordWhenActive(r::RecordAction, active=true, always_update=true)\n\n\n\n\n\n","category":"type"},{"location":"plans/record/#Access-functions","page":"Recording values","title":"Access functions","text":"","category":"section"},{"location":"plans/record/","page":"Recording values","title":"Recording values","text":"Modules = [Manopt]\nPages = [\"plans/record.jl\"]\nOrder = [:function]\nPublic = true\nPrivate = false","category":"page"},{"location":"plans/record/#Base.getindex-Tuple{RecordGroup, Vararg{Any}}","page":"Recording values","title":"Base.getindex","text":"getindex(r::RecordGroup, s::Symbol)\nr[s]\ngetindex(r::RecordGroup, sT::NTuple{N,Symbol})\nr[sT]\ngetindex(r::RecordGroup, i)\nr[i]\n\nreturn an array of recorded values with respect to the symbol s, the symbols from the tuple sT or the index i. See get_record for details.\n\n\n\n\n\n","category":"method"},{"location":"plans/record/#Base.getindex-Tuple{RecordSolverState, Symbol}","page":"Recording values","title":"Base.getindex","text":"getindex(rs::RecordSolverState, s::Symbol)\nrs[s]\n\nGet the recorded values for recorded type s, see get_record for details.\n\ngetindex(rs::RecordSolverState, s::Symbol, i...)\nrs[s, i...]\n\nAccess the recording type s and call its RecordAction with [i...].\n\n\n\n\n\n","category":"method"},{"location":"plans/record/#Manopt.get_record","page":"Recording values","title":"Manopt.get_record","text":"get_record(s::AbstractManoptSolverState, [,symbol=:Iteration])\nget_record(s::RecordSolverState, [,symbol=:Iteration])\n\nreturn the recorded values from within the RecordSolverState s that were recorded with respect to the Symbol symbol as an Array. 
The default refers to any recordings during an :Iteration.\n\nWhen called with arbitrary AbstractManoptSolverState, this method looks for the RecordSolverState decorator and calls get_record on the decorator.\n\n\n\n\n\n","category":"function"},{"location":"plans/record/#Manopt.get_record-Tuple{RecordAction}","page":"Recording values","title":"Manopt.get_record","text":"get_record(r::RecordAction)\n\nreturn the recorded values stored within a RecordAction r.\n\n\n\n\n\n","category":"method"},{"location":"plans/record/#Manopt.get_record-Tuple{RecordGroup}","page":"Recording values","title":"Manopt.get_record","text":"get_record(r::RecordGroup)\n\nreturn an array of tuples, where each tuple is a recorded set per iteration or record call.\n\nget_record(r::RecordGroup, k::Int)\n\nreturn an array of values corresponding to the kth entry in this record group\n\nget_record(r::RecordGroup, s::Symbol)\n\nreturn an array of recorded values with respect to the symbol s, see RecordGroup.\n\nget_record(r::RecordGroup, s1::Symbol, s2::Symbol,...)\n\nreturn an array of tuples, where each tuple is a recorded set corresponding to the symbols s1, s2,... 
per iteration / record call.\n\n\n\n\n\n","category":"method"},{"location":"plans/record/#Manopt.get_record_action","page":"Recording values","title":"Manopt.get_record_action","text":"get_record_action(s::AbstractManoptSolverState, symbol::Symbol)\n\nreturn the action contained in the (first) RecordSolverState decorator within the AbstractManoptSolverState s.\n\n\n\n\n\n","category":"function"},{"location":"plans/record/#Manopt.get_record_state-Tuple{AbstractManoptSolverState}","page":"Recording values","title":"Manopt.get_record_state","text":"get_record_state(s::AbstractManoptSolverState)\n\nreturn the RecordSolverState among the decorators from the AbstractManoptSolverState s\n\n\n\n\n\n","category":"method"},{"location":"plans/record/#Manopt.has_record-Tuple{RecordSolverState}","page":"Recording values","title":"Manopt.has_record","text":"has_record(s::AbstractManoptSolverState)\n\nIndicate whether the AbstractManoptSolverState s is decorated with RecordSolverState\n\n\n\n\n\n","category":"method"},{"location":"plans/record/#Internal-factory-functions","page":"Recording values","title":"Internal factory functions","text":"","category":"section"},{"location":"plans/record/","page":"Recording values","title":"Recording values","text":"Modules = [Manopt]\nPages = [\"plans/record.jl\"]\nOrder = [:function]\nPublic = false\nPrivate = true","category":"page"},{"location":"plans/record/#Manopt.RecordActionFactory-Tuple{AbstractManoptSolverState, RecordAction}","page":"Recording values","title":"Manopt.RecordActionFactory","text":"RecordActionFactory(s::AbstractManoptSolverState, a)\n\ncreate a RecordAction where\n\na RecordAction is passed through\na Symbol creates\n:Change to record the change of the iterates, see RecordChange\n:Gradient to record the gradient, see RecordGradient\n:GradientNorm to record the norm of the gradient, see RecordGradientNorm\n:Iterate to record the iterate\n:Iteration to record the current iteration number\n:IterativeTime to record 
the time iteratively\n:Cost to record the current cost function value\n:Stepsize to record the current step size\n:Time to record the total time taken after every iteration.\n\nand every other symbol is passed to RecordEntry, which results in recording the field of the state with the symbol indicating the field of the solver to record.\n\n\n\n\n\n","category":"method"},{"location":"plans/record/#Manopt.RecordActionFactory-Union{Tuple{T}, Tuple{AbstractManoptSolverState, Tuple{Symbol, T}}} where T","page":"Recording values","title":"Manopt.RecordActionFactory","text":"RecordActionFactory(s::AbstractManoptSolverState, t::Tuple{Symbol, T}) where {T}\n\ncreate a RecordAction where\n\n(:Subsolver, s) creates a RecordSubsolver with record= set to the second tuple entry\n\nFor any other symbol the second entry is ignored and the symbol is used to generate a RecordEntry recording the field with the name symbol of s.\n\n\n\n\n\n","category":"method"},{"location":"plans/record/#Manopt.RecordFactory-Tuple{AbstractManoptSolverState, Vector}","page":"Recording values","title":"Manopt.RecordFactory","text":"RecordFactory(s::AbstractManoptSolverState, a)\n\nGenerate a dictionary of RecordActions.\n\nFirst all Symbols, Strings, RecordActions and numbers are collected, excluding :Stop and :WhenActive. This collected vector is added to the :Iteration => [...] pair. :Stop is added as :StoppingCriterion to the :Stop => [...] pair. If any of these two pairs does not exist, it is created when adding the corresponding symbols.\n\nFor each Pair of a Symbol and a Vector, the RecordGroupFactory is called for the Vector and the result is added to the record dictionary's entry with said symbol. 
This is wrapped into a RecordWhenActive when the :WhenActive symbol is present\n\nReturn value\n\nA dictionary for the different entry points where recording can happen, each containing a RecordAction to call.\n\nNote that upon the initialisation all dictionaries but the :StartAlgorithm one are called with an i=0 for reset.\n\n\n\n\n\n","category":"method"},{"location":"plans/record/#Manopt.RecordGroupFactory-Tuple{AbstractManoptSolverState, Vector}","page":"Recording values","title":"Manopt.RecordGroupFactory","text":"RecordGroupFactory(s::AbstractManoptSolverState, a)\n\nGenerate a RecordGroup of RecordActions. The following rules are used\n\nAny Symbol contained in a is passed to RecordActionFactory\nAny RecordAction is included as is.\n\nAny Pair of a RecordAction and a Symbol, for example RecordCost() => :A, is handled such that the corresponding record action can later be accessed as g[:A], where g is the record group generated here.\n\nIf this results in more than one RecordAction, a RecordGroup of these is built.\n\nIf any integers are present, the last of these is used to wrap the group in a RecordEvery(k).\n\nIf :WhenActive is present, the resulting Action is wrapped in RecordWhenActive, making it deactivatable by its parent solver.\n\n\n\n\n\n","category":"method"},{"location":"plans/record/#Manopt.record_or_reset!-Tuple{RecordAction, Any, Int64}","page":"Recording values","title":"Manopt.record_or_reset!","text":"record_or_reset!(r, v, k)\n\neither record (k>0 and not Inf) the value v within the RecordAction r or reset (k<0) the internal storage, where v has to match the internal value type of the corresponding RecordAction.\n\n\n\n\n\n","category":"method"},{"location":"plans/record/#Manopt.set_parameter!-Tuple{RecordSolverState, Val{:Record}, Vararg{Any}}","page":"Recording values","title":"Manopt.set_parameter!","text":"set_parameter!(ams::RecordSolverState, ::Val{:Record}, args...)\n\nSet certain values specified by args... 
into the elements of the recordDictionary\n\n\n\n\n\n","category":"method"},{"location":"plans/record/","page":"Recording values","title":"Recording values","text":"Further specific RecordActions can be found when specific types of AbstractManoptSolverState define them on their corresponding site.","category":"page"},{"location":"plans/record/#Technical-details","page":"Recording values","title":"Technical details","text":"","category":"section"},{"location":"plans/record/","page":"Recording values","title":"Recording values","text":"initialize_solver!(amp::AbstractManoptProblem, rss::RecordSolverState)\nstep_solver!(p::AbstractManoptProblem, s::RecordSolverState, k)\nstop_solver!(p::AbstractManoptProblem, s::RecordSolverState, k)","category":"page"},{"location":"plans/record/#Manopt.initialize_solver!-Tuple{AbstractManoptProblem, RecordSolverState}","page":"Recording values","title":"Manopt.initialize_solver!","text":"initialize_solver!(ams::AbstractManoptProblem, rss::RecordSolverState)\n\nExtend the initialization of the solver by a hook to run records that were added to the :Start entry.\n\n\n\n\n\n","category":"method"},{"location":"plans/record/#Manopt.step_solver!-Tuple{AbstractManoptProblem, RecordSolverState, Any}","page":"Recording values","title":"Manopt.step_solver!","text":"step_solver!(amp::AbstractManoptProblem, rss::RecordSolverState, k)\n\nExtend the kth step of the solver by a hook to run records that were added to the :Iteration entry.\n\n\n\n\n\n","category":"method"},{"location":"plans/record/#Manopt.stop_solver!-Tuple{AbstractManoptProblem, RecordSolverState, Any}","page":"Recording values","title":"Manopt.stop_solver!","text":"stop_solver!(amp::AbstractManoptProblem, rss::RecordSolverState, k)\n\nExtend the call to the stopping criterion by a hook to run records that were added to the :Stop entry.\n\n\n\n\n\n","category":"method"},{"location":"tutorials/Optimize/#Get-started:-optimize.","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"","category":"section"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"Ronny Bergmann","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"This tutorial introduces both the basics of optimisation on manifolds and how to use Manopt.jl to perform optimisation on manifolds in Julia.","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"For more theoretical background, see for example [Car92] for an introduction to Riemannian manifolds and [AMS08] or [Bou23] to read more about optimisation thereon.","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"Let mathcal M denote a Riemannian manifold and let f mathcal M ℝ be a cost function. The aim is to determine or obtain a point p^* where f is minimal, or in other words, p^* is a minimizer of f.","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"This can also be written as","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":" operatorname*argmin_p mathcal M f(p)","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"where the aim is to compute the minimizer p^* numerically. As an example, consider the generalisation of the (arithmetic) mean. 
In the Euclidean case with dmathbb N, that is for nmathbb N data points y_1ldotsy_n ℝ^d the mean","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":" frac1nsum_i=1^n y_i","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"can not be directly generalised to data q_1ldotsq_n mathcal M, since on a manifold there is no addition available. But the mean can also be characterised as the following minimizer","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":" operatorname*argmin_xℝ^d frac12nsum_i=1^n lVert x - y_irVert^2","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"and using the Riemannian distance d_mathcal M, this can be written on Riemannian manifolds, which is the so called Riemannian Center of Mass [Kar77]","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":" operatorname*argmin_pmathcal M\n frac12n sum_i=1^n d_mathcal M^2(p q_i)","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"Fortunately the gradient can be computed and is","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":" frac1n sum_i=1^n -log_p q_i","category":"page"},{"location":"tutorials/Optimize/#Loading-the-necessary-packages","page":"🏔️ Get started: optimize.","title":"Loading the necessary packages","text":"","category":"section"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"Let’s assume you have already installed both Manopt.jl and Manifolds.jl in Julia (using for example 
using Pkg; Pkg.add([\"Manopt\", \"Manifolds\"])). Then we can get started by loading both packages as well as Random.jl for persistency in this tutorial.","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"using Manopt, Manifolds, Random, LinearAlgebra, ManifoldDiff\nusing ManifoldDiff: grad_distance, prox_distance\nRandom.seed!(42);","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"Now assume we are on the Sphere mathcal M = mathbb S^2 and we generate some random points “around” some initial point p","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"n = 100\nσ = π / 8\nM = Sphere(2)\np = 1 / sqrt(2) * [1.0, 0.0, 1.0]\ndata = [exp(M, p, σ * rand(M; vector_at=p)) for i in 1:n];","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"Now we can define the cost function f and its (Riemannian) gradient operatornamegrad f for the Riemannian center of mass:","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"f(M, p) = sum(1 / (2 * n) * distance.(Ref(M), Ref(p), data) .^ 2)\ngrad_f(M, p) = sum(1 / n * grad_distance.(Ref(M), data, Ref(p)));","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"and just call gradient_descent. 
For a first start, we do not have to provide more than the manifold, the cost, the gradient, and a starting point, which we just set to the first data point","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"m1 = gradient_descent(M, f, grad_f, data[1])","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"3-element Vector{Float64}:\n 0.6868392807355564\n 0.006531599748261925\n 0.7267799809043942","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"In order to get more details, we further add the debug= keyword argument, which acts as a decorator pattern.","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"This way we can easily specify a certain debug to be printed. The goal is to get an output of the form","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"# i | Last Change: [...] | F(x): [...] |","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"but where we also want to fix the display format for the change and the cost numbers (the [...]). Furthermore, the reason why the solver stopped should be printed at the end.","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"These can easily be specified using either a Symbol when using the default format for numbers, or a tuple of a symbol and a format-string in the debug= keyword that is available for every solver. 
We can also, for illustration reasons, just look at the first 6 steps by setting a stopping_criterion=","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"m2 = gradient_descent(M, f, grad_f, data[1];\n debug=[:Iteration,(:Change, \"|Δp|: %1.9f |\"),\n (:Cost, \" F(x): %1.11f | \"), \"\\n\", :Stop],\n stopping_criterion = StopAfterIteration(6)\n )","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"Initial F(x): 0.32487988924 | \n# 1 |Δp|: 1.063609017 | F(x): 0.25232524046 | \n# 2 |Δp|: 0.809858671 | F(x): 0.20966960102 | \n# 3 |Δp|: 0.616665145 | F(x): 0.18546505598 | \n# 4 |Δp|: 0.470841764 | F(x): 0.17121604104 | \n# 5 |Δp|: 0.359345690 | F(x): 0.16300825911 | \n# 6 |Δp|: 0.274597420 | F(x): 0.15818548927 | \nThe algorithm reached its maximal number of iterations (6).\n\n3-element Vector{Float64}:\n 0.7533872481682505\n -0.06053107055583637\n 0.6547851890466334","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"See here for the list of available symbols.","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"info: Technical Detail\nThe debug= keyword is actually a list of DebugActions added to every iteration, allowing you to write your own ones even. Additionally, :Stop is an action added to the end of the solver to display the reason why the solver stopped.","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"The default stopping criterion for gradient_descent is, to either stop when the gradient is small (<1e-9) or a max number of iterations is reached (as a fallback). Combining stopping-criteria can be done by | or &. 
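Such a combined criterion can also be built up front and passed in via the stopping_criterion= keyword. This is a hedged sketch reusing the two criteria named in this tutorial, not a new API:

```julia
# stop when the gradient norm is small OR after at most 400 iterations
sc = StopWhenGradientNormLess(1e-14) | StopAfterIteration(400)
```

Using & instead would require both criteria to be fulfilled before the solver stops.
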
We further pass a number 25 to debug= to print an output only every 25th iteration:","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"m3 = gradient_descent(M, f, grad_f, data[1];\n debug=[:Iteration,(:Change, \"|Δp|: %1.9f |\"),\n (:Cost, \" F(x): %1.11f | \"), \"\\n\", :Stop, 25],\n stopping_criterion = StopWhenGradientNormLess(1e-14) | StopAfterIteration(400),\n)","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"Initial F(x): 0.32487988924 | \n# 25 |Δp|: 0.459715605 | F(x): 0.15145076374 | \n# 50 |Δp|: 0.000551270 | F(x): 0.15145051509 | \nThe algorithm reached approximately critical point after 73 iterations; the gradient norm (9.988871119384563e-16) is less than 1.0e-14.\n\n3-element Vector{Float64}:\n 0.6868392794788668\n 0.006531600680779286\n 0.7267799820836411","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"We can finally use another way to determine the stepsize, for example the slightly more expensive ArmijoLinesearch instead of the default stepsize rule used on the Sphere.","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"m4 = gradient_descent(M, f, grad_f, data[1];\n debug=[:Iteration,(:Change, \"|Δp|: %1.9f |\"),\n (:Cost, \" F(x): %1.11f | \"), \"\\n\", :Stop, 2],\n stepsize = ArmijoLinesearch(; contraction_factor=0.999, sufficient_decrease=0.5),\n stopping_criterion = StopWhenGradientNormLess(1e-14) | StopAfterIteration(400),\n)","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"Initial F(x): 0.32487988924 | \n# 2 |Δp|: 0.001318138 | F(x): 0.15145051509 | \n# 4 |Δp|: 0.000000004 | F(x): 0.15145051509 | \n# 6 |Δp|: 0.000000000 | F(x): 0.15145051509 | \nThe 
algorithm reached approximately critical point after 7 iterations; the gradient norm (5.073696618059386e-15) is less than 1.0e-14.\n\n3-element Vector{Float64}:\n 0.6868392794788669\n 0.006531600680779358\n 0.7267799820836413","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"Then we reach approximately the same point as in the previous run, but in far fewer steps","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"[f(M, m3)-f(M,m4), distance(M, m3, m4)]","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"2-element Vector{Float64}:\n 1.6653345369377348e-16\n 1.727269835930624e-16","category":"page"},{"location":"tutorials/Optimize/#Using-the-tutorial-mode","page":"🏔️ Get started: optimize.","title":"Using the tutorial mode","text":"","category":"section"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"Since a few things on manifolds are a bit different from (classical) Euclidean optimization, Manopt.jl has a mode to warn about a few pitfalls.","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"It can be set using","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"Manopt.set_parameter!(:Mode, \"Tutorial\")","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"[ Info: Setting the `Manopt.jl` parameter :Mode to Tutorial.","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"to activate these. 
Continuing from the example before, one might argue that the minimizer of f does not depend on the scaling of the function. In theory this is of course also the case on manifolds, but for the optimisation there is a caveat. When we define the Riemannian mean without the scaling","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"f2(M, p) = sum(1 / 2 * distance.(Ref(M), Ref(p), data) .^ 2)\ngrad_f2(M, p) = sum(grad_distance.(Ref(M), data, Ref(p)));","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"and consider the norm of the gradient at the starting point","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"norm(M, data[1], grad_f2(M, data[1]))","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"57.47318616893399","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"On the sphere, when we follow a geodesic, we “return” to the start point after length 2π. If we “land” just short of the starting point due to a gradient of length just shy of 2π, the line search would take the gradient direction (and not the negative gradient direction) as a start. The line search is still performed, but in this case returns a much too small, maybe even nearly zero step size.","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"In other words, we have to be careful that the optimisation stays within a region where the “local” arguments we use remain valid.","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"This is also warned about in \"Tutorial\" mode. 
Calling","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"mX = gradient_descent(M, f2, grad_f2, data[1])","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"┌ Warning: At iteration #0\n│ the gradient norm (57.47318616893399) is larger that 1.0 times the injectivity radius 3.141592653589793 at the current iterate.\n└ @ Manopt ~/work/Manopt.jl/Manopt.jl/src/plans/debug.jl:1120\n┌ Warning: Further warnings will be suppressed, use DebugWarnIfGradientNormTooLarge(1.0, :Always) to get all warnings.\n└ @ Manopt ~/work/Manopt.jl/Manopt.jl/src/plans/debug.jl:1124\n\n3-element Vector{Float64}:\n 0.6868392794870684\n 0.006531600674920825\n 0.7267799820759485","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"So just by chance it seems we still get nearly the same point as before. But when we look at when this run stops, we see that it takes more steps.","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"gradient_descent(M, f2, grad_f2, data[1], debug=[:Stop]);","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"The algorithm reached approximately critical point after 140 iterations; the gradient norm (6.807380063106406e-9) is less than 1.0e-8.","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"This also illustrates one way to deactivate the hints, namely by overwriting the debug= keyword, which in Tutorial mode contains additional warnings. 
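A minimal sketch of this first option, reusing the names from this example: passing an own debug= list replaces the Tutorial-mode defaults, and (assuming no other debug actions are wanted) an empty list should switch off the debug output, including the hints.

```julia
# an own debug= list overwrites the Tutorial-mode warnings;
# an empty list disables all debug output
mX = gradient_descent(M, f2, grad_f2, data[1]; debug=[])
```
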
The other option is to globally reset the :Mode back to","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"Manopt.set_parameter!(:Mode, \"\")","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"[ Info: Resetting the `Manopt.jl` parameter :Mode to default.","category":"page"},{"location":"tutorials/Optimize/#Example-2:-computing-the-median-of-symmetric-positive-definite-matrices","page":"🏔️ Get started: optimize.","title":"Example 2: computing the median of symmetric positive definite matrices","text":"","category":"section"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"For the second example let’s consider the manifold of 3×3 symmetric positive definite matrices and again 100 random points","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"N = SymmetricPositiveDefinite(3)\nm = 100\nσ = 0.005\nq = Matrix{Float64}(I, 3, 3)\ndata2 = [exp(N, q, σ * rand(N; vector_at=q)) for i in 1:m];","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"Instead of the mean, let’s consider a non-smooth optimisation task: the median can be generalised to manifolds as the minimiser of the sum of distances, see [Bac14]. 
We define","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"g(N, q) = sum(1 / (2 * m) * distance.(Ref(N), Ref(q), data2))","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"g (generic function with 1 method)","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"Since the function is non-smooth, we cannot use a gradient-based approach. But since for every summand the proximal map is available, we can use the cyclic proximal point algorithm (CPPA). We hence define the vector of proximal maps as","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"proxes_g = Function[(N, λ, q) -> prox_distance(N, λ / m, di, q, 1) for di in data2];","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"Besides looking at some debug prints, we can also easily record these values. Similarly to debug=, record= also accepts Symbols, see list here, to indicate things to record. 
We further set return_state to true to obtain not just the (approximate) minimizer but the whole solver state.","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"res = cyclic_proximal_point(N, g, proxes_g, data2[1];\n debug=[:Iteration,\" | \",:Change,\" | \",(:Cost, \"F(x): %1.12f\"),\"\\n\", 1000, :Stop,\n ],\n record=[:Iteration, :Change, :Cost, :Iterate],\n return_state=true,\n );","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"Initial | | F(x): 0.005875512856\n# 1000 | Last Change: 0.003704 | F(x): 0.003239019699\n# 2000 | Last Change: 0.000015 | F(x): 0.003238996105\n# 3000 | Last Change: 0.000005 | F(x): 0.003238991748\n# 4000 | Last Change: 0.000002 | F(x): 0.003238990225\n# 5000 | Last Change: 0.000001 | F(x): 0.003238989520\nThe algorithm reached its maximal number of iterations (5000).","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"note: Technical Detail\nThe recording is realised by RecordActions that are (also) executed at every iteration. 
These can also be individually implemented and added to the record= array instead of symbols.","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"First, the computed median can be accessed as","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"median = get_solver_result(res)","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"3×3 Matrix{Float64}:\n 1.0 2.12236e-5 0.000398721\n 2.12236e-5 1.00044 0.000141798\n 0.000398721 0.000141798 1.00041","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"but we can also look at the recorded values. For simplicity (of output), let’s just look at the recorded values at iteration 42","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"get_record(res)[42]","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"(42, 1.0569455860769079e-5, 0.003252547739370045, [0.9998583866917449 0.0002098880312604301 0.0002895445818451581; 0.00020988803126037459 1.0000931572564762 0.0002084371501681892; 0.00028954458184524134 0.0002084371501681892 1.000070920743257])","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"But we can also access whole series and see that the cost does not decrease that fast; actually, the CPPA might converge relatively slowly. 
For that we can for example access the :Cost that was recorded every :Iterate as well as the (maybe a little boring) :Iteration-number in a semi-log-plot.","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"x = get_record(res, :Iteration, :Iteration)\ny = get_record(res, :Iteration, :Cost)\nusing Plots\nplot(x,y,xaxis=:log, label=\"CPPA Cost\")","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"(Image: )","category":"page"},{"location":"tutorials/Optimize/#Technical-details","page":"🏔️ Get started: optimize.","title":"Technical details","text":"","category":"section"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"This tutorial is cached. It was last run on the following package versions.","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"using Pkg\nPkg.status()","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"Status `~/work/Manopt.jl/Manopt.jl/tutorials/Project.toml`\n [6e4b80f9] BenchmarkTools v1.5.0\n⌅ [5ae59095] Colors v0.12.11\n [31c24e10] Distributions v0.25.113\n [26cc04aa] FiniteDifferences v0.12.32\n [7073ff75] IJulia v1.26.0\n [8ac3fa9e] LRUCache v1.6.1\n [af67fdf4] ManifoldDiff v0.3.13\n [1cead3c2] Manifolds v0.10.7\n [3362f125] ManifoldsBase v0.15.22\n [0fc0a36d] Manopt v0.5.3 `..`\n [91a5bcdd] Plots v1.40.9\n [731186ca] RecursiveArrayTools v3.27.4\nInfo Packages marked with ⌅ have new versions available but compatibility constraints restrict them from upgrading. 
To see why use `status --outdated`","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"using Dates\nnow()","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"2024-11-21T20:39:21.794","category":"page"},{"location":"tutorials/Optimize/#Literature","page":"🏔️ Get started: optimize.","title":"Literature","text":"","category":"section"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"P.-A. Absil, R. Mahony and R. Sepulchre. Optimization Algorithms on Matrix Manifolds (Princeton University Press, 2008), available online at press.princeton.edu/chapters/absil/.\n\n\n\nM. Bačák. Computing medians and means in Hadamard spaces. SIAM Journal on Optimization 24, 1542–1566 (2014), arXiv:1210.2145.\n\n\n\nN. Boumal. An Introduction to Optimization on Smooth Manifolds. First Edition (Cambridge University Press, 2023).\n\n\n\nM. P. do Carmo. Riemannian Geometry. Mathematics: Theory & Applications (Birkhäuser Boston, Inc., Boston, MA, 1992); p. xiv+300.\n\n\n\nH. Karcher. Riemannian center of mass and mollifier smoothing. 
Communications on Pure and Applied Mathematics 30, 509–541 (1977).\n\n\n\n","category":"page"},{"location":"solvers/adaptive-regularization-with-cubics/#Adaptive-regularization-with-cubics","page":"Adaptive Regularization with Cubics","title":"Adaptive regularization with cubics","text":"","category":"section"},{"location":"solvers/adaptive-regularization-with-cubics/","page":"Adaptive Regularization with Cubics","title":"Adaptive Regularization with Cubics","text":"CurrentModule = Manopt","category":"page"},{"location":"solvers/adaptive-regularization-with-cubics/","page":"Adaptive Regularization with Cubics","title":"Adaptive Regularization with Cubics","text":"adaptive_regularization_with_cubics\nadaptive_regularization_with_cubics!","category":"page"},{"location":"solvers/adaptive-regularization-with-cubics/#Manopt.adaptive_regularization_with_cubics","page":"Adaptive Regularization with Cubics","title":"Manopt.adaptive_regularization_with_cubics","text":"adaptive_regularization_with_cubics(M, f, grad_f, Hess_f, p=rand(M); kwargs...)\nadaptive_regularization_with_cubics(M, f, grad_f, p=rand(M); kwargs...)\nadaptive_regularization_with_cubics(M, mho, p=rand(M); kwargs...)\nadaptive_regularization_with_cubics!(M, f, grad_f, Hess_f, p; kwargs...)\nadaptive_regularization_with_cubics!(M, f, grad_f, p; kwargs...)\nadaptive_regularization_with_cubics!(M, mho, p; kwargs...)\n\nSolve an optimization problem on the manifold M by iteratively minimizing\n\nm_k(X) = f(p_k) + X operatornamegrad f(p^(k)) + frac12X operatornameHess f(p^(k))X + fracσ_k3lVert X rVert^3\n\non the tangent space at the current iterate p_k, where X T_p_kmathcal M and σ_k 0 is a regularization parameter.\n\nLet Xp^(k) denote the minimizer of the model m_k and use the model improvement\n\n ρ_k = fracf(p_k) - f(operatornameretr_p_k(X_k))m_k(0) - m_k(X_k) + fracσ_k3lVert X_krVert^3\n\nWith two thresholds η_2 η_1 0 set p_k+1 = operatornameretr_p_k(X_k) if ρ η_1 and reject the candidate otherwise, that 
is, set p_k+1 = p_k.\n\nFurther update the regularization parameter using factors 0 γ_1 1 γ_2 reads\n\nσ_k+1 =\nbegincases\n maxσ_min γ_1σ_k text if ρ geq η_2 text (the model was very successful)\n σ_k text if ρ η_1 η_2)text (the model was successful)\n γ_2σ_k text if ρ η_1text (the model was unsuccessful)\nendcases\n\nFor more details see [ABBC20].\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\ngrad_f: the (Riemannian) gradient operatornamegradf: \\mathcal M → T_{p}\\mathcal M of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place\nHess_f: the (Riemannian) Hessian operatornameHessf: T{p}\\mathcal M → T{p}\\mathcal M of f as a function (M, p, X) -> Y or a function (M, Y, p, X) -> Y computing Y in-place\np: a point on the manifold mathcal M\n\nthe cost f and its gradient and Hessian might also be provided as a ManifoldHessianObjective\n\nKeyword arguments\n\nσ=100.0 / sqrt(manifold_dimension(M)): initial regularization parameter\nσmin=1e-10: minimal regularization value σ_min\nη1=0.1: lower model success threshold\nη2=0.9: upper model success threshold\nγ1=0.1: regularization reduction factor (for the success case)\nγ2=2.0: regularization increment factor (for the non-success case)\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). 
Since usually the first argument is the manifold, the modified argument is the second.\ninitial_tangent_vector=zero_vector(M, p): initialize any tangent vector data,\nmaxIterLanczos=200: a shortcut to set the stopping criterion in the sub solver,\nρ_regularization=1e3: a regularization to avoid dividing by zero for small values of cost and model\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstopping_criterion=StopAfterIteration(40)|StopWhenGradientNormLess(1e-9)|StopWhenAllLanczosVectorsUsed(maxIterLanczos): a functor indicating that the stopping criterion is fulfilled\nsub_kwargs=(;): a named tuple of keyword arguments that are passed to decorate_objective! of the sub solver's objective, the decorate_state! of the sub solver's state, and the sub state constructor itself.\nsub_objective=nothing: a shortcut to modify the objective of the subproblem used within the sub_problem= keyword. By default, this is initialized as an AdaptiveRagularizationWithCubicsModelObjective, which can further be decorated by using the sub_kwargs= keyword.\nsub_state=LanczosState(M, copy(M,p)): a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.\nsub_problem=DefaultManoptProblem(M, sub_objective): specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nIf you provide the ManifoldGradientObjective directly, the evaluation= keyword is ignored. The decorations are still applied to the objective.\n\nIf you activate tutorial mode (cf. is_tutorial_mode), this solver provides additional debug warnings.\n\nOutput\n\nThe obtained approximate minimizer p^*. 
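As a hedged illustration going beyond the docstring above: assuming Manopt.jl and Manifolds.jl are available, a minimal call of the solver on the sphere might look as follows. The Rayleigh-quotient cost, its gradient, and its Hessian below are illustrative assumptions for this sketch, not part of the documented API.

```julia
using Manopt, Manifolds, LinearAlgebra

# Minimal sketch (assumption): minimize the Rayleigh quotient f(p) = p' * A * p
# on the 2-sphere; the minimizer is an eigenvector of the smallest eigenvalue of A.
M = Sphere(2)
A = Symmetric([2.0 1.0 0.0; 1.0 3.0 0.0; 0.0 0.0 1.0])
f(M, p) = p' * A * p
grad_f(M, p) = 2 .* (A * p .- (p' * A * p) .* p)                          # Riemannian gradient
Hess_f(M, p, X) = 2 .* (A * X .- (p' * A * p) .* X .- (p' * A * X) .* p)  # Riemannian Hessian

p0 = [1.0, 0.0, 0.0]
q = adaptive_regularization_with_cubics(M, f, grad_f, Hess_f, p0)
```

Since this depends on Manopt.jl at runtime, treat it as a sketch of the documented call signature rather than a verified run.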
To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/adaptive-regularization-with-cubics/#Manopt.adaptive_regularization_with_cubics!","page":"Adaptive Regularization with Cubics","title":"Manopt.adaptive_regularization_with_cubics!","text":"adaptive_regularization_with_cubics(M, f, grad_f, Hess_f, p=rand(M); kwargs...)\nadaptive_regularization_with_cubics(M, f, grad_f, p=rand(M); kwargs...)\nadaptive_regularization_with_cubics(M, mho, p=rand(M); kwargs...)\nadaptive_regularization_with_cubics!(M, f, grad_f, Hess_f, p; kwargs...)\nadaptive_regularization_with_cubics!(M, f, grad_f, p; kwargs...)\nadaptive_regularization_with_cubics!(M, mho, p; kwargs...)\n\nSolve an optimization problem on the manifold M by iteratively minimizing\n\nm_k(X) = f(p_k) + X operatornamegrad f(p^(k)) + frac12X operatornameHess f(p^(k))X + fracσ_k3lVert X rVert^3\n\non the tangent space at the current iterate p_k, where X T_p_kmathcal M and σ_k 0 is a regularization parameter.\n\nLet Xp^(k) denote the minimizer of the model m_k and use the model improvement\n\n ρ_k = fracf(p_k) - f(operatornameretr_p_k(X_k))m_k(0) - m_k(X_k) + fracσ_k3lVert X_krVert^3\n\nWith two thresholds η_2 η_1 0 set p_k+1 = operatornameretr_p_k(X_k) if ρ η_1 and reject the candidate otherwise, that is, set p_k+1 = p_k.\n\nFurther, the regularization parameter is updated using factors 0 < γ_1 < 1 < γ_2 as\n\nσ_k+1 =\nbegincases\n maxσ_min γ_1σ_k text if ρ geq η_2 text (the model was very successful)\n σ_k text if ρ η_1 η_2)text (the model was successful)\n γ_2σ_k text if ρ η_1text (the model was unsuccessful)\nendcases\n\nFor more details see [ABBC20].\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\ngrad_f: the (Riemannian) gradient operatornamegradf: \\mathcal M → T_{p}\\mathcal M of f as a function (M, p) -> X or a function 
(M, X, p) -> X computing X in-place\nHess_f: the (Riemannian) Hessian operatornameHessf: T{p}\\mathcal M → T{p}\\mathcal M of f as a function (M, p, X) -> Y or a function (M, Y, p, X) -> Y computing Y in-place\np: a point on the manifold mathcal M\n\nThe cost f and its gradient and Hessian might also be provided as a ManifoldHessianObjective.\n\nKeyword arguments\n\nσ=100.0 / sqrt(manifold_dimension(M)): initial regularization parameter\nσmin=1e-10: minimal regularization value σ_min\nη1=0.1: lower model success threshold\nη2=0.9: upper model success threshold\nγ1=0.1: regularization reduction factor (for the success case)\nγ2=2.0: regularization increment factor (for the non-success case)\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\ninitial_tangent_vector=zero_vector(M, p): initialize any tangent vector data,\nmaxIterLanczos=200: a shortcut to set the stopping criterion in the sub solver,\nρ_regularization=1e3: a regularization to avoid dividing by zero for small values of cost and model\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstopping_criterion=StopAfterIteration(40)|StopWhenGradientNormLess(1e-9)|StopWhenAllLanczosVectorsUsed(maxIterLanczos): a functor indicating that the stopping criterion is fulfilled\nsub_kwargs=(;): a named tuple of keyword arguments that are passed to decorate_objective! of the sub solver's objective, the decorate_state! 
of the sub solver's state, and the sub state constructor itself.\nsub_objective=nothing: a shortcut to modify the objective of the subproblem used within the sub_problem= keyword. By default, this is initialized as an AdaptiveRagularizationWithCubicsModelObjective, which can further be decorated by using the sub_kwargs= keyword.\nsub_state=LanczosState(M, copy(M,p)): a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.\nsub_problem=DefaultManoptProblem(M, sub_objective): specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nIf you provide the ManifoldGradientObjective directly, the evaluation= keyword is ignored. The decorations are still applied to the objective.\n\nIf you activate tutorial mode (cf. is_tutorial_mode), this solver provides additional debug warnings.\n\nOutput\n\nThe obtained approximate minimizer p^*. 
To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/adaptive-regularization-with-cubics/#State","page":"Adaptive Regularization with Cubics","title":"State","text":"","category":"section"},{"location":"solvers/adaptive-regularization-with-cubics/","page":"Adaptive Regularization with Cubics","title":"Adaptive Regularization with Cubics","text":"AdaptiveRegularizationState","category":"page"},{"location":"solvers/adaptive-regularization-with-cubics/#Manopt.AdaptiveRegularizationState","page":"Adaptive Regularization with Cubics","title":"Manopt.AdaptiveRegularizationState","text":"AdaptiveRegularizationState{P,T} <: AbstractHessianSolverState\n\nA state for the adaptive_regularization_with_cubics solver.\n\nFields\n\nη1, η2: bounds for evaluating the regularization parameter\nγ1, γ2: shrinking and expansion factors for regularization parameter σ\nH: the current Hessian evaluation\ns: the current solution from the subsolver\np::P: a point on the manifold mathcal M storing the current iterate\nq: a point for the candidates to evaluate model and ρ\nX::T: a tangent vector at the point p on the manifold mathcal M storing the gradient at the current iterate\ns: the tangent vector step resulting from minimizing the model problem in the tangent space T_pmathcal M\nσ: the current cubic regularization parameter\nσmin: lower bound for the cubic regularization parameter\nρ_regularization: regularization parameter for computing ρ. When approaching convergence ρ may be difficult to compute with numerator and denominator approaching zero. 
Regularizing the ratio lets ρ go to 1 near convergence.\nρ: the current regularized ratio of actual improvement and model improvement.\nρ_denominator: a value to store the denominator from the computation of ρ to allow for a warning or error when this value is non-positive.\nretraction_method::AbstractRetractionMethod: a retraction operatornameretr to use, see the section on retractions\nstop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled\nsub_problem::Union{AbstractManoptProblem, F}: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state::Union{AbstractManoptSolverState, F}: a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.\n\nConstructor\n\nAdaptiveRegularizationState(M, sub_problem, sub_state; kwargs...)\n\nConstruct the solver state with all fields stated as keyword arguments and the following defaults\n\nKeyword arguments\n\nη1=0.1\nη2=0.9\nγ1=0.1\nγ2=2.0\nσ=100/manifold_dimension(M)\nσmin=1e-7\nρ_regularization=1e3\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). 
Since usually the first argument is the manifold, the modified argument is the second.\np=rand(M): a point on the manifold mathcal M\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstopping_criterion=StopAfterIteration(100): a functor indicating that the stopping criterion is fulfilled\nX=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal M\n\n\n\n\n\n","category":"type"},{"location":"solvers/adaptive-regularization-with-cubics/#Sub-solvers","page":"Adaptive Regularization with Cubics","title":"Sub solvers","text":"","category":"section"},{"location":"solvers/adaptive-regularization-with-cubics/","page":"Adaptive Regularization with Cubics","title":"Adaptive Regularization with Cubics","text":"There are several ways to approach the subsolver. The default is the first one.","category":"page"},{"location":"solvers/adaptive-regularization-with-cubics/#arc-Lanczos","page":"Adaptive Regularization with Cubics","title":"Lanczos iteration","text":"","category":"section"},{"location":"solvers/adaptive-regularization-with-cubics/","page":"Adaptive Regularization with Cubics","title":"Adaptive Regularization with Cubics","text":"Manopt.LanczosState","category":"page"},{"location":"solvers/adaptive-regularization-with-cubics/#Manopt.LanczosState","page":"Adaptive Regularization with Cubics","title":"Manopt.LanczosState","text":"LanczosState{P,T,SC,B,I,R,TM,V,Y} <: AbstractManoptSolverState\n\nSolve the adaptive regularized subproblem with a Lanczos iteration\n\nFields\n\nstop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled\nstop_newton::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled, used for the inner Newton iteration\nσ: the current regularization parameter\nX: the iterate\nLanczos_vectors: the obtained Lanczos vectors\ntridig_matrix: the tridiagonal coefficient matrix T\ncoefficients: the coefficients 
y_1y_k that determine the solution\nHp: a temporary tangent vector containing the evaluation of the Hessian\nHp_residual: a temporary tangent vector containing the residual to the Hessian\nS: the current obtained / approximated solution\n\nConstructor\n\nLanczosState(TpM::TangentSpace; kwargs...)\n\nKeyword arguments\n\nX=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal M as the iterate\nmaxIterLanczos=200: shortcut to set the maximal number of iterations in the stopping_criterion=\nθ=0.5: set the parameter in the StopWhenFirstOrderProgress within the default stopping_criterion=.\nstopping_criterion=StopAfterIteration(maxIterLanczos)|StopWhenFirstOrderProgress(θ): a functor indicating that the stopping criterion is fulfilled\nstopping_criterion_newton=StopAfterIteration(200): a functor indicating that the stopping criterion is fulfilled, used for the inner Newton iteration\nσ=10.0: specify the regularization parameter\n\n\n\n\n\n","category":"type"},{"location":"solvers/adaptive-regularization-with-cubics/#(Conjugate)-gradient-descent","page":"Adaptive Regularization with Cubics","title":"(Conjugate) gradient descent","text":"","category":"section"},{"location":"solvers/adaptive-regularization-with-cubics/","page":"Adaptive Regularization with Cubics","title":"Adaptive Regularization with Cubics","text":"There is a generic objective that implements the sub problem","category":"page"},{"location":"solvers/adaptive-regularization-with-cubics/","page":"Adaptive Regularization with Cubics","title":"Adaptive Regularization with Cubics","text":"AdaptiveRagularizationWithCubicsModelObjective","category":"page"},{"location":"solvers/adaptive-regularization-with-cubics/#Manopt.AdaptiveRagularizationWithCubicsModelObjective","page":"Adaptive Regularization with Cubics","title":"Manopt.AdaptiveRagularizationWithCubicsModelObjective","text":"AdaptiveRagularizationWithCubicsModelObjective\n\nA model for the adaptive regularization with Cubics\n\nm(X) = 
f(p) + operatornamegrad f(p) X _p + frac12 operatornameHess f(p)X X_p\n + fracσ3 lVert X rVert^3\n\ncf. Eq. (33) in [ABBC20]\n\nFields\n\nobjective: an AbstractManifoldHessianObjective providing f, its gradient and Hessian\nσ: the current (cubic) regularization parameter\n\nConstructors\n\nAdaptiveRagularizationWithCubicsModelObjective(mho, σ=1.0)\n\nwith either an AbstractManifoldHessianObjective objective or a decorator containing such an objective.\n\n\n\n\n\n","category":"type"},{"location":"solvers/adaptive-regularization-with-cubics/","page":"Adaptive Regularization with Cubics","title":"Adaptive Regularization with Cubics","text":"Since the sub problem is given on the tangent space, you have to provide","category":"page"},{"location":"solvers/adaptive-regularization-with-cubics/","page":"Adaptive Regularization with Cubics","title":"Adaptive Regularization with Cubics","text":"arc_obj = AdaptiveRagularizationWithCubicsModelObjective(mho, σ)\nsub_problem = DefaultManoptProblem(TangentSpace(M, p), arc_obj)","category":"page"},{"location":"solvers/adaptive-regularization-with-cubics/","page":"Adaptive Regularization with Cubics","title":"Adaptive Regularization with Cubics","text":"where mho is the Hessian objective of f to solve. 
Then use this for the sub_problem keyword and use your favourite gradient based solver for the sub_state keyword, for example a ConjugateGradientDescentState","category":"page"},{"location":"solvers/adaptive-regularization-with-cubics/#Additional-stopping-criteria","page":"Adaptive Regularization with Cubics","title":"Additional stopping criteria","text":"","category":"section"},{"location":"solvers/adaptive-regularization-with-cubics/","page":"Adaptive Regularization with Cubics","title":"Adaptive Regularization with Cubics","text":"StopWhenAllLanczosVectorsUsed\nStopWhenFirstOrderProgress","category":"page"},{"location":"solvers/adaptive-regularization-with-cubics/#Manopt.StopWhenAllLanczosVectorsUsed","page":"Adaptive Regularization with Cubics","title":"Manopt.StopWhenAllLanczosVectorsUsed","text":"StopWhenAllLanczosVectorsUsed <: StoppingCriterion\n\nWhen an inner iteration has used up all Lanczos vectors, then this stopping criterion is a fallback / security stopping criterion to not access a non-existing field in the array allocated for vectors.\n\nNote that this stopping criterion (for now) is only implemented for the case of an AdaptiveRegularizationState using a LanczosState subsolver.\n\nFields\n\nmaxLanczosVectors: maximal number of Lanczos vectors\nat_iteration indicates at which iteration (including i=0) the stopping criterion was fulfilled and is -1 while it is not fulfilled.\n\nConstructor\n\nStopWhenAllLanczosVectorsUsed(maxLanczosVectors::Int)\n\n\n\n\n\n","category":"type"},{"location":"solvers/adaptive-regularization-with-cubics/#Manopt.StopWhenFirstOrderProgress","page":"Adaptive Regularization with Cubics","title":"Manopt.StopWhenFirstOrderProgress","text":"StopWhenFirstOrderProgress <: StoppingCriterion\n\nA stopping criterion related to the Riemannian adaptive regularization with cubics (ARC) solver indicating that the model function at the current (outer) iterate,\n\nm_k(X) = f(p_k) + X operatornamegrad f(p^(k)) + frac12X 
operatornameHess f(p^(k))X + fracσ_k3lVert X rVert^3\n\ndefined on the tangent space T_pmathcal M fulfills at the current iterate X_k that\n\nm(X_k) leq m(0)\nquadtext and quad\nlVert operatornamegrad m(X_k) rVert θ lVert X_k rVert^2\n\nFields\n\nθ: the factor θ in the second condition\nat_iteration::Int: an integer indicating at which the stopping criterion last indicated to stop, which might also be before the solver started (0). Any negative value indicates that this was not yet the case.\n\nConstructor\n\nStopWhenFirstOrderProgress(θ)\n\n\n\n\n\n","category":"type"},{"location":"solvers/adaptive-regularization-with-cubics/#sec-arc-technical-details","page":"Adaptive Regularization with Cubics","title":"Technical details","text":"","category":"section"},{"location":"solvers/adaptive-regularization-with-cubics/","page":"Adaptive Regularization with Cubics","title":"Adaptive Regularization with Cubics","text":"The adaptive_regularization_with_cubics requires the following functions of a manifold to be available","category":"page"},{"location":"solvers/adaptive-regularization-with-cubics/","page":"Adaptive Regularization with Cubics","title":"Adaptive Regularization with Cubics","text":"A retract!(M, q, p, X); it is recommended to set the default_retraction_method to a favourite retraction. 
If this default is set, a retraction_method= does not have to be specified.\nif you do not provide an initial regularization parameter σ, a manifold_dimension is required.\nBy default the tangent vector storing the gradient is initialized calling zero_vector(M,p).\ninner(M, p, X, Y) is used within the algorithm step","category":"page"},{"location":"solvers/adaptive-regularization-with-cubics/","page":"Adaptive Regularization with Cubics","title":"Adaptive Regularization with Cubics","text":"Furthermore, within the Lanczos subsolver, generating a random vector (at p) using rand!(M, X; vector_at=p) in place of X is required","category":"page"},{"location":"solvers/adaptive-regularization-with-cubics/#Literature","page":"Adaptive Regularization with Cubics","title":"Literature","text":"","category":"section"},{"location":"solvers/adaptive-regularization-with-cubics/","page":"Adaptive Regularization with Cubics","title":"Adaptive Regularization with Cubics","text":"N. Agarwal, N. Boumal, B. Bullins and C. Cartis. Adaptive regularization with cubics on manifolds. Mathematical Programming (2020).\n\n\n\n","category":"page"},{"location":"solvers/trust_regions/#The-Riemannian-trust-regions-solver","page":"Trust-Regions Solver","title":"The Riemannian trust regions solver","text":"","category":"section"},{"location":"solvers/trust_regions/","page":"Trust-Regions Solver","title":"Trust-Regions Solver","text":"Minimize a function","category":"page"},{"location":"solvers/trust_regions/","page":"Trust-Regions Solver","title":"Trust-Regions Solver","text":"operatorname*argmin_p mathcalM f(p)","category":"page"},{"location":"solvers/trust_regions/","page":"Trust-Regions Solver","title":"Trust-Regions Solver","text":"by using the Riemannian trust-regions solver following [ABG06]: a model is built by lifting the objective at the kth iterate p_k, locally mapping the cost function f to the tangent space as f_k T_p_kmathcal M ℝ with f_k(X) = f(operatornameretr_p_k(X)). 
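The lift f_k described just above can be sketched in a few lines of Julia; the manifold M, cost f, and iterate p_k below are illustrative assumptions (any manifold, cost, and point from Manifolds.jl would do).

```julia
using Manifolds

# Pull the cost back to the tangent space at the iterate p_k:
# f_k(X) = f(retr_{p_k}(X)), using the manifold's default retraction.
lifted_cost(M, f, p_k) = X -> f(M, retract(M, p_k, X))

M = Sphere(2)
f(M, p) = p[1]^2                 # toy cost, an assumption for this sketch
p_k = [0.0, 0.0, 1.0]
f_k = lifted_cost(M, f, p_k)
# At the zero tangent vector the lift reproduces the cost, f_k(0) = f(p_k),
# since retr_p(0) = p for any retraction.
```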
The trust region subproblem is then defined as","category":"page"},{"location":"solvers/trust_regions/","page":"Trust-Regions Solver","title":"Trust-Regions Solver","text":"operatorname*argmin_X T_p_kmathcal M m_k(X)","category":"page"},{"location":"solvers/trust_regions/","page":"Trust-Regions Solver","title":"Trust-Regions Solver","text":"where","category":"page"},{"location":"solvers/trust_regions/","page":"Trust-Regions Solver","title":"Trust-Regions Solver","text":"beginalign*\nm_k T_p_Kmathcal M ℝ\nm_k(X) = f(p_k) + operatornamegrad f(p_k) X_p_k + frac12langle mathcal H_k(X)X_p_k\ntextsuch that lVert X rVert_p_k Δ_k\nendalign*","category":"page"},{"location":"solvers/trust_regions/","page":"Trust-Regions Solver","title":"Trust-Regions Solver","text":"Here Δ_k is a trust-region radius that is adapted every iteration, and mathcal H_k is some symmetric linear operator that approximates the Hessian operatornameHess f of f.","category":"page"},{"location":"solvers/trust_regions/#Interface","page":"Trust-Regions Solver","title":"Interface","text":"","category":"section"},{"location":"solvers/trust_regions/","page":"Trust-Regions Solver","title":"Trust-Regions Solver","text":"trust_regions\ntrust_regions!","category":"page"},{"location":"solvers/trust_regions/#Manopt.trust_regions","page":"Trust-Regions Solver","title":"Manopt.trust_regions","text":"trust_regions(M, f, grad_f, Hess_f, p=rand(M); kwargs...)\ntrust_regions(M, f, grad_f, p=rand(M); kwargs...)\ntrust_regions!(M, f, grad_f, Hess_f, p; kwargs...)\ntrust_regions!(M, f, grad_f, p; kwargs...)\n\nrun the Riemannian trust-regions solver for optimization on manifolds to minimize f; see [ABG06, CGT00].\n\nFor the case that no Hessian is provided, the Hessian is computed using finite differences, see ApproxHessianFiniteDifference. 
For solving the inner trust-region subproblem of finding an update-vector, by default the truncated_conjugate_gradient_descent is used.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\ngrad_f: the (Riemannian) gradient operatornamegradf: \\mathcal M → T_{p}\\mathcal M of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place\nHess_f: the (Riemannian) Hessian operatornameHessf: T{p}\\mathcal M → T{p}\\mathcal M of f as a function (M, p, X) -> Y or a function (M, Y, p, X) -> Y computing Y in-place\np: a point on the manifold mathcal M\n\nKeyword arguments\n\nacceptance_rate: accept/reject threshold: if ρ (the performance ratio for the iterate) is at least the acceptance rate ρ', the candidate is accepted. This value should be between 0 and frac14\naugmentation_threshold=0.75: trust-region augmentation threshold: if ρ is larger than this threshold, the solution lies on the trust-region boundary with negative curvature, and the radius is extended (augmented)\naugmentation_factor=2.0: trust-region augmentation factor\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\nκ=0.1: the linear convergence target rate of the tCG method truncated_conjugate_gradient_descent, and is used in a stopping criterion therein\nmax_trust_region_radius: the maximum trust-region radius\npreconditioner: a preconditioner for the Hessian H. 
This is either an allocating function (M, p, X) -> Y or an in-place function (M, Y, p, X) -> Y, see evaluation, and by default set to the identity.\nproject!=copyto!: for numerical stability it is possible to project onto the tangent space after every iteration. The function has to work in place of Y, that is (M, Y, p, X) -> Y, where X and Y can be the same memory.\nrandomize=false: indicate whether X is initialised to a random vector or not. This disables preconditioning.\nρ_regularization=1e3: regularize the performance evaluation ρ to avoid numerical inaccuracies.\nreduction_factor=0.25: trust-region reduction factor\nreduction_threshold=0.1: trust-region reduction threshold: if ρ is below this threshold, the trust region radius is reduced by reduction_factor.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstopping_criterion=StopAfterIteration(1000)|StopWhenGradientNormLess(1e-6): a functor indicating that the stopping criterion is fulfilled\nsub_kwargs=(;): a named tuple of keyword arguments that are passed to decorate_objective! of the sub solver's objective, the decorate_state! of the sub solver's state, and the sub state constructor itself.\nsub_stopping_criterion=( see truncated_conjugate_gradient_descent): a functor indicating that the stopping criterion is fulfilled\nsub_problem=DefaultManoptProblem(M,ConstrainedManifoldObjective(subcost, subgrad; evaluation=evaluation)): specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state=QuasiNewtonState: a state to specify the sub solver to use. For a closed form solution, this indicates the type of function. 
where QuasiNewtonLimitedMemoryDirectionUpdate with InverseBFGS is used\nθ=1.0: the superlinear convergence target rate of 1+θ of the tCG-method truncated_conjugate_gradient_descent, and is used in a stopping criterion therein\ntrust_region_radius=injectivity_radius(M) / 4: the initial trust-region radius\n\nFor the case that no Hessian is provided, the Hessian is computed using finite differences, see ApproxHessianFiniteDifference.\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\nSee also\n\ntruncated_conjugate_gradient_descent\n\n\n\n\n\n","category":"function"},{"location":"solvers/trust_regions/#Manopt.trust_regions!","page":"Trust-Regions Solver","title":"Manopt.trust_regions!","text":"trust_regions(M, f, grad_f, Hess_f, p=rand(M); kwargs...)\ntrust_regions(M, f, grad_f, p=rand(M); kwargs...)\ntrust_regions!(M, f, grad_f, Hess_f, p; kwargs...)\ntrust_regions!(M, f, grad_f, p; kwargs...)\n\nrun the Riemannian trust-regions solver for optimization on manifolds to minimize f; see [ABG06, CGT00].\n\nFor the case that no Hessian is provided, the Hessian is computed using finite differences, see ApproxHessianFiniteDifference. 
For solving the inner trust-region subproblem of finding an update-vector, by default the truncated_conjugate_gradient_descent is used.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\ngrad_f: the (Riemannian) gradient operatornamegradf: \\mathcal M → T_{p}\\mathcal M of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place\nHess_f: the (Riemannian) Hessian operatornameHessf: T{p}\\mathcal M → T{p}\\mathcal M of f as a function (M, p, X) -> Y or a function (M, Y, p, X) -> Y computing Y in-place\np: a point on the manifold mathcal M\n\nKeyword arguments\n\nacceptance_rate: accept/reject threshold: if ρ (the performance ratio for the iterate) is at least the acceptance rate ρ', the candidate is accepted. This value should be between 0 and frac14\naugmentation_threshold=0.75: trust-region augmentation threshold: if ρ is larger than this threshold, the solution lies on the trust-region boundary with negative curvature, and the radius is extended (augmented)\naugmentation_factor=2.0: trust-region augmentation factor\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\nκ=0.1: the linear convergence target rate of the tCG method truncated_conjugate_gradient_descent, and is used in a stopping criterion therein\nmax_trust_region_radius: the maximum trust-region radius\npreconditioner: a preconditioner for the Hessian H. 
This is either an allocating function (M, p, X) -> Y or an in-place function (M, Y, p, X) -> Y, see evaluation, and by default set to the identity.\nproject!=copyto!: for numerical stability it is possible to project onto the tangent space after every iteration. The function has to work in place of Y, that is (M, Y, p, X) -> Y, where X and Y can be the same memory.\nrandomize=false: indicate whether X is initialised to a random vector or not. This disables preconditioning.\nρ_regularization=1e3: regularize the performance evaluation ρ to avoid numerical inaccuracies.\nreduction_factor=0.25: trust-region reduction factor\nreduction_threshold=0.1: trust-region reduction threshold: if ρ is below this threshold, the trust region radius is reduced by reduction_factor.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstopping_criterion=StopAfterIteration(1000)|StopWhenGradientNormLess(1e-6): a functor indicating that the stopping criterion is fulfilled\nsub_kwargs=(;): a named tuple of keyword arguments that are passed to decorate_objective! of the sub solver's objective, the decorate_state! of the sub solver's state, and the sub state constructor itself.\nsub_stopping_criterion=( see truncated_conjugate_gradient_descent): a functor indicating that the stopping criterion is fulfilled\nsub_problem=DefaultManoptProblem(M,ConstrainedManifoldObjective(subcost, subgrad; evaluation=evaluation)): specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state=QuasiNewtonState: a state to specify the sub solver to use. For a closed form solution, this indicates the type of function. 
where QuasiNewtonLimitedMemoryDirectionUpdate with InverseBFGS is used\nθ=1.0: the superlinear convergence target rate of 1+θ of the tCG-method truncated_conjugate_gradient_descent, and is used in a stopping criterion therein\ntrust_region_radius=injectivity_radius(M) / 4: the initial trust-region radius\n\nFor the case that no Hessian is provided, the Hessian is computed using finite differences, see ApproxHessianFiniteDifference.\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\nSee also\n\ntruncated_conjugate_gradient_descent\n\n\n\n\n\n","category":"function"},{"location":"solvers/trust_regions/#State","page":"Trust-Regions Solver","title":"State","text":"","category":"section"},{"location":"solvers/trust_regions/","page":"Trust-Regions Solver","title":"Trust-Regions Solver","text":"TrustRegionsState","category":"page"},{"location":"solvers/trust_regions/#Manopt.TrustRegionsState","page":"Trust-Regions Solver","title":"Manopt.TrustRegionsState","text":"TrustRegionsState <: AbstractHessianSolverState\n\nStore the state of the trust-regions solver.\n\nFields\n\nacceptance_rate: a lower bound of the performance ratio for the iterate that decides if the iteration is accepted or not.\nHX, HY, HZ: interim storage (to avoid allocation) of operatornameHess f(p)[⋅] applied to X, Y, Z\nmax_trust_region_radius: the maximum trust-region radius\np::P: a point on the manifold mathcal M storing the current iterate\nproject!: for numerical stability it is possible to project onto the tangent space after every iteration. 
The function has to work in-place of Y, that is (M, Y, p, X) -> Y, where X and Y can be the same memory.\nstop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled\nrandomize: indicate whether X is initialised to a random vector or not\nρ_regularization: regularize the model fitness ρ to avoid division by zero\nsub_problem::Union{AbstractManoptProblem, F}: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state::Union{AbstractManoptSolverState, F}: a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.\nσ: Gaussian standard deviation when creating the random initial tangent vector. This field has no effect when randomize is false.\ntrust_region_radius: the trust-region radius\nX::T: a tangent vector at the point p on the manifold mathcal M\nY: the solution (tangent vector) of the subsolver\nZ: the Cauchy point (only used if randomize is activated)\n\nConstructors\n\nTrustRegionsState(M, mho::AbstractManifoldHessianObjective; kwargs...)\nTrustRegionsState(M, sub_problem, sub_state; kwargs...)\nTrustRegionsState(M, sub_problem; evaluation=AllocatingEvaluation(), kwargs...)\n\ncreate a trust region state.\n\ngiven an AbstractManifoldHessianObjective mho, the default sub solver, a TruncatedConjugateGradientState with mho used to define the problem on a tangent space, is created\ngiven a sub_problem and an evaluation= keyword, the sub problem solver is assumed to be the closed form solution, where evaluation determines how to call the sub function.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nsub_problem: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state: a state to specify the sub solver to use. 
For a closed form solution, this indicates the type of function.\n\nKeyword arguments\n\nacceptance_rate=0.1\nmax_trust_region_radius=sqrt(manifold_dimension(M))\np=rand(M): a point on the manifold mathcal M to specify the initial value\nproject!=copyto!\nstopping_criterion=StopAfterIteration(1000)|StopWhenGradientNormLess(1e-6): a functor indicating that the stopping criterion is fulfilled\nrandomize=false\nρ_regularization=10000.0\nθ=1.0\ntrust_region_radius=max_trust_region_radius / 8\nX=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal M to specify the representation of a tangent vector\n\nSee also\n\ntrust_regions\n\n\n\n\n\n","category":"type"},{"location":"solvers/trust_regions/#Approximation-of-the-Hessian","page":"Trust-Regions Solver","title":"Approximation of the Hessian","text":"","category":"section"},{"location":"solvers/trust_regions/","page":"Trust-Regions Solver","title":"Trust-Regions Solver","text":"Several different methods to approximate the Hessian are available.","category":"page"},{"location":"solvers/trust_regions/","page":"Trust-Regions Solver","title":"Trust-Regions Solver","text":"ApproxHessianFiniteDifference\nApproxHessianSymmetricRankOne\nApproxHessianBFGS","category":"page"},{"location":"solvers/trust_regions/#Manopt.ApproxHessianFiniteDifference","page":"Trust-Regions Solver","title":"Manopt.ApproxHessianFiniteDifference","text":"ApproxHessianFiniteDifference{E, P, T, G, RTR, VTR, R <: Real} <: AbstractApproxHessian\n\nA functor to approximate the Hessian by a finite difference of gradient evaluations.\n\nGiven a point p and a direction X and the gradient operatornamegrad f(p) of a function f the Hessian is approximated as follows: let c be a stepsize, X ∈ T_pmathcal M a tangent vector and q = operatornameretr_p((c/lVert X rVert_p)X) be a step in direction X of length c following a retraction. Then the Hessian is approximated by the finite difference of the gradients, where mathcal T_ is a vector 
transport.\n\noperatornameHess f(p)[X] ≈ fraclVert X rVert_pc Bigl( mathcal T_{p gets q}bigl(operatornamegrad f(q)bigr) - operatornamegrad f(p) Bigr)\n\nFields\n\ngradient!!: the gradient function (either allocating or mutating, see evaluation parameter)\nstep_length: a step length for the finite difference\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\nInternal temporary fields\n\ngrad_tmp: a temporary storage for the gradient at the current p\ngrad_dir_tmp: a temporary storage for the gradient at the current p_dir\np_dir::P: a temporary storage for the forward direction (or the q in the formula)\n\nConstructor\n\nApproxHessianFiniteDifference(M, p, grad_f; kwargs...)\n\nKeyword arguments\n\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). 
Since usually the first argument is the manifold, the modified argument is the second.\nsteplength=2^{-14}: step length c to approximate the gradient evaluations\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\n\n\n\n\n","category":"type"},{"location":"solvers/trust_regions/#Manopt.ApproxHessianSymmetricRankOne","page":"Trust-Regions Solver","title":"Manopt.ApproxHessianSymmetricRankOne","text":"ApproxHessianSymmetricRankOne{E, P, G, T, B<:AbstractBasis{ℝ}, VTR, R<:Real} <: AbstractApproxHessian\n\nA functor to approximate the Hessian by the symmetric rank one update.\n\nFields\n\ngradient!!: the gradient function (either allocating or mutating, see evaluation parameter).\nν: a small real number to ensure that the denominator in the update does not become too small and thus the method does not break down.\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports.\n\nInternal temporary fields\n\np_tmp: a temporary storage for the current point p.\ngrad_tmp: a temporary storage for the gradient at the current p.\nmatrix: a temporary storage for the matrix representation of the approximating operator.\nbasis: a temporary storage for an orthonormal basis at the current p.\n\nConstructor\n\nApproxHessianSymmetricRankOne(M, p, gradF; kwargs...)\n\nKeyword arguments\n\ninitial_operator (Matrix{Float64}(I, manifold_dimension(M), manifold_dimension(M))) the matrix representation of the initial approximating operator.\nbasis (DefaultOrthonormalBasis()) an orthonormal basis in the tangent space of the initial iterate p.\nnu (-1)\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by 
allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\n\n\n\n\n","category":"type"},{"location":"solvers/trust_regions/#Manopt.ApproxHessianBFGS","page":"Trust-Regions Solver","title":"Manopt.ApproxHessianBFGS","text":"ApproxHessianBFGS{E, P, G, T, B<:AbstractBasis{ℝ}, VTR, R<:Real} <: AbstractApproxHessian\n\nA functor to approximate the Hessian by the BFGS update.\n\nFields\n\ngradient!! the gradient function (either allocating or mutating, see evaluation parameter).\nscale\nvector_transport_method::AbstractVectorTransportMethod: a vector transport mathcal T_ to use, see the section on vector transports\n\nInternal temporary fields\n\np_tmp a temporary storage for the current point p.\ngrad_tmp a temporary storage for the gradient at the current p.\nmatrix a temporary storage for the matrix representation of the approximating operator.\nbasis a temporary storage for an orthonormal basis at the current p.\n\nConstructor\n\nApproxHessianBFGS(M, p, gradF; kwargs...)\n\nKeyword arguments\n\ninitial_operator (Matrix{Float64}(I, manifold_dimension(M), manifold_dimension(M))) the matrix representation of the initial approximating operator.\nbasis (DefaultOrthonormalBasis()) an orthonormal basis in the tangent space of the initial iterate p.\nnu (-1)\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). 
Since usually the first argument is the manifold, the modified argument is the second.\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\n\n\n\n\n","category":"type"},{"location":"solvers/trust_regions/","page":"Trust-Regions Solver","title":"Trust-Regions Solver","text":"as well as their (non-exported) common supertype","category":"page"},{"location":"solvers/trust_regions/","page":"Trust-Regions Solver","title":"Trust-Regions Solver","text":"Manopt.AbstractApproxHessian","category":"page"},{"location":"solvers/trust_regions/#Manopt.AbstractApproxHessian","page":"Trust-Regions Solver","title":"Manopt.AbstractApproxHessian","text":"AbstractApproxHessian <: Function\n\nAn abstract supertype for approximate Hessian functions, declares them also to be functions.\n\n\n\n\n\n","category":"type"},{"location":"solvers/trust_regions/#sec-tr-technical-details","page":"Trust-Regions Solver","title":"Technical details","text":"","category":"section"},{"location":"solvers/trust_regions/","page":"Trust-Regions Solver","title":"Trust-Regions Solver","text":"The trust_regions solver requires the following functions of a manifold to be available","category":"page"},{"location":"solvers/trust_regions/","page":"Trust-Regions Solver","title":"Trust-Regions Solver","text":"A retract!(M, q, p, X); it is recommended to set the default_retraction_method to a favourite retraction. 
If this default is set, a retraction_method= does not have to be specified.\nBy default the stopping criterion uses the norm as well, to stop when the norm of the gradient is small, but if you implemented inner, the norm is provided already.\nIf you do not provide an initial max_trust_region_radius, a manifold_dimension is required.\nA copyto!(M, q, p) and copy(M, p) for points.\nBy default the tangent vectors are initialized calling zero_vector(M, p).","category":"page"},{"location":"solvers/trust_regions/#Literature","page":"Trust-Regions Solver","title":"Literature","text":"","category":"section"},{"location":"solvers/trust_regions/","page":"Trust-Regions Solver","title":"Trust-Regions Solver","text":"P.-A. Absil, C. Baker and K. Gallivan. Trust-Region Methods on Riemannian Manifolds. Foundations of Computational Mathematics 7, 303–330 (2006).\n\n\n\nA. R. Conn, N. I. Gould and P. L. Toint. Trust Region Methods (Society for Industrial and Applied Mathematics, 2000).\n\n\n\n","category":"page"},{"location":"plans/debug/#sec-debug","page":"Debug Output","title":"Debug output","text":"","category":"section"},{"location":"plans/debug/","page":"Debug Output","title":"Debug Output","text":"CurrentModule = Manopt","category":"page"},{"location":"plans/debug/","page":"Debug Output","title":"Debug Output","text":"Debug output can easily be added to any solver run. On the high level interfaces, like gradient_descent, you can just use the debug= keyword.","category":"page"},{"location":"plans/debug/","page":"Debug Output","title":"Debug Output","text":"Modules = [Manopt]\nPages = [\"plans/debug.jl\"]\nOrder = [:type, :function]\nPrivate = true","category":"page"},{"location":"plans/debug/#Manopt.DebugAction","page":"Debug Output","title":"Manopt.DebugAction","text":"DebugAction\n\nA DebugAction is a small functor to print/issue debug output. 
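To illustrate the debug= keyword mentioned above, here is a minimal sketch (it assumes Manifolds.jl is available; the manifold, cost, and gradient are illustrative choices for this example, not taken from the Manopt.jl documentation):

```julia
using Manopt, Manifolds

# Minimize the squared distance to a point q on the 2-sphere
M = Sphere(2)
q = [1.0, 0.0, 0.0]
f(M, p) = distance(M, p, q)^2
grad_f(M, p) = -2 * log(M, p, q)  # Riemannian gradient of the squared distance

# Print the iteration number, cost, and last change every 10th iteration,
# plus the reason the solver stopped
p_opt = gradient_descent(M, f, grad_f, [0.0, 0.0, 1.0];
    debug = [:Iteration, " | ", :Cost, " | ", :Change, "\n", 10, :Stop],
)
```

The symbols in the debug= vector are turned into DebugActions by the DebugFactory described later in this section; the integer 10 wraps the group in a DebugEvery.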
The usual call is given by (p::AbstractManoptProblem, s::AbstractManoptSolverState, k) -> s, where k is the current iteration.\n\nBy convention k=0 is interpreted as \"For Initialization only,\" so that only debug info that prints initialization reacts; k<0 triggers updates of variables internally but does not trigger any output.\n\nFields (assumed by subtypes to exist)\n\nprint method to perform the actual print. Can for example be set to a file export,\n\nor to @info. The default is the print function on the default Base.stdout.\n\n\n\n\n\n","category":"type"},{"location":"plans/debug/#Manopt.DebugChange","page":"Debug Output","title":"Manopt.DebugChange","text":"DebugChange(M=DefaultManifold(); kwargs...)\n\ndebug for the amount of change of the iterate (stored in get_iterate(o) of the AbstractManoptSolverState) during the last iteration. See DebugEntryChange for the general case.\n\nKeyword parameters\n\nstorage=StoreStateAction( [:Iterate] ): storage of the previous iterate\nprefix=\"Last Change:\": prefix of the debug output (ignored if you set format)\nio=stdout: default stream to print the debug to.\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\n\nthe inverse retraction to be used for approximating distance.\n\n\n\n\n\n","category":"type"},{"location":"plans/debug/#Manopt.DebugCost","page":"Debug Output","title":"Manopt.DebugCost","text":"DebugCost <: DebugAction\n\nprint the current cost function value, see get_cost.\n\nConstructors\n\nDebugCost()\n\nParameters\n\nformat=\"$prefix %f\": format to print the output\nio=stdout: default stream to print the debug to.\nlong=false: whether to use the short prefix f(x): (default) or the long prefix current cost: before the cost\n\n\n\n\n\n","category":"type"},{"location":"plans/debug/#Manopt.DebugDivider","page":"Debug Output","title":"Manopt.DebugDivider","text":"DebugDivider <: DebugAction\n\nprint a small divider 
(default \" | \").\n\nConstructor\n\nDebugDivider(div,print)\n\n\n\n\n\n","category":"type"},{"location":"plans/debug/#Manopt.DebugEntry","page":"Debug Output","title":"Manopt.DebugEntry","text":"DebugEntry <: DebugAction\n\nprint a certain fields entry during the iterates, where a format can be specified how to print the entry.\n\nAdditional fields\n\nfield: symbol the entry can be accessed with within AbstractManoptSolverState\n\nConstructor\n\nDebugEntry(f; prefix=\"$f:\", format = \"$prefix %s\", io=stdout)\n\n\n\n\n\n","category":"type"},{"location":"plans/debug/#Manopt.DebugEntryChange","page":"Debug Output","title":"Manopt.DebugEntryChange","text":"DebugEntryChange{T} <: DebugAction\n\nprint a certain entries change during iterates\n\nAdditional fields\n\nprint: function to print the result\nprefix: prefix to the print out\nformat: format to print (uses the prefix by default and scientific notation)\nfield: Symbol the field can be accessed with within AbstractManoptSolverState\ndistance: function (p,o,x1,x2) to compute the change/distance between two values of the entry\nstorage: a StoreStateAction to store the previous value of :f\n\nConstructors\n\nDebugEntryChange(f,d)\n\nKeyword arguments\n\nio=stdout: an IOStream used for the debug\nprefix=\"Change of $f\": the prefix\nstorage=StoreStateAction((f,)): a StoreStateAction\ninitial_value=NaN: an initial value for the change of o.field.\nformat=\"$prefix %e\": format to print the change\n\n\n\n\n\n","category":"type"},{"location":"plans/debug/#Manopt.DebugEvery","page":"Debug Output","title":"Manopt.DebugEvery","text":"DebugEvery <: DebugAction\n\nevaluate and print debug only every kth iteration. Otherwise no print is performed. Whether internal variables are updates is determined by always_update.\n\nThis method does not perform any print itself but relies on it's children's print.\n\nIt also sets the subsolvers active parameter, see |DebugWhenActive}(#ref). 
Here, the activation_offset can be used to specify which iteration the debug refers to: if this call happens before the iteration, the offset should be 0; if it is called after an iteration, it has to be set to 1. Since debug usually happens after the iteration, 1 is the default.\n\nConstructor\n\nDebugEvery(d::DebugAction, every=1, always_update=true, activation_offset=1)\n\n\n\n\n\n","category":"type"},{"location":"plans/debug/#Manopt.DebugFeasibility","page":"Debug Output","title":"Manopt.DebugFeasibility","text":"DebugFeasibility <: DebugAction\n\nDisplay information about the feasibility of the current iterate\n\nFields\n\natol: absolute tolerance for when either equality or inequality constraints are counted as violated\nformat: a vector of symbols and string formatting the output\nio: default stream to print the debug to.\n\nThe following symbols are filled with values\n\n:Feasible display true or false depending on whether the iterate is feasible\n:FeasibleEq display = or ≠ depending on whether the equality constraints are fulfilled or not\n:FeasibleInEq display ≤ or ≰ depending on whether the inequality constraints are fulfilled or not\n:NumEq display the number of infeasible equality constraints\n:NumEqNz display the number of infeasible equality constraints, only if it is nonzero\n:NumIneq display the number of infeasible inequality constraints\n:NumIneqNz display the number of infeasible inequality constraints, only if it is nonzero\n:TotalEq display the sum of how much the equality constraints are violated\n:TotalInEq display the sum of how much the inequality constraints are violated\n\nformat to print the output.\n\nConstructor\n\nDebugFeasibility( format=[\"feasible: \", :Feasible]; io::IO=stdout, atol=1e-13 )\n\n\n\n\n\n","category":"type"},{"location":"plans/debug/#Manopt.DebugGradientChange","page":"Debug Output","title":"Manopt.DebugGradientChange","text":"DebugGradientChange()\n\ndebug for the amount of change of the gradient (stored in get_gradient(o) of the 
AbstractManoptSolverState o) during the last iteration. See DebugEntryChange for the general case\n\nKeyword parameters\n\nstorage=StoreStateAction( (:Gradient,) ): storage of the action for previous data\nprefix=\"Last Change:\": prefix of the debug output (ignored if you set format)\nio=stdout: default stream to print the debug to.\nformat=\"$prefix %f\": format to print the output\n\n\n\n\n\n","category":"type"},{"location":"plans/debug/#Manopt.DebugGroup","page":"Debug Output","title":"Manopt.DebugGroup","text":"DebugGroup <: DebugAction\n\ngroup a set of DebugActions into one action, where the internal prints are removed by default and the resulting strings are concatenated\n\nConstructor\n\nDebugGroup(g)\n\nconstruct a group consisting of an Array of DebugActions g, that are evaluated en bloc; the method does not perform any print itself, but relies on the internal prints. It still concatenates the result and returns the complete string\n\n\n\n\n\n","category":"type"},{"location":"plans/debug/#Manopt.DebugIfEntry","page":"Debug Output","title":"Manopt.DebugIfEntry","text":"DebugIfEntry <: DebugAction\n\nIssue a warning, info, or error if a certain field does not pass the check.\n\nThe message is printed in this case. If it contains a @printf argument identifier, that one is filled with the value of the field. 
That way you can print the value in this case as well.\n\nFields\n\nio: an IO stream\ncheck: a function that takes the value of the field as input and returns a boolean\nfield: symbol the entry can be accessed with within AbstractManoptSolverState\nmsg: if the check fails, this message is displayed\ntype: symbol specifying the type of display, possible values :print, :warn, :info, :error, where :print prints to io.\n\nConstructor\n\nDebugIfEntry(field, check=(>(0)); type=:warn, message=\":$f is nonnegative\", io=stdout)\n\n\n\n\n\n","category":"type"},{"location":"plans/debug/#Manopt.DebugIterate","page":"Debug Output","title":"Manopt.DebugIterate","text":"DebugIterate <: DebugAction\n\ndebug for the current iterate (stored in get_iterate(o)).\n\nConstructor\n\nDebugIterate(; kwargs...)\n\nKeyword arguments\n\nio=stdout: default stream to print the debug to.\nformat=\"$prefix %s\": format how to print the current iterate\nlong=false: whether to have a long (\"current iterate:\") or a short (\"p:\") prefix by default\nprefix: (see long for default) set a prefix to be printed before the iterate\n\n\n\n\n\n","category":"type"},{"location":"plans/debug/#Manopt.DebugIteration","page":"Debug Output","title":"Manopt.DebugIteration","text":"DebugIteration <: DebugAction\n\nConstructor\n\nDebugIteration()\n\nKeyword parameters\n\nformat=\"# %-6d\": format to print the output\nio=stdout: default stream to print the debug to.\n\ndebug for the current iteration (prefixed with # by default)\n\n\n\n\n\n","category":"type"},{"location":"plans/debug/#Manopt.DebugMessages","page":"Debug Output","title":"Manopt.DebugMessages","text":"DebugMessages <: DebugAction\n\nAn AbstractManoptSolverState or one of its sub steps like a Stepsize might generate warnings throughout their computations. 
This debug can be used to :print them, display them as :info or :warning, or even raise an :error, depending on the message type.\n\nConstructor\n\nDebugMessages(mode=:Info, warn=:Once; io::IO=stdout)\n\nInitialize the messages debug to a certain mode. Available modes are\n\n:Error: issue the messages as an error and hence stop at any issue occurring\n:Info: issue the messages as an @info\n:Print: print messages to the stream io.\n:Warning: issue the messages as a warning\n\nThe warn level can be set to :Once to display only the first message, to :Always to report every message, or to :No to deactivate this, in which case this DebugAction is inactive. All other symbols are handled as if they were :Always.\n\n\n\n\n\n","category":"type"},{"location":"plans/debug/#Manopt.DebugSolverState","page":"Debug Output","title":"Manopt.DebugSolverState","text":"DebugSolverState <: AbstractManoptSolverState\n\nThe debug state appends debug to any state; they act as a decorator pattern. Internally a dictionary is kept that stores a DebugAction for several occasions using a Symbol as reference.\n\nThe original options can still be accessed using the get_state function.\n\nFields\n\noptions: the options that are extended by debug information\ndebugDictionary: a Dict{Symbol,DebugAction} to keep track of Debug for different actions\n\nConstructors\n\nDebugSolverState(o,dA)\n\nconstruct debug decorated options, where dA can be\n\na DebugAction, then it is stored within the dictionary at :Iteration\nan Array of DebugActions.\na Dict{Symbol,DebugAction}.\nan Array of Symbols, Strings and an Int for the DebugFactory\n\n\n\n\n\n","category":"type"},{"location":"plans/debug/#Manopt.DebugStoppingCriterion","page":"Debug Output","title":"Manopt.DebugStoppingCriterion","text":"DebugStoppingCriterion <: DebugAction\n\nprint the Reason provided by the stopping criterion. 
Usually this should be empty, unless the algorithm stops.\n\nFields\n\nprefix=\"\": format to print the output\nio=stdout: default stream to print the debug to.\n\nConstructor\n\nDebugStoppingCriterion(prefix = \"\"; io::IO=stdout)\n\n\n\n\n\n","category":"type"},{"location":"plans/debug/#Manopt.DebugTime","page":"Debug Output","title":"Manopt.DebugTime","text":"DebugTime()\n\nMeasure time and print the intervals. Using start=true you can start the timer on construction, for example to measure the overall runtime of an algorithm.\n\nThe measured time is rounded using the given time_accuracy and printed after canonicalization.\n\nKeyword parameters\n\nio=stdout: default stream to print the debug to.\nformat=\"$prefix %s\": format to print the output, where %s is the canonicalized time.\nmode=:cumulative: whether to display the total time or reset on every call using :iterative.\nprefix=\"Last Change:\": prefix of the debug output (ignored if you set format)\nstart=false: indicate whether to start the timer on creation or not. Otherwise it might only be started on first call.\ntime_accuracy=Millisecond(1): round the time to this period before printing the canonicalized time\n\n\n\n\n\n","category":"type"},{"location":"plans/debug/#Manopt.DebugWarnIfCostIncreases","page":"Debug Output","title":"Manopt.DebugWarnIfCostIncreases","text":"DebugWarnIfCostIncreases <: DebugAction\n\nprint a warning if the cost increases.\n\nNote that this provides an additional warning for gradient descent with its default constant step size.\n\nConstructor\n\nDebugWarnIfCostIncreases(warn=:Once; tol=1e-13)\n\nInitialize the warning to warning level (:Once) and introduce a tolerance for the test of 1e-13.\n\nThe warn level can be set to :Once to only warn the first time the cost increases, to :Always to report an increase every time it happens, and it can be set to :No to deactivate the warning, then this DebugAction is inactive. 
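As a sketch, such a warning action can also be passed directly in the debug= vector (this assumes Manifolds.jl is available; the manifold, cost, and gradient below are illustrative choices for this example):

```julia
using Manopt, Manifolds

M = Sphere(2)
q = [0.0, 0.0, 1.0]
f(M, p) = distance(M, p, q)^2
grad_f(M, p) = -2 * log(M, p, q)  # Riemannian gradient of the squared distance

# Warn every time the cost increases during the run
gradient_descent(M, f, grad_f, [1.0, 0.0, 0.0];
    debug = [DebugWarnIfCostIncreases(:Always)],
)
```

Passing a constructed DebugAction instead of a symbol is useful here, since the warn level :Always is not reachable through the symbol shortcuts alone.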
All other symbols are handled as if they were :Always.\n\n\n\n\n\n","category":"type"},{"location":"plans/debug/#Manopt.DebugWarnIfCostNotFinite","page":"Debug Output","title":"Manopt.DebugWarnIfCostNotFinite","text":"DebugWarnIfCostNotFinite <: DebugAction\n\nA debug to see when a field (value or array) within the AbstractManoptSolverState is or contains values that are not finite, for example Inf or NaN.\n\nConstructor\n\nDebugWarnIfCostNotFinite(field::Symbol, warn=:Once)\n\nInitialize the warning to warn :Once.\n\nThis can be set to :Once to only warn the first time the cost is not finite. It can also be set to :No to deactivate the warning, but this makes this Action also useless. All other symbols are handled as if they were :Always.\n\n\n\n\n\n","category":"type"},{"location":"plans/debug/#Manopt.DebugWarnIfFieldNotFinite","page":"Debug Output","title":"Manopt.DebugWarnIfFieldNotFinite","text":"DebugWarnIfFieldNotFinite <: DebugAction\n\nA debug to see when a field from the options is not finite, for example Inf or NaN\n\nConstructor\n\nDebugWarnIfFieldNotFinite(field::Symbol, warn=:Once)\n\nInitialize the warning to warn :Once.\n\nThis can be set to :Once to only warn the first time the field is not finite. It can also be set to :No to deactivate the warning, but this makes this Action also useless. 
All other symbols are handled as if they were :Always.\n\nExample\n\nDebugWarnIfFieldNotFinite(:Gradient)\n\nCreates a DebugAction to track whether the gradient does not get NaN or Inf.\n\n\n\n\n\n","category":"type"},{"location":"plans/debug/#Manopt.DebugWarnIfGradientNormTooLarge","page":"Debug Output","title":"Manopt.DebugWarnIfGradientNormTooLarge","text":"DebugWarnIfGradientNormTooLarge{T} <: DebugAction\n\nA debug to warn when an evaluated gradient at the current iterate is larger than (a factor times) the maximal (recommended) stepsize at the current iterate.\n\nConstructor\n\nDebugWarnIfGradientNormTooLarge(factor::T=1.0, warn=:Once)\n\nInitialize the warning to warn :Once.\n\nThis can be set to :Once to only warn the first time the norm is too large. It can also be set to :No to deactivate the warning, but this makes this Action also useless. All other symbols are handled as if they were :Always.\n\nExample\n\nDebugWarnIfGradientNormTooLarge(2.0)\n\nCreates a DebugAction to warn whenever the norm of the gradient exceeds twice the maximal (recommended) stepsize.\n\n\n\n\n\n","category":"type"},{"location":"plans/debug/#Manopt.DebugWhenActive","page":"Debug Output","title":"Manopt.DebugWhenActive","text":"DebugWhenActive <: DebugAction\n\nevaluate and print debug only if the active boolean is set. 
This can be set from outside and is for example triggered by DebugEvery on debugs on the subsolver.\n\nThis method does not perform any print itself but relies on its children's prints.\n\nFor now, the main interaction is with DebugEvery which might activate or deactivate this debug.\n\nFields\n\nactive: a boolean that can be (de)activated from outside to turn debug on/off\nalways_update: whether or not to call the stored debugs with iteration <= 0 even in inactive state\n\nConstructor\n\nDebugWhenActive(d::DebugAction, active=true, always_update=true)\n\n\n\n\n\n","category":"type"},{"location":"plans/debug/#Manopt.DebugActionFactory-Tuple{String}","page":"Debug Output","title":"Manopt.DebugActionFactory","text":"DebugActionFactory(s)\n\ncreate a DebugAction where\n\na String yields the corresponding divider\na DebugAction is passed through\na Symbol creates a DebugEntry of that symbol, with the exceptions of :Change, :Iterate, :Iteration, and :Cost.\na Tuple{Symbol,String} creates a DebugEntry of that symbol where the String specifies the format.\n\n\n\n\n\n","category":"method"},{"location":"plans/debug/#Manopt.DebugActionFactory-Tuple{Symbol}","page":"Debug Output","title":"Manopt.DebugActionFactory","text":"DebugActionFactory(s::Symbol)\n\nConvert certain Symbols in the debug=[ ... ] vector to DebugActions. Currently the following ones are done. 
Note that the Shortcut symbols should all start with a capital letter.\n\n:Cost creates a DebugCost\n:Change creates a DebugChange\n:Gradient creates a DebugGradient\n:GradientChange creates a DebugGradientChange\n:GradientNorm creates a DebugGradientNorm\n:Iterate creates a DebugIterate\n:Iteration creates a DebugIteration\n:IterativeTime creates a DebugTime(:Iterative)\n:Stepsize creates a DebugStepsize\n:Stop creates a DebugStoppingCriterion\n:WarnCost creates a DebugWarnIfCostNotFinite\n:WarnGradient creates a DebugWarnIfFieldNotFinite for the :Gradient.\n:WarnBundle creates a DebugWarnIfLagrangeMultiplierIncreases\n:Time creates a DebugTime\n:WarningMessages creates a DebugMessages(:Warning)\n:InfoMessages creates a DebugMessages(:Info)\n:ErrorMessages creates a DebugMessages(:Error)\n:Messages creates a DebugMessages() (the same as :InfoMessages)\n\nany other symbol creates a DebugEntry(s) to print the entry (o.:s) from the options.\n\n\n\n\n\n","category":"method"},{"location":"plans/debug/#Manopt.DebugActionFactory-Tuple{Tuple{Symbol, Any}}","page":"Debug Output","title":"Manopt.DebugActionFactory","text":"DebugActionFactory(t::Tuple{Symbol,String})\n\nConvert certain Symbols in the debug=[ ... ] vector to DebugActions. Currently the following ones are done, where the string in t[2] is passed as the format to the corresponding debug. 
Note that the Shortcut symbols t[1] should all start with a capital letter.\n\n:Change creates a DebugChange\n:Cost creates a DebugCost\n:Gradient creates a DebugGradient\n:GradientChange creates a DebugGradientChange\n:GradientNorm creates a DebugGradientNorm\n:Iterate creates a DebugIterate\n:Iteration creates a DebugIteration\n:Stepsize creates a DebugStepsize\n:Stop creates a DebugStoppingCriterion\n:Time creates a DebugTime\n:IterativeTime creates a DebugTime(:Iterative)\n\nany other symbol creates a DebugEntry(s) to print the entry (o.:s) from the options.\n\n\n\n\n\n","category":"method"},{"location":"plans/debug/#Manopt.DebugFactory-Tuple{Vector}","page":"Debug Output","title":"Manopt.DebugFactory","text":"DebugFactory(a::Vector)\n\nGenerate a dictionary of DebugActions.\n\nFirst all Symbols, Strings, DebugActions and numbers are collected, excluding :Stop and :WhenActive. This collected vector is added to the :Iteration => [...] pair. :Stop is added as :StoppingCriterion to the :Stop => [...] pair. If necessary, these pairs are created.\n\nFor each Pair of a Symbol and a Vector, the DebugGroupFactory is called for the Vector and the result is added to the debug dictionary's entry with said symbol. This is wrapped into the DebugWhenActive, when the :WhenActive symbol is present.\n\nReturn value\n\nA dictionary for the different entry points where debug can happen, each containing a DebugAction to call.\n\nNote that upon the initialisation all dictionaries but the :StartAlgorithm one are called with an i=0 for reset.\n\nExamples\n\nProviding a simple vector of symbols, numbers and strings like\n[:Iterate, \" | \", :Cost, :Stop, 10]\nAdds a group to :Iteration of three actions (DebugIteration, DebugDivider(\" | \"), and [DebugCost](@ref)) as a [DebugGroup](@ref) inside a [DebugEvery](@ref) to only be executed every 10th iteration. 
It also adds the DebugStoppingCriterion to the :EndAlgorithm entry of the dictionary.\nThe same can also be written a bit more precisely as\nDebugFactory([:Iteration => [:Iterate, \" | \", :Cost, 10], :Stop])\nWe can even make the stopping criterion concrete and pass Actions directly, for example\nDebugFactory([:Iteration => [:Iterate, \" | \", DebugCost(), 10], :Stop => [:Stop]])\n\n\n\n\n\n","category":"method"},{"location":"plans/debug/#Manopt.DebugGroupFactory-Tuple{Vector}","page":"Debug Output","title":"Manopt.DebugGroupFactory","text":"DebugGroupFactory(a::Vector)\n\nGenerate a DebugGroup of DebugActions. The following rules are used\n\nAny Symbol is passed to DebugActionFactory\nAny (Symbol, String) generates similar actions as in 1., but the string is used for format=, see DebugActionFactory\nAny String is passed to DebugActionFactory\nAny DebugAction is included as is.\n\nIf this results in more than one DebugAction a DebugGroup of these is built.\n\nIf any integers are present, the last of these is used to wrap the group in a DebugEvery(k).\n\nIf :WhenActive is present, the resulting Action is wrapped in DebugWhenActive, making it deactivatable by its parent solver.\n\n\n\n\n\n","category":"method"},{"location":"plans/debug/#Manopt.reset!-Tuple{DebugTime}","page":"Debug Output","title":"Manopt.reset!","text":"reset!(d::DebugTime)\n\nreset the internal time of a DebugTime, that is, start from now again.\n\n\n\n\n\n","category":"method"},{"location":"plans/debug/#Manopt.set_parameter!-Tuple{DebugSolverState, Val{:Debug}, Vararg{Any}}","page":"Debug Output","title":"Manopt.set_parameter!","text":"set_parameter!(ams::DebugSolverState, ::Val{:Debug}, args...)\n\nSet certain values specified by args... 
into the elements of the debugDictionary\n\n\n\n\n\n","category":"method"},{"location":"plans/debug/#Manopt.stop!-Tuple{DebugTime}","page":"Debug Output","title":"Manopt.stop!","text":"stop!(d::DebugTime)\n\nstop and reset the internal time of a DebugTime, that is, set the time to 0 (undefined)\n\n\n\n\n\n","category":"method"},{"location":"plans/debug/#Technical-details","page":"Debug Output","title":"Technical details","text":"","category":"section"},{"location":"plans/debug/","page":"Debug Output","title":"Debug Output","text":"The decorator to print debug during the iterations can be activated by decorating the state of a solver and implementing your own DebugActions. For example printing a gradient from the GradientDescentState is automatically available, as explained in the gradient_descent solver.","category":"page"},{"location":"plans/debug/","page":"Debug Output","title":"Debug Output","text":"initialize_solver!(amp::AbstractManoptProblem, dss::DebugSolverState)\nstep_solver!(amp::AbstractManoptProblem, dss::DebugSolverState, k)\nstop_solver!(amp::AbstractManoptProblem, dss::DebugSolverState, k::Int)","category":"page"},{"location":"plans/debug/#Manopt.initialize_solver!-Tuple{AbstractManoptProblem, DebugSolverState}","page":"Debug Output","title":"Manopt.initialize_solver!","text":"initialize_solver!(amp::AbstractManoptProblem, dss::DebugSolverState)\n\nExtend the initialization of the solver by a hook to run the DebugAction that was added to the :Start entry of the debug lists. 
All others are called (with iteration number 0) to trigger possible resets.\n\n\n\n\n\n","category":"method"},{"location":"plans/debug/#Manopt.step_solver!-Tuple{AbstractManoptProblem, DebugSolverState, Any}","page":"Debug Output","title":"Manopt.step_solver!","text":"step_solver!(amp::AbstractManoptProblem, dss::DebugSolverState, k)\n\nExtend the kth step of the solver by a hook to run debug prints that were added to the :BeforeIteration and :Iteration entries of the debug lists.\n\n\n\n\n\n","category":"method"},{"location":"plans/debug/#Manopt.stop_solver!-Tuple{AbstractManoptProblem, DebugSolverState, Int64}","page":"Debug Output","title":"Manopt.stop_solver!","text":"stop_solver!(amp::AbstractManoptProblem, dss::DebugSolverState, k)\n\nExtend the stop_solver! check, whether to stop the solver, by a hook to run the debug actions that were added to the :Stop entry of the debug lists.\n\n\n\n\n\n","category":"method"},{"location":"plans/stepsize/#Stepsize","page":"Stepsize","title":"Stepsize and line search","text":"","category":"section"},{"location":"plans/stepsize/","page":"Stepsize","title":"Stepsize","text":"CurrentModule = Manopt","category":"page"},{"location":"plans/stepsize/","page":"Stepsize","title":"Stepsize","text":"Most iterative algorithms determine a direction along which the algorithm shall proceed and determine a step size to find the next iterate. 
How advanced the step size computation can be implemented depends (among other factors) on the properties the corresponding problem provides.","category":"page"},{"location":"plans/stepsize/","page":"Stepsize","title":"Stepsize","text":"Within Manopt.jl, the step size determination is implemented as a functor which is a subtype of Stepsize based on","category":"page"},{"location":"plans/stepsize/","page":"Stepsize","title":"Stepsize","text":"Stepsize","category":"page"},{"location":"plans/stepsize/#Manopt.Stepsize","page":"Stepsize","title":"Manopt.Stepsize","text":"Stepsize\n\nAn abstract type for the functors representing step sizes. These are callable structures. The naming scheme is TypeOfStepSize, for example ConstantStepsize.\n\nEvery Stepsize has to provide a constructor and its function has to have the interface (p,o,i) where an AbstractManoptProblem as well as an AbstractManoptSolverState and the current number of iterations are the arguments, and it returns a number, namely the stepsize to use.\n\nFor most it is advisable to employ a ManifoldDefaultsFactory. 
Then the function creating the factory should either be called TypeOf or, if that is confusing or too generic, TypeOfLength.\n\nSee also\n\nLinesearch\n\n\n\n\n\n","category":"type"},{"location":"plans/stepsize/","page":"Stepsize","title":"Stepsize","text":"Usually, a constructor should take the manifold M as its first argument, for consistency, to allow general step size functors to be set up based on default values that might depend on the manifold currently under consideration.","category":"page"},{"location":"plans/stepsize/","page":"Stepsize","title":"Stepsize","text":"Currently, the following step sizes are available","category":"page"},{"location":"plans/stepsize/","page":"Stepsize","title":"Stepsize","text":"AdaptiveWNGradient\nArmijoLinesearch\nConstantLength\nDecreasingLength\nNonmonotoneLinesearch\nPolyak\nWolfePowellLinesearch\nWolfePowellBinaryLinesearch","category":"page"},{"location":"plans/stepsize/#Manopt.AdaptiveWNGradient","page":"Stepsize","title":"Manopt.AdaptiveWNGradient","text":"AdaptiveWNGradient(; kwargs...)\nAdaptiveWNGradient(M::AbstractManifold; kwargs...)\n\nA stepsize based on the adaptive gradient method introduced by [GS23].\n\nGiven a positive threshold hatc ℕ, a minimal bound b_textmin 0, an initial b_0 b_textmin, and a gradient reduction factor threshold α 01).\n\nSet c_0=0 and use ω_0 = lVert operatornamegrad f(p_0) rVert_p_0.\n\nFor the first iterate use the initial step size s_0 = frac1b_0.\n\nThen, given the last gradient X_k-1 = operatornamegrad f(x_k-1), and a previous ω_k-1, the values (b_k ω_k c_k) are computed using X_k = operatornamegrad f(p_k) and the following cases\n\nIf lVert X_k rVert_p_k ≤ αω_k-1, then let hatb_k-1 b_textminb_k-1 and set\n\n(b_k ω_k c_k) = begincases\n bigl(hatb_k-1 lVert X_k rVert_p_k 0 bigr) text if c_k-1+1 = hatc\n bigl( b_k-1 + fraclVert X_k rVert_p_k^2b_k-1 ω_k-1 c_k-1+1 Bigr) text if c_k-1+1 < hatc\nendcases\n\nIf lVert X_k rVert_p_k > αω_k-1, then set\n\n(b_k ω_k c_k) = Bigl( b_k-1 + fraclVert X_k 
rVert_p_k^2b_k-1 ω_k-1 0 Bigr)\n\nand return the step size s_k = frac1b_k.\n\nNote that for α=0 this is the Riemannian variant of WNGRad.\n\nKeyword arguments\n\nadaptive=true: switches the gradient_reduction α (if true) to 0.\nalternate_bound = (bk, hat_c) -> min(gradient_bound == 0 ? 1.0 : gradient_bound, max(minimal_bound, bk / (3 * hat_c))): how to determine hat_bk as a function of (bmin, bk, hat_c) -> hat_bk\ncount_threshold=4: an Integer for hatc\ngradient_reduction::R=adaptive ? 0.9 : 0.0: the gradient reduction factor threshold α 01)\ngradient_bound=norm(M, p, X): the bound b_k.\nminimal_bound=1e-4: the value b_textmin\np=rand(M): a point on the manifold mathcal M, only used to define the gradient_bound\nX=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal M, only used to define the gradient_bound\n\n\n\n\n\n","category":"function"},{"location":"plans/stepsize/#Manopt.ArmijoLinesearch","page":"Stepsize","title":"Manopt.ArmijoLinesearch","text":"ArmijoLinesearch(; kwargs...)\nArmijoLinesearch(M::AbstractManifold; kwargs...)\n\nSpecify a step size that performs an Armijo line search. Given a Function fmathcal Mℝ and its Riemannian Gradient operatornamegradf mathcal MTmathcal M, the current point pmathcal M and a search direction XT_pmathcal M.\n\nThen the step size s is found by reducing the initial step size s until\n\nf(operatornameretr_p(sX)) f(p) - τs X operatornamegradf(p) _p\n\nis fulfilled for a sufficient decrease value τ (01).\n\nTo be a bit more optimistic, if s already fulfils this, a first search is done, increasing the given s until this condition does not hold for the first time.\n\nOverall, we look for a step size that provides enough decrease, see [Bou23, p. 
58] for more information.\n\nKeyword arguments\n\nadditional_decrease_condition=(M, p) -> true: specify an additional criterion that has to be met to accept a step size in the decreasing loop\nadditional_increase_condition::IF=(M, p) -> true: specify an additional criterion that has to be met to accept a step size in the (initial) increase loop\ncandidate_point=allocate_result(M, rand): specify a point to be used as memory for the candidate points.\ncontraction_factor=0.95: how to update s in the decrease step\ninitial_stepsize=1.0: specify an initial step size\ninitial_guess=armijo_initial_guess: Compute the initial step size of a line search based on this function. The function required is (p,s,k,l) -> α and computes the initial step size α based on an AbstractManoptProblem p, AbstractManoptSolverState s, the current iteration k and the last step size l.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstop_when_stepsize_less=0.0: a safeguard, stop when the decreasing step is below this (nonnegative) bound.\nstop_when_stepsize_exceeds=max_stepsize(M): a safeguard to not choose a too long step size when initially increasing\nstop_increasing_at_step=100: stop the initial increasing loop after this amount of steps. Set to 0 to never increase in the beginning\nstop_decreasing_at_step=1000: maximal number of Armijo decreases / tests to perform\nsufficient_decrease=0.1: the sufficient decrease parameter τ\n\nFor the stop safeguards you can pass :Messages to a debug= to see @info messages when these happen.\n\ninfo: Info\nThis function generates a ManifoldDefaultsFactory for ArmijoLinesearchStepsize. 
For default values, that depend on the manifold, this factory postpones the construction until the manifold from for example a corresponding AbstractManoptSolverState is available.\n\n\n\n\n\n","category":"function"},{"location":"plans/stepsize/#Manopt.ConstantLength","page":"Stepsize","title":"Manopt.ConstantLength","text":"ConstantLength(s; kwargs...)\nConstantLength(M::AbstractManifold, s; kwargs...)\n\nSpecify a Stepsize that is constant.\n\nInput\n\nM (optional)\n\ns=min(injectivity_radius(M)/2, 1.0): the length to use.\n\nKeyword argument\n\ntype::Symbol=relative: specify the type of constant step size.\n:relative – scale the gradient tangent vector X to s*X\n:absolute – scale the gradient to an absolute step length s, that is fracslVert X rVert_X\n\ninfo: Info\nThis function generates a ManifoldDefaultsFactory for ConstantStepsize. For default values, that depend on the manifold, this factory postpones the construction until the manifold from for example a corresponding AbstractManoptSolverState is available.\n\n\n\n\n\n","category":"function"},{"location":"plans/stepsize/#Manopt.DecreasingLength","page":"Stepsize","title":"Manopt.DecreasingLength","text":"DecreasingLength(; kwargs...)\nDecreasingLength(M::AbstractManifold; kwargs...)\n\nSpecify a Stepsize that is decreasing as s_k = frac(l - ak)f^k(k+s)^e with the following\n\nKeyword arguments\n\nexponent=1.0: the exponent e in the denominator\nfactor=1.0: the factor f in the numerator\nlength=min(injectivity_radius(M)/2, 1.0): the initial step size l.\nsubtrahend=0.0: a value a that is subtracted every iteration\nshift=0.0: shift the denominator iterator k by s.\ntype::Symbol=relative: specify the type of step size.\n:relative – scale the gradient tangent vector X to s_k*X\n:absolute – scale the gradient to an absolute step length s_k, that is fracs_klVert X rVert_X\n\ninfo: Info\nThis function generates a ManifoldDefaultsFactory for DecreasingStepsize. 
For default values, that depend on the manifold, this factory postpones the construction until the manifold from for example a corresponding AbstractManoptSolverState is available.\n\n\n\n\n\n","category":"function"},{"location":"plans/stepsize/#Manopt.NonmonotoneLinesearch","page":"Stepsize","title":"Manopt.NonmonotoneLinesearch","text":"NonmonotoneLinesearch(; kwargs...)\nNonmonotoneLinesearch(M::AbstractManifold; kwargs...)\n\nA functor representing a nonmonotone line search using the Barzilai-Borwein step size [IP17].\n\nThis method first computes\n\ny_k = operatornamegradf(p_k) - mathcal T_p_kp_k-1operatornamegradf(p_k-1)\n\nand\n\ns_k = - α_k-1 mathcal T_p_kp_k-1operatornamegradf(p_k-1)\n\nwhere α_k-1 is the step size computed in the last iteration and mathcal T_ is a vector transport. Then the Barzilai-Borwein step size is\n\nα_k^textBB = begincases\n min(α_textmax max(α_textmin τ_k)) textif s_k y_k_p_k 0\n α_textmax textelse\nendcases\n\nwhere\n\nτ_k = fracs_k s_k_p_ks_k y_k_p_k\n\nif the direct strategy is chosen, or\n\nτ_k = fracs_k y_k_p_ky_k y_k_p_k\n\nin case of the inverse strategy or an alternation between the two in case of the alternating strategy. Then find the smallest h = 0 1 2 such that\n\nf(operatornameretr_p_k(- σ^h α_k^textBB operatornamegradf(p_k))) \nmax_1 j max(k+1m) f(p_k+1-j) - γ σ^h α_k^textBB operatornamegradF(p_k) operatornamegradF(p_k)_p_k\n\nwhere σ (01) is a step length reduction factor, m is the number of iterations after which the function value has to be lower than the current one and γ (01) is the sufficient decrease parameter. 
Finally the step size is computed as\n\nα_k = σ^h α_k^textBB\n\nKeyword arguments\n\np=allocate_result(M, rand): a point on the manifold mathcal M to store an interim result\ninitial_stepsize=1.0: the step size to start the search with\nmemory_size=10: number of iterations after which the cost value needs to be lower than the current one\nbb_min_stepsize=1e-3: lower bound for the Barzilai-Borwein step size greater than zero\nbb_max_stepsize=1e3: upper bound for the Barzilai-Borwein step size greater than bb_min_stepsize\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstrategy=direct: defines if the new step size is computed using the :direct, :indirect or :alternating strategy\nstorage=StoreStateAction(M; store_fields=[:Iterate, :Gradient]): increase efficiency by using a StoreStateAction for :Iterate and :Gradient.\nstepsize_reduction=0.5: step size reduction factor contained in the interval (01)\nsufficient_decrease=1e-4: sufficient decrease parameter contained in the interval (01)\nstop_when_stepsize_less=0.0: smallest stepsize when to stop (the last one before is taken)\nstop_when_stepsize_exceeds=max_stepsize(M, p): largest stepsize when to stop to avoid leaving the injectivity radius\nstop_increasing_at_step=100: last step to increase the stepsize (phase 1),\nstop_decreasing_at_step=1000: last step size to decrease the stepsize (phase 2),\n\n\n\n\n\n","category":"function"},{"location":"plans/stepsize/#Manopt.Polyak","page":"Stepsize","title":"Manopt.Polyak","text":"Polyak(; kwargs...)\nPolyak(M::AbstractManifold; kwargs...)\n\nCompute a step size according to a method proposed by Polyak, cf. the Dynamic step size discussed in Section 3.2 of [Ber15]. 
This has been generalised here to both the Riemannian case and to approximate the minimum cost value.\n\nLet f_textbest be the best cost value seen until now during some iterative optimisation algorithm and let γ_k be a sequence of numbers that is square summable, but not summable.\n\nThen the step size computed here reads\n\ns_k = fracf(p^(k)) - f_textbest + γ_klVert f(p^(k)) rVert_\n\nwhere f denotes a nonzero-subgradient of f at the current iterate p^(k).\n\nConstructor\n\nPolyak(; γ = k -> 1/k, initial_cost_estimate=0.0)\n\ninitialize the Polyak stepsize to a certain sequence and an initial estimate of f_textbest.\n\ninfo: Info\nThis function generates a ManifoldDefaultsFactory for PolyakStepsize. For default values, that depend on the manifold, this factory postpones the construction until the manifold from for example a corresponding AbstractManoptSolverState is available.\n\n\n\n\n\n","category":"function"},{"location":"plans/stepsize/#Manopt.WolfePowellLinesearch","page":"Stepsize","title":"Manopt.WolfePowellLinesearch","text":"WolfePowellLinesearch(; kwargs...)\nWolfePowellLinesearch(M::AbstractManifold; kwargs...)\n\nPerform a linesearch to fulfil both the Armijo-Goldstein conditions\n\nfbigl( operatornameretr_p(αX) bigr) f(p) + c_1 α_k operatornamegrad f(p) X_p\n\nas well as the Wolfe conditions\n\nfracmathrmdmathrmdt fbigl(operatornameretr_p(tX)bigr)\nBigvert_t=α\n c_2 fracmathrmdmathrmdt fbigl(operatornameretr_p(tX)bigr)Bigvert_t=0\n\nfor some given sufficient decrease coefficient c_1 and some sufficient curvature condition coefficient c_2.\n\nThis is adopted from [NW06, Section 3.1].\n\nKeyword arguments\n\nsufficient_decrease=10^(-4)\nsufficient_curvature=0.999\np::P: a point on the manifold mathcal M as temporary storage for candidates\nX::T: a tangent vector at the point p on the manifold mathcal M as type of memory allocated for the candidates direction and tangent\nmax_stepsize=max_stepsize(M, p): largest stepsize allowed 
here.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstop_when_stepsize_less=0.0: smallest stepsize when to stop (the last one before is taken)\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\n\n\n\n\n","category":"function"},{"location":"plans/stepsize/#Manopt.WolfePowellBinaryLinesearch","page":"Stepsize","title":"Manopt.WolfePowellBinaryLinesearch","text":"WolfePowellBinaryLinesearch(; kwargs...)\nWolfePowellBinaryLinesearch(M::AbstractManifold; kwargs...)\n\nPerform a linesearch to fulfil both the Armijo-Goldstein conditions for some given sufficient decrease coefficient c_1 and some sufficient curvature condition coefficient c_2. Compared to WolfePowellLinesearch, which tries a simpler method, this linesearch performs the following algorithm\n\nWith\n\nA(t) = f(p_+) c_1 t operatornamegradf(p) X_x\nquadtext and quad\nW(t) = operatornamegradf(x_+) mathcal T_p_+pX_p_+ c_2 X operatornamegradf(x)_x\n\nwhere p_+ = operatornameretr_p(tX) is the current trial point, and mathcal T_ denotes a vector transport. 
Then the following Algorithm is performed similarly to Algorithm 7 from [Hua14]\n\nset α=0, β=∞ and t=1.\nWhile either A(t) does not hold or W(t) does not hold do steps 3-5.\nIf A(t) fails, set β=t.\nIf A(t) holds but W(t) fails, set α=t.\nIf β<∞ set t=fracα+β2, otherwise set t=2α.\n\nKeyword arguments\n\nsufficient_decrease=10^(-4)\nsufficient_curvature=0.999\nmax_stepsize=max_stepsize(M, p): largest stepsize allowed here.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstop_when_stepsize_less=0.0: smallest stepsize when to stop (the last one before is taken)\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\n\n\n\n\n","category":"function"},{"location":"plans/stepsize/","page":"Stepsize","title":"Stepsize","text":"Some step sizes use the max_stepsize function as a rough upper estimate for the trust region size. It is by default equal to the injectivity radius of the exponential map but in some cases a different value is used. For the FixedRankMatrices manifold an estimate from Manopt is used. Tangent bundle with the Sasaki metric has 0 injectivity radius, so the maximum stepsize of the underlying manifold is used instead. Hyperrectangle also has 0 injectivity radius and an estimate based on the maximum of dimensions along each index is used instead. For manifolds with corners, however, a line search capable of handling break points along the projected search direction should be used, and such algorithms do not call max_stepsize.","category":"page"},{"location":"plans/stepsize/","page":"Stepsize","title":"Stepsize","text":"Internally these step size functions create a ManifoldDefaultsFactory. 
Internally these use","category":"page"},{"location":"plans/stepsize/","page":"Stepsize","title":"Stepsize","text":"Modules = [Manopt]\nPages = [\"plans/stepsize.jl\"]\nPrivate = true\nOrder = [:function, :type]\nFilter = t -> !(t in [Stepsize, AdaptiveWNGradient, ArmijoLinesearch, ConstantLength, DecreasingLength, NonmonotoneLinesearch, Polyak, WolfePowellLinesearch, WolfePowellBinaryLinesearch ])","category":"page"},{"location":"plans/stepsize/#Manopt.armijo_initial_guess-Tuple{AbstractManoptProblem, AbstractManoptSolverState, Int64, Real}","page":"Stepsize","title":"Manopt.armijo_initial_guess","text":"armijo_initial_guess(mp::AbstractManoptProblem, s::AbstractManoptSolverState, k, l)\n\nInput\n\nmp: the AbstractManoptProblem we are aiming to minimize\ns: the AbstractManoptSolverState for the current solver\nk: the current iteration\nl: the last step size computed in the previous iteration.\n\nReturn an initial guess for the ArmijoLinesearchStepsize.\n\nThe default provided is based on the max_stepsize(M), which we denote by m. Let further X be the current descent direction and n=lVert X rVert_p its norm. 
Then this (default) initial guess returns\n\nl if m is not finite\nmin(l fracmn) otherwise\n\nThis ensures that the initial guess does not yield too large (initial) steps.\n\n\n\n\n\n","category":"method"},{"location":"plans/stepsize/#Manopt.default_stepsize-Tuple{AbstractManifold, Type{<:AbstractManoptSolverState}}","page":"Stepsize","title":"Manopt.default_stepsize","text":"default_stepsize(M::AbstractManifold, ams::AbstractManoptSolverState)\n\nReturns the default Stepsize functor used when running the solver specified by the AbstractManoptSolverState ams with an objective on the AbstractManifold M.\n\n\n\n\n\n","category":"method"},{"location":"plans/stepsize/#Manopt.get_last_stepsize-Tuple{AbstractManoptProblem, AbstractManoptSolverState, Vararg{Any}}","page":"Stepsize","title":"Manopt.get_last_stepsize","text":"get_last_stepsize(amp::AbstractManoptProblem, ams::AbstractManoptSolverState, vars...)\n\nreturn the last computed stepsize stored within AbstractManoptSolverState ams when solving the AbstractManoptProblem amp.\n\nThis method takes into account that ams might be decorated. In case this returns NaN, a concrete call to the stored stepsize is performed. For this, usually, the first of the vars... should be the current iterate.\n\n\n\n\n\n","category":"method"},{"location":"plans/stepsize/#Manopt.get_last_stepsize-Tuple{Stepsize, Vararg{Any}}","page":"Stepsize","title":"Manopt.get_last_stepsize","text":"get_last_stepsize(::Stepsize, vars...)\n\nreturn the last computed stepsize from within the stepsize. If no last step size is stored, this returns NaN.\n\n\n\n\n\n","category":"method"},{"location":"plans/stepsize/#Manopt.get_stepsize-Tuple{AbstractManoptProblem, AbstractManoptSolverState, Vararg{Any}}","page":"Stepsize","title":"Manopt.get_stepsize","text":"get_stepsize(amp::AbstractManoptProblem, ams::AbstractManoptSolverState, vars...)\n\nreturn the stepsize stored within AbstractManoptSolverState ams when solving the AbstractManoptProblem amp. 
This method also works for decorated options and the Stepsize function within the options, by default stored in ams.stepsize.\n\n\n\n\n\n","category":"method"},{"location":"plans/stepsize/#Manopt.linesearch_backtrack!-Union{Tuple{T}, Tuple{TF}, Tuple{AbstractManifold, Any, TF, Any, T, Any, Any, Any}, Tuple{AbstractManifold, Any, TF, Any, T, Any, Any, Any, T}, Tuple{AbstractManifold, Any, TF, Any, T, Any, Any, Any, T, Any}} where {TF, T}","page":"Stepsize","title":"Manopt.linesearch_backtrack!","text":"(s, msg) = linesearch_backtrack!(M, q, F, p, X, s, decrease, contract, η = -X, f0 = f(p))\n\nPerform a line search backtrack in-place of q. For all details and options, see linesearch_backtrack\n\n\n\n\n\n","category":"method"},{"location":"plans/stepsize/#Manopt.linesearch_backtrack-Union{Tuple{T}, Tuple{AbstractManifold, Any, Any, T, Any, Any, Any}, Tuple{AbstractManifold, Any, Any, T, Any, Any, Any, T}, Tuple{AbstractManifold, Any, Any, T, Any, Any, Any, T, Any}} where T","page":"Stepsize","title":"Manopt.linesearch_backtrack","text":"(s, msg) = linesearch_backtrack(M, F, p, X, s, decrease, contract, η = -X, f0 = f(p); kwargs...)\n(s, msg) = linesearch_backtrack!(M, q, F, p, X, s, decrease, contract, η = -X, f0 = f(p); kwargs...)\n\nperform a line search\n\non manifold M\nfor the cost function f,\nat the current point p\nwith current gradient provided in X\nan initial stepsize s\na sufficient decrease\na contraction factor σ\na search direction η = -X\nan offset, f_0 = F(x)\n\nKeyword arguments\n\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstop_when_stepsize_less=0.0: to avoid numerical underflow\nstop_when_stepsize_exceeds=max_stepsize(M, p) / norm(M, p, η): to avoid leaving the injectivity radius on a manifold\nstop_increasing_at_step=100: stop the initial increase of step size after these many steps\nstop_decreasing_at_step=1000: stop the decreasing search after these many 
steps\nadditional_increase_condition=(M,p) -> true: impose an additional condition for an increased step size to be accepted\nadditional_decrease_condition=(M,p) -> true: impose an additional condition for a decreased step size to be accepted\n\nThese keywords are used as safeguards, where only the max stepsize is a very manifold specific one.\n\nReturn value\n\nA stepsize s and a message msg (in case any of the 4 criteria hit)\n\n\n\n\n\n","category":"method"},{"location":"plans/stepsize/#Manopt.max_stepsize-Tuple{AbstractManifold, Any}","page":"Stepsize","title":"Manopt.max_stepsize","text":"max_stepsize(M::AbstractManifold, p)\nmax_stepsize(M::AbstractManifold)\n\nGet the maximum stepsize (at point p) on manifold M. It should be used to limit the distance an algorithm is trying to move in a single step.\n\nBy default, this returns injectivity_radius(M), if this exists. If this is not available on the manifold, the method returns Inf.\n\n\n\n\n\n","category":"method"},{"location":"plans/stepsize/#Manopt.AdaptiveWNGradientStepsize","page":"Stepsize","title":"Manopt.AdaptiveWNGradientStepsize","text":"AdaptiveWNGradientStepsize{I<:Integer,R<:Real,F<:Function} <: Stepsize\n\nA functor (problem, state, k, X) -> s for an adaptive gradient method introduced by [GS23]. See AdaptiveWNGradient for the mathematical details.\n\nFields\n\ncount_threshold::I: an Integer for hatc\nminimal_bound::R: the value for b_textmin\nalternate_bound::F: how to determine hat_bk as a function of (bmin, bk, hat_c) -> hat_bk\ngradient_reduction::R: the gradient reduction factor threshold α 01)\ngradient_bound::R: the bound b_k.\nweight::R: ω_k initialised to ω_0 =norm(M, p, X) if this is not zero, 1.0 otherwise.\ncount::I: c_k, initialised to c_0 = 0.\n\nConstructor\n\nAdaptiveWNGrad(M::AbstractManifold; kwargs...)\n\nKeyword arguments\n\nadaptive=true: switches the gradient_reduction α (if true) to 0.\nalternate_bound = (bk, hat_c) -> min(gradient_bound == 0 ? 
1.0 : gradient_bound, max(minimal_bound, bk / (3 * hat_c)))\ncount_threshold=4\ngradient_reduction::R=adaptive ? 0.9 : 0.0\ngradient_bound=norm(M, p, X)\nminimal_bound=1e-4\np=rand(M): a point on the manifold mathcal M, only used to define the gradient_bound\nX=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal M, only used to define the gradient_bound\n\n\n\n\n\n","category":"type"},{"location":"plans/stepsize/#Manopt.ArmijoLinesearchStepsize","page":"Stepsize","title":"Manopt.ArmijoLinesearchStepsize","text":"ArmijoLinesearchStepsize <: Linesearch\n\nA functor (problem, state, k, X) -> s to provide an Armijo line search to compute a step size, based on the search direction X\n\nFields\n\ncandidate_point: to store an interim result\ninitial_stepsize: an initial step size\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\ncontraction_factor: exponent for line search reduction\nsufficient_decrease: gain within Armijo's rule\nlast_stepsize: the last step size to start the search with\ninitial_guess: a function to provide an initial guess for the step size, it maps (p,s,k,l) -> α based on an AbstractManoptProblem p, AbstractManoptSolverState s, the current iteration k and the last step size l. It returns the initial guess α.\nadditional_decrease_condition: specify a condition a new point has to additionally fulfill. The default accepts all points.\nadditional_increase_condition: specify a condition that has to be fulfilled in addition to checking for a valid increase. 
The default accepts all points.\nstop_when_stepsize_less: smallest stepsize when to stop (the last one before is taken)\nstop_when_stepsize_exceeds: largest stepsize when to stop.\nstop_increasing_at_step: last step to increase the stepsize (phase 1),\nstop_decreasing_at_step: last step size to decrease the stepsize (phase 2),\n\nPass :Messages to a debug= to see @info messages when these happen.\n\nConstructor\n\nArmijoLinesearchStepsize(M::AbstractManifold; kwargs...)\n\nwhere the fields are set via keyword arguments and the retraction is set to the default retraction on M.\n\nKeyword arguments\n\ncandidate_point=(allocate_result(M, rand))\ninitial_stepsize=1.0\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\ncontraction_factor=0.95\nsufficient_decrease=0.1\nlast_stepsize=initial_stepsize\ninitial_guess=armijo_initial_guess: a function (p,s,i,l) -> α\nstop_when_stepsize_less=0.0: stop when the stepsize decreased below this value.\nstop_when_stepsize_exceeds=max_stepsize(M): provide an absolute maximal step size.\nstop_increasing_at_step=100: for the initial increase test, stop after these many steps\nstop_decreasing_at_step=1000: in the backtrack, stop after these many steps\n\n\n\n\n\n","category":"type"},{"location":"plans/stepsize/#Manopt.ConstantStepsize","page":"Stepsize","title":"Manopt.ConstantStepsize","text":"ConstantStepsize <: Stepsize\n\nA functor (problem, state, ...) 
-> s to provide a constant step size s.\n\nFields\n\nlength: constant value for the step size\ntype: a symbol that indicates whether the stepsize is relatively (:relative), with respect to the gradient norm, or absolutely (:absolute) constant.\n\nConstructors\n\nConstantStepsize(s::Real, t::Symbol=:relative)\n\ninitialize the stepsize to a constant s of type t.\n\nConstantStepsize(\n M::AbstractManifold=DefaultManifold(),\n s=min(1.0, injectivity_radius(M)/2);\n type::Symbol=:relative\n)\n\n\n\n\n\n","category":"type"},{"location":"plans/stepsize/#Manopt.DecreasingStepsize","page":"Stepsize","title":"Manopt.DecreasingStepsize","text":"DecreasingStepsize()\n\nA functor (problem, state, ...) -> s to provide a decreasing step size s.\n\nFields\n\nexponent: a value e; the iteration number is raised to this power in the denominator\nfactor: a value f to multiply the initial step size with every iteration\nlength: the initial step size l.\nsubtrahend: a value a that is subtracted every iteration\nshift: shift the denominator iterator i by s.\ntype: a symbol that indicates whether the stepsize is relatively (:relative), with respect to the gradient norm, or absolutely (:absolute) constant.\n\nIn total the complete formula reads for the ith iterate as\n\ns_i = frac(l - i a)f^i(i+s)^e\n\nand hence the default simplifies to just s_i = fracli\n\nConstructor\n\nDecreasingStepsize(M::AbstractManifold;\n length=min(injectivity_radius(M)/2, 1.0),\n factor=1.0,\n subtrahend=0.0,\n exponent=1.0,\n shift=0.0,\n type=:relative,\n)\n\ninitializes all fields, where none of them is mandatory and the length is set to half the injectivity radius, and to 1 if the injectivity radius is infinite.\n\n\n\n\n\n","category":"type"},{"location":"plans/stepsize/#Manopt.Linesearch","page":"Stepsize","title":"Manopt.Linesearch","text":"Linesearch <: Stepsize\n\nAn abstract functor to represent line search type step size determinations, see Stepsize for details. 
One example is the ArmijoLinesearchStepsize functor.\n\nCompared to simple step sizes, the line search functors provide an interface of the form (p,o,i,X) -> s with an additional (but optional) fourth parameter to provide a search direction; this should default to something reasonable, most prominently the negative gradient.\n\n\n\n\n\n","category":"type"},{"location":"plans/stepsize/#Manopt.NonmonotoneLinesearchStepsize","page":"Stepsize","title":"Manopt.NonmonotoneLinesearchStepsize","text":"NonmonotoneLinesearchStepsize{P,T,R<:Real} <: Linesearch\n\nA functor representing a nonmonotone line search using the Barzilai-Borwein step size [IP17].\n\nFields\n\ninitial_stepsize=1.0: the step size to start the search with\nmemory_size=10: number of iterations after which the cost value needs to be lower than the current one\nbb_min_stepsize=1e-3: lower bound for the Barzilai-Borwein step size greater than zero\nbb_max_stepsize=1e3: upper bound for the Barzilai-Borwein step size greater than bb_min_stepsize\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstrategy=:direct: defines if the new step size is computed using the :direct, :indirect or :alternating strategy\nstorage: (for :Iterate and :Gradient) a StoreStateAction\nstepsize_reduction: step size reduction factor contained in the interval (0,1)\nsufficient_decrease: sufficient decrease parameter contained in the interval (0,1)\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\ncandidate_point: to store an interim result\nstop_when_stepsize_less: smallest stepsize when to stop (the last one before is taken)\nstop_when_stepsize_exceeds: largest stepsize when to stop.\nstop_increasing_at_step: last step to increase the stepsize (phase 1),\nstop_decreasing_at_step: last step to decrease the stepsize (phase 
2),\n\nConstructor\n\nNonmonotoneLinesearchStepsize(M::AbstractManifold; kwargs...)\n\nKeyword arguments\n\np=allocate_result(M, rand): to store an interim result\ninitial_stepsize=1.0\nmemory_size=10\nbb_min_stepsize=1e-3\nbb_max_stepsize=1e3\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstrategy=:direct\nstorage=StoreStateAction(M; store_fields=[:Iterate, :Gradient])\nstepsize_reduction=0.5\nsufficient_decrease=1e-4\nstop_when_stepsize_less=0.0\nstop_when_stepsize_exceeds=max_stepsize(M, p)\nstop_increasing_at_step=100\nstop_decreasing_at_step=1000\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\n\n\n\n\n","category":"type"},{"location":"plans/stepsize/#Manopt.PolyakStepsize","page":"Stepsize","title":"Manopt.PolyakStepsize","text":"PolyakStepsize <: Stepsize\n\nA functor (problem, state, ...) -> s to provide a step size due to Polyak, cf. Section 3.2 of [Ber15].\n\nFields\n\nγ: a function k -> ... representing a sequence.\nbest_cost_value: storing the best cost value\n\nConstructor\n\nPolyakStepsize(;\n γ = i -> 1/i,\n initial_cost_estimate=0.0\n)\n\nConstruct a stepsize of Polyak type.\n\nSee also\n\nPolyak\n\n\n\n\n\n","category":"type"},{"location":"plans/stepsize/#Manopt.WolfePowellBinaryLinesearchStepsize","page":"Stepsize","title":"Manopt.WolfePowellBinaryLinesearchStepsize","text":"WolfePowellBinaryLinesearchStepsize{R} <: Linesearch\n\nDo a backtracking line search to find a step size α that fulfils the Wolfe conditions along a search direction X starting from p. 
See WolfePowellBinaryLinesearch for the math details.\n\nFields\n\nsufficient_decrease::R, sufficient_curvature::R: two constants in the line search\nlast_stepsize::R\nmax_stepsize::R\nretraction_method::AbstractRetractionMethod: a retraction operatornameretr to use, see the section on retractions\nstop_when_stepsize_less::R: a safeguard to stop when the stepsize gets too small\nvector_transport_method::AbstractVectorTransportMethod: a vector transport mathcal T_ to use, see the section on vector transports\n\nKeyword arguments\n\nsufficient_decrease=10^(-4)\nsufficient_curvature=0.999\nmax_stepsize=max_stepsize(M, p): largest stepsize allowed here.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstop_when_stepsize_less=0.0: smallest stepsize when to stop (the last one before is taken)\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\n\n\n\n\n","category":"type"},{"location":"plans/stepsize/#Manopt.WolfePowellLinesearchStepsize","page":"Stepsize","title":"Manopt.WolfePowellLinesearchStepsize","text":"WolfePowellLinesearchStepsize{R<:Real} <: Linesearch\n\nDo a backtracking line search to find a step size α that fulfils the Wolfe conditions along a search direction X starting from p. 
See WolfePowellLinesearch for the math details.\n\nFields\n\nsufficient_decrease::R, sufficient_curvature::R: two constants in the line search\ncandidate_direction::T: a tangent vector at the point p on the manifold mathcal M\ncandidate_point::P: a point on the manifold mathcal M as temporary storage for candidates\ncandidate_tangent::T: a tangent vector at the point p on the manifold mathcal M\nlast_stepsize::R\nmax_stepsize::R\nretraction_method::AbstractRetractionMethod: a retraction operatornameretr to use, see the section on retractions\nstop_when_stepsize_less::R: a safeguard to stop when the stepsize gets too small\nvector_transport_method::AbstractVectorTransportMethod: a vector transport mathcal T_ to use, see the section on vector transports\n\nKeyword arguments\n\nsufficient_decrease=10^(-4)\nsufficient_curvature=0.999\np::P: a point on the manifold mathcal M as temporary storage for candidates\nX::T: a tangent vector at the point p on the manifold mathcal M as type of memory allocated for the candidate direction and tangent\nmax_stepsize=max_stepsize(M, p): largest stepsize allowed here.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstop_when_stepsize_less=0.0: smallest stepsize when to stop (the last one before is taken)\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\n\n\n\n\n","category":"type"},{"location":"plans/stepsize/","page":"Stepsize","title":"Stepsize","text":"Some solvers have a different iterate from the one used for the line search. 
Then the following state can be used to wrap these locally.","category":"page"},{"location":"plans/stepsize/","page":"Stepsize","title":"Stepsize","text":"StepsizeState","category":"page"},{"location":"plans/stepsize/#Manopt.StepsizeState","page":"Stepsize","title":"Manopt.StepsizeState","text":"StepsizeState{P,T} <: AbstractManoptSolverState\n\nA state to store a point and a descent direction used within a linesearch, if these are different from the iterate and search direction of the main solver.\n\nFields\n\np::P: a point on a manifold\nX::T: a tangent vector at p.\n\nConstructor\n\nStepsizeState(p,X)\nStepsizeState(M::AbstractManifold; p=rand(M), X=zero_vector(M,p))\n\nSee also\n\ninterior_point_Newton\n\n\n\n\n\n","category":"type"},{"location":"plans/stepsize/#Literature","page":"Stepsize","title":"Literature","text":"","category":"section"},{"location":"plans/stepsize/","page":"Stepsize","title":"Stepsize","text":"D. P. Bertsekas. Convex Optimization Algorithms (Athena Scientific, 2015); p. 576.\n\n\n\nN. Boumal. An Introduction to Optimization on Smooth Manifolds. First Edition (Cambridge University Press, 2023).\n\n\n\nG. N. Grapiglia and G. F. Stella. An Adaptive Riemannian Gradient Method Without Function Evaluations. Journal of Optimization Theory and Applications 197, 1140–1160 (2023).\n\n\n\nW. Huang. Optimization algorithms on Riemannian manifolds with applications. Ph.D. Thesis, Florida State University (2014).\n\n\n\nB. Iannazzo and M. Porcelli. The Riemannian Barzilai–Borwein method with nonmonotone line search and the matrix geometric mean computation. IMA Journal of Numerical Analysis 38, 495–517 (2017).\n\n\n\nJ. Nocedal and S. J. Wright. Numerical Optimization. 
2nd Edition (Springer, New York, 2006).\n\n\n\n","category":"page"},{"location":"#Welcome-to-Manopt.jl","page":"Home","title":"Welcome to Manopt.jl","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"CurrentModule = Manopt","category":"page"},{"location":"","page":"Home","title":"Home","text":"Manopt.Manopt","category":"page"},{"location":"#Manopt.Manopt","page":"Home","title":"Manopt.Manopt","text":"🏔️ Manopt.jl: optimization on Manifolds in Julia.\n\n📚 Documentation: manoptjl.org\n📦 Repository: github.com/JuliaManifolds/Manopt.jl\n💬 Discussions: github.com/JuliaManifolds/Manopt.jl/discussions\n🎯 Issues: github.com/JuliaManifolds/Manopt.jl/issues\n\n\n\n\n\n","category":"module"},{"location":"","page":"Home","title":"Home","text":"For a function f: mathcal M → ℝ defined on a Riemannian manifold mathcal M, algorithms in this package aim to solve","category":"page"},{"location":"","page":"Home","title":"Home","text":"argmin_{p ∈ mathcal M} f(p)","category":"page"},{"location":"","page":"Home","title":"Home","text":"or in other words: find the point p on the manifold where f reaches its minimal function value.","category":"page"},{"location":"","page":"Home","title":"Home","text":"Manopt.jl provides a framework for optimization on manifolds as well as a library of optimization algorithms in Julia. It belongs to the “Manopt family”, which includes Manopt (Matlab) and pymanopt (Python).","category":"page"},{"location":"","page":"Home","title":"Home","text":"If you want to delve right into Manopt.jl, read the 🏔️ Get started: optimize. tutorial.","category":"page"},{"location":"","page":"Home","title":"Home","text":"Manopt.jl makes it easy to use an algorithm for your favourite manifold as well as a manifold for your favourite algorithm. 
It already provides many manifolds and algorithms, which can easily be enhanced, for example to record certain data or debug output throughout iterations.","category":"page"},{"location":"","page":"Home","title":"Home","text":"If you use Manopt.jl in your work, please cite the following","category":"page"},{"location":"","page":"Home","title":"Home","text":"@article{Bergmann2022,\n Author = {Ronny Bergmann},\n Doi = {10.21105/joss.03866},\n Journal = {Journal of Open Source Software},\n Number = {70},\n Pages = {3866},\n Publisher = {The Open Journal},\n Title = {Manopt.jl: Optimization on Manifolds in {J}ulia},\n Volume = {7},\n Year = {2022},\n}","category":"page"},{"location":"","page":"Home","title":"Home","text":"To refer to a certain version or the source code in general, cite for example","category":"page"},{"location":"","page":"Home","title":"Home","text":"@software{manoptjl-zenodo-mostrecent,\n Author = {Ronny Bergmann},\n Copyright = {MIT License},\n Doi = {10.5281/zenodo.4290905},\n Publisher = {Zenodo},\n Title = {Manopt.jl},\n Year = {2024},\n}","category":"page"},{"location":"","page":"Home","title":"Home","text":"for the most recent version or a corresponding version-specific DOI, see the list of all versions.","category":"page"},{"location":"","page":"Home","title":"Home","text":"If you are also using Manifolds.jl please consider citing","category":"page"},{"location":"","page":"Home","title":"Home","text":"@article{AxenBaranBergmannRzecki:2023,\n AUTHOR = {Axen, Seth D. 
and Baran, Mateusz and Bergmann, Ronny and Rzecki, Krzysztof},\n ARTICLENO = {33},\n DOI = {10.1145/3618296},\n JOURNAL = {ACM Transactions on Mathematical Software},\n MONTH = {dec},\n NUMBER = {4},\n TITLE = {Manifolds.Jl: An Extensible Julia Framework for Data Analysis on Manifolds},\n VOLUME = {49},\n YEAR = {2023}\n}","category":"page"},{"location":"","page":"Home","title":"Home","text":"Note that both citations are in BibLaTeX format.","category":"page"},{"location":"#Main-features","page":"Home","title":"Main features","text":"","category":"section"},{"location":"#Optimization-algorithms-(solvers)","page":"Home","title":"Optimization algorithms (solvers)","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"For every optimization algorithm, a solver is implemented based on an AbstractManoptProblem that describes the problem to solve and its AbstractManoptSolverState that sets up the solver and stores values that are required between or for the next iteration. Together they form a plan.","category":"page"},{"location":"#Manifolds","page":"Home","title":"Manifolds","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"This project is built upon ManifoldsBase.jl, a generic interface to implement manifolds. Certain functions are extended for specific manifolds from Manifolds.jl, but all other manifolds from that package can be used here, too.","category":"page"},{"location":"","page":"Home","title":"Home","text":"The notation in the documentation aims to follow the notation of these packages.","category":"page"},{"location":"#Visualization","page":"Home","title":"Visualization","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"To visualize and interpret results, Manopt.jl aims to provide both easy plot functions as well as exports. 
Furthermore, there is a system to get debug output during the iterations of an algorithm as well as record capabilities, for example to record a specified tuple of values per iteration, most prominently RecordCost and RecordIterate. Take a look at the 🏔️ Get started: optimize. tutorial on how to easily activate this.","category":"page"},{"location":"#Literature","page":"Home","title":"Literature","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"If you want to get started with manifolds, one book is [Car92], and if you want to directly dive into optimization on manifolds, good references are [AMS08] and [Bou23], which are both available online for free.","category":"page"},{"location":"","page":"Home","title":"Home","text":"P.-A. Absil, R. Mahony and R. Sepulchre. Optimization Algorithms on Matrix Manifolds (Princeton University Press, 2008), available online at press.princeton.edu/chapters/absil/.\n\n\n\nN. Boumal. An Introduction to Optimization on Smooth Manifolds. First Edition (Cambridge University Press, 2023).\n\n\n\nM. P. do Carmo. Riemannian Geometry. Mathematics: Theory & Applications (Birkhäuser Boston, Inc., Boston, MA, 1992); p. xiv+300.\n\n\n\n","category":"page"},{"location":"references/#Literature","page":"References","title":"Literature","text":"","category":"section"},{"location":"references/","page":"References","title":"References","text":"This is all literature mentioned / referenced in the Manopt.jl documentation. Usually you find a small reference section at the end of every documentation page that contains the corresponding references as well.","category":"page"},{"location":"references/","page":"References","title":"References","text":"P.-A. Absil, C. Baker and K. Gallivan. Trust-Region Methods on Riemannian Manifolds. Foundations of Computational Mathematics 7, 303–330 (2006).\n\n\n\nP.-A. Absil, R. Mahony and R. Sepulchre. 
Optimization Algorithms on Matrix Manifolds (Princeton University Press, 2008), available online at press.princeton.edu/chapters/absil/.\n\n\n\nS. Adachi, T. Okuno and A. Takeda. Riemannian Levenberg-Marquardt Method with Global and Local Convergence Properties. ArXiv Preprint (2022).\n\n\n\nN. Agarwal, N. Boumal, B. Bullins and C. Cartis. Adaptive regularization with cubics on manifolds. Mathematical Programming (2020).\n\n\n\nY. T. Almeida, J. X. Cruz Neto, P. R. Oliveira and J. C. Oliveira Souza. A modified proximal point method for DC functions on Hadamard manifolds. Computational Optimization and Applications 76, 649–673 (2020).\n\n\n\nM. Bačák. Computing medians and means in Hadamard spaces. SIAM Journal on Optimization 24, 1542–1566 (2014), arXiv:1210.2145.\n\n\n\nE. M. Beale. A derivation of conjugate gradients. In: Numerical methods for nonlinear optimization, edited by F. A. Lootsma (Academic Press, London, 1972); pp. 39–43.\n\n\n\nR. Bergmann, O. P. Ferreira, E. M. Santos and J. C. Souza. The difference of convex algorithm on Hadamard manifolds, arXiv preprint (2023).\n\n\n\nR. Bergmann and P.-Y. Gousenbourger. A variational model for data fitting on manifolds by minimizing the acceleration of a Bézier curve. Frontiers in Applied Mathematics and Statistics 4 (2018), arXiv:1807.10090.\n\n\n\nR. Bergmann and R. Herzog. Intrinsic formulation of KKT conditions and constraint qualifications on smooth manifolds. SIAM Journal on Optimization 29, 2423–2444 (2019), arXiv:1804.06214.\n\n\n\nR. Bergmann, R. Herzog and H. Jasa. The Riemannian Convex Bundle Method, preprint (2024), arXiv:2402.13670.\n\n\n\nR. Bergmann, R. Herzog, M. Silva Louzeiro, D. Tenbrinck and J. Vidal-Núñez. Fenchel duality theory and a primal-dual algorithm on Riemannian manifolds. Foundations of Computational Mathematics 21, 1465–1504 (2021), arXiv:1908.02022.\n\n\n\nR. Bergmann, J. Persch and G. Steidl. 
A parallel Douglas Rachford algorithm for minimizing ROF-like functionals on images with values in symmetric Hadamard manifolds. SIAM Journal on Imaging Sciences 9, 901–937 (2016), arXiv:1512.02814.\n\n\n\nD. P. Bertsekas. Convex Optimization Algorithms (Athena Scientific, 2015); p. 576.\n\n\n\nP. B. Borckmans, M. Ishteva and P.-A. Absil. A Modified Particle Swarm Optimization Algorithm for the Best Low Multilinear Rank Approximation of Higher-Order Tensors. In: 7th International Conference on Swarm Intelligence (Springer Berlin Heidelberg, 2010); pp. 13–23.\n\n\n\nN. Boumal. An Introduction to Optimization on Smooth Manifolds. First Edition (Cambridge University Press, 2023).\n\n\n\nM. P. do Carmo. Riemannian Geometry. Mathematics: Theory & Applications (Birkhäuser Boston, Inc., Boston, MA, 1992); p. xiv+300.\n\n\n\nA. Chambolle and T. Pock. A first-order primal-dual algorithm for convex problems with applications to imaging. Journal of Mathematical Imaging and Vision 40, 120–145 (2011).\n\n\n\nS. Colutto, F. Fruhauf, M. Fuchs and O. Scherzer. The CMA-ES on Riemannian Manifolds to Reconstruct Shapes in 3-D Voxel Images. IEEE Transactions on Evolutionary Computation 14, 227–245 (2010).\n\n\n\nA. R. Conn, N. I. Gould and P. L. Toint. Trust Region Methods (Society for Industrial and Applied Mathematics, 2000).\n\n\n\nY. H. Dai and Y. Yuan. A Nonlinear Conjugate Gradient Method with a Strong Global Convergence Property. SIAM Journal on Optimization 10, 177–182 (1999).\n\n\n\nW. Diepeveen and J. Lellmann. An Inexact Semismooth Newton Method on Riemannian Manifolds with Application to Duality-Based Total Variation Denoising. SIAM Journal on Imaging Sciences 14, 1565–1600 (2021), arXiv:2102.10309.\n\n\n\nA. S. El-Bakry, R. A. Tapia, T. Tsuchiya and Y. Zhang. On the formulation and theory of the Newton interior-point method for nonlinear programming. Journal of Optimization Theory and Applications 89, 507–541 (1996).\n\n\n\nO. Ferreira and P. R. Oliveira. 
Subgradient algorithm on Riemannian manifolds. Journal of Optimization Theory and Applications 97, 93–104 (1998).\n\n\n\nO. Ferreira and P. R. Oliveira. Proximal point algorithm on Riemannian manifolds. Optimization. A Journal of Mathematical Programming and Operations Research 51, 257–270 (2002).\n\n\n\nP. T. Fletcher. Geodesic regression and the theory of least squares on Riemannian manifolds. International Journal of Computer Vision 105, 171–185 (2013).\n\n\n\nR. Fletcher. Practical Methods of Optimization. 2nd Edition, A Wiley-Interscience Publication (John Wiley & Sons Ltd., 1987).\n\n\n\nR. Fletcher and C. M. Reeves. Function minimization by conjugate gradients. The Computer Journal 7, 149–154 (1964).\n\n\n\nG. N. Grapiglia and G. F. Stella. An Adaptive Riemannian Gradient Method Without Function Evaluations. Journal of Optimization Theory and Applications 197, 1140–1160 (2023).\n\n\n\nW. W. Hager and H. Zhang. A survey of nonlinear conjugate gradient methods. Pacific Journal of Optimization 2, 35–58 (2006).\n\n\n\nW. W. Hager and H. Zhang. A New Conjugate Gradient Method with Guaranteed Descent and an Efficient Line Search. SIAM Journal on Optimization 16, 170–192 (2005).\n\n\n\nN. Hansen. The CMA Evolution Strategy: A Tutorial. ArXiv Preprint (2023).\n\n\n\nM. Hestenes and E. Stiefel. Methods of conjugate gradients for solving linear systems. Journal of Research of the National Bureau of Standards 49, 409 (1952).\n\n\n\nN. Hoseini Monjezi, S. Nobakhtian and M. R. Pouryayevali. A proximal bundle algorithm for nonsmooth optimization on Riemannian manifolds. IMA Journal of Numerical Analysis 43, 293–325 (2023).\n\n\n\nW. Huang. Optimization algorithms on Riemannian manifolds with applications. Ph.D. Thesis, Florida State University (2014).\n\n\n\nW. Huang, P.-A. Absil and K. A. Gallivan. A Riemannian BFGS method without differentiated retraction for nonconvex optimization problems. SIAM Journal on Optimization 28, 470–495 (2018).\n\n\n\nW. Huang, K. A. 
Gallivan and P.-A. Absil. A Broyden class of quasi-Newton methods for Riemannian optimization. SIAM Journal on Optimization 25, 1660–1685 (2015).\n\n\n\nB. Iannazzo and M. Porcelli. The Riemannian Barzilai–Borwein method with nonmonotone line search and the matrix geometric mean computation. IMA Journal of Numerical Analysis 38, 495–517 (2017).\n\n\n\nH. Karcher. Riemannian center of mass and mollifier smoothing. Communications on Pure and Applied Mathematics 30, 509–541 (1977).\n\n\n\nZ. Lai and A. Yoshise. Riemannian Interior Point Methods for Constrained Optimization on Manifolds. Journal of Optimization Theory and Applications 201, 433–469 (2024), arXiv:2203.09762.\n\n\n\nC. Liu and N. Boumal. Simple algorithms for optimization on Riemannian manifolds with constraints. Applied Mathematics & Optimization (2019), arXiv:1901.10000.\n\n\n\nY. Liu and C. Storey. Efficient generalized conjugate gradient algorithms, part 1: Theory. Journal of Optimization Theory and Applications 69, 129–137 (1991).\n\n\n\nD. Nguyen. Operator-Valued Formulas for Riemannian Gradient and Hessian and Families of Tractable Metrics in Riemannian Optimization. Journal of Optimization Theory and Applications 198, 135–164 (2023), arXiv:2009.10159.\n\n\n\nJ. Nocedal and S. J. Wright. Numerical Optimization. 2nd Edition (Springer, New York, 2006).\n\n\n\nR. Peeters. On a Riemannian version of the Levenberg-Marquardt algorithm. Serie Research Memoranda 0011 (VU University Amsterdam, Faculty of Economics, Business Administration and Econometrics, 1993).\n\n\n\nE. Polak and G. Ribière. Note sur la convergence de méthodes de directions conjuguées. Revue française d’informatique et de recherche opérationnelle 3, 35–43 (1969).\n\n\n\nM. J. Powell. Restart procedures for the conjugate gradient method. Mathematical Programming 12, 241–254 (1977).\n\n\n\nJ. C. Souza and P. R. Oliveira. A proximal point algorithm for DC functions on Hadamard manifolds. 
Journal of Global Optimization 63, 797–810 (2015).\n\n\n\nM. Weber and S. Sra. Riemannian Optimization via Frank-Wolfe Methods. Mathematical Programming 199, 525–556 (2022).\n\n\n\nH. Zhang and S. Sra. Towards Riemannian accelerated gradient methods, arXiv Preprint, 1806.02812 (2018).\n\n\n\n","category":"page"},{"location":"tutorials/StochasticGradientDescent/#How-to-run-stochastic-gradient-descent","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"","category":"section"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"Ronny Bergmann","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"This tutorial illustrates how to use the stochastic_gradient_descent solver and different DirectionUpdateRules to introduce the average or momentum variant, see Stochastic Gradient Descent.","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"Computationally, we look at a very simple but large scale problem, the Riemannian Center of Mass or Fréchet mean: for given points p_i ∈ mathcal M, i=1,…,N, this optimization problem reads","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"argmin_{x ∈ mathcal M} (1/2) sum_{i=1}^N d_{mathcal M}(x, p_i)^2","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"which of course can be (and is) solved by a gradient descent, see the introductory tutorial or Statistics in Manifolds.jl. 
If N is very large, evaluating the complete gradient might be quite expensive. A remedy is to evaluate only one of the terms at a time and choose a random order for these.","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"We first initialize the packages","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"using Manifolds, Manopt, Random, BenchmarkTools, ManifoldDiff\nusing ManifoldDiff: grad_distance\nRandom.seed!(42);","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"We next generate a (little) large(r) data set","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"n = 5000\nσ = π / 12\nM = Sphere(2)\np = 1 / sqrt(2) * [1.0, 0.0, 1.0]\ndata = [exp(M, p, σ * rand(M; vector_at=p)) for i in 1:n];","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"Note that due to the construction of the points as zero mean tangent vectors, the mean should be very close to our initial point p.","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"In order to use the stochastic gradient, we now need a function that returns the vector of gradients. 
There are two ways to define it in Manopt.jl: either as a single function that returns a vector, or as a vector of functions.","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"The first variant is of course easier to define, but the second is more efficient when only evaluating one of the gradients.","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"For the mean, the gradient is","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"operatornamegradf(x) = sum_i=1^N operatornamegradf_i(x), where operatornamegradf_i(x) = -log_x p_i","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"which we define in Manopt.jl in two different ways: either as one function returning all gradients as a vector (see gradF), or, maybe more fitting for a large scale problem, as a vector of small gradient functions (see gradf).","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"F(M, p) = 1 / (2 * n) * sum(map(q -> distance(M, p, q)^2, data))\ngradF(M, p) = [grad_distance(M, p, q) for q in data]\ngradf = [(M, p) -> grad_distance(M, q, p) for q in data];\np0 = 1 / sqrt(3) * [1.0, 1.0, 1.0]","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"3-element Vector{Float64}:\n 0.5773502691896258\n 0.5773502691896258\n 
0.5773502691896258","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"The calls are only slightly different, but notice that accessing the second gradient element requires evaluating all logs in the first function, while we only call one of the functions in the second array of functions. So while you can use both gradF and gradf in the following call, the second one is (much) faster:","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"p_opt1 = stochastic_gradient_descent(M, gradF, p)","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"3-element Vector{Float64}:\n -0.4124602512237471\n 0.7450900936719854\n 0.38494647999455556","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"@benchmark stochastic_gradient_descent($M, $gradF, $p0)","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"BenchmarkTools.Trial: 1 sample with 1 evaluation.\n Single result which took 6.465 s (7.85% GC) to evaluate,\n with a memory estimate of 7.83 GiB, over 200213003 allocations.","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"p_opt2 = stochastic_gradient_descent(M, gradf, p0)","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient 
descent","text":"3-element Vector{Float64}:\n 0.6828818855405705\n 0.17545293717581142\n 0.7091463863243863","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"@benchmark stochastic_gradient_descent($M, $gradf, $p0)","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"BenchmarkTools.Trial: 2571 samples with 1 evaluation.\n Range (min … max): 615.879 μs … 14.639 ms ┊ GC (min … max): 0.00% … 69.36%\n Time (median): 1.605 ms ┊ GC (median): 0.00%\n Time (mean ± σ): 1.943 ms ± 1.134 ms ┊ GC (mean ± σ): 6.08% ± 11.80%\n\n ▁ █ \n ███▇██▆█▆▇▆▅▆▄▅▅▃▄▄▄▃▃▃▃▃▂▄▂▂▃▃▂▃█▅▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁ ▂\n 616 μs Histogram: frequency by time 5.44 ms <\n\n Memory estimate: 861.16 KiB, allocs estimate: 20050.","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"This result is reasonably close. But we can improve it by using a DirectionUpdateRule, namely:","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"On the one hand MomentumGradient, which requires both the manifold and the initial value, to keep track of the iterate and parallel transport the last direction to the current iterate. The necessary vector_transport_method keyword is set to a suitable default on every manifold, see default_vector_transport_method. 
We get","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"p_opt3 = stochastic_gradient_descent(\n M, gradf, p0; direction=MomentumGradient(; direction=StochasticGradient())\n)","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"3-element Vector{Float64}:\n 0.375215361477979\n -0.026495079681491125\n 0.9265589259532395","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"MG = MomentumGradient(; direction=StochasticGradient());\n@benchmark stochastic_gradient_descent($M, $gradf, p=$p0; direction=$MG)","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"BenchmarkTools.Trial: 833 samples with 1 evaluation.\n Range (min … max): 5.293 ms … 17.501 ms ┊ GC (min … max): 0.00% … 49.91%\n Time (median): 5.421 ms ┊ GC (median): 0.00%\n Time (mean ± σ): 6.001 ms ± 1.234 ms ┊ GC (mean ± σ): 8.16% ± 12.26%\n\n ▆█▆▂ ▂▃▂▁ \n ████▆▁▄▄▁▆▁▄▄▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁██████▆█▁▄▅▁▅▁▁▄▄▅▇▅▁▄▁▄▁▄▁▅ ▇\n 5.29 ms Histogram: log(frequency) by time 9.56 ms <\n\n Memory estimate: 7.71 MiB, allocs estimate: 200052.","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"And on the other hand the AverageGradient computes an average of the last n gradients. 
This is done by","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"p_opt4 = stochastic_gradient_descent(\n M, gradf, p0; direction=AverageGradient(; n=10, direction=StochasticGradient()), debug=[],\n)","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"3-element Vector{Float64}:\n -0.5636278115277376\n 0.646536380066075\n -0.5141151615382582","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"AG = AverageGradient(; n=10, direction=StochasticGradient(M));\n@benchmark stochastic_gradient_descent($M, $gradf, p=$p0; direction=$AG, debug=[])","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"BenchmarkTools.Trial: 238 samples with 1 evaluation.\n Range (min … max): 18.884 ms … 40.784 ms ┊ GC (min … max): 0.00% … 27.49%\n Time (median): 19.774 ms ┊ GC (median): 0.00%\n Time (mean ± σ): 21.016 ms ± 2.719 ms ┊ GC (mean ± σ): 7.33% ± 7.23%\n\n █▇ ▄▇▃ ▂ \n ███▆▄▁▁▁▁▁▁█████▁▁▁▁▁▄▁▄▄▄▁▄▁▁▁▁▄▁▁▁▁▁▄▄▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▄ ▆\n 18.9 ms Histogram: log(frequency) by time 34.3 ms <\n\n Memory estimate: 21.90 MiB, allocs estimate: 600077.","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"Note that the default StoppingCriterion is a fixed number of iterations which helps the comparison here.","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"For both 
update rules we have to internally specify that we are still in the stochastic setting, since both rules can also be used with the IdentityUpdateRule within gradient_descent.","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"For this not-that-large-scale example we can of course also use a gradient descent with ArmijoLinesearch,","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"fullGradF(M, p) = 1/n*sum(grad_distance(M, q, p) for q in data)\np_opt5 = gradient_descent(M, F, fullGradF, p0; stepsize=ArmijoLinesearch())","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"3-element Vector{Float64}:\n 0.7050420977039097\n -0.006374163035874202\n 0.7091368066253959","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"but in general it is expected to be a bit slow.","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"AL = ArmijoLinesearch();\n@benchmark gradient_descent($M, $F, $fullGradF, $p0; stepsize=$AL)","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"BenchmarkTools.Trial: 25 samples with 1 evaluation.\n Range (min … max): 202.667 ms … 223.306 ms ┊ GC (min … max): 6.49% … 4.71%\n Time (median): 205.968 ms ┊ GC (median): 7.59%\n Time (mean ± σ): 207.513 ms ± 4.955 ms ┊ GC (mean ± σ): 7.56% ± 0.91%\n\n █▁▁▁▁▁ ████ █▁ ▁ ▁ ▁▁ ▁ ▁ ▁ \n 
██████▁████▁██▁▁█▁▁▁▁▁▁█▁██▁█▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁█▁▁▁▁▁▁▁▁▁▁▁█ ▁\n 203 ms Histogram: frequency by time 223 ms <\n\n Memory estimate: 230.56 MiB, allocs estimate: 6338502.","category":"page"},{"location":"tutorials/StochasticGradientDescent/#Technical-details","page":"How to run stochastic gradient descent","title":"Technical details","text":"","category":"section"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"This tutorial is cached. It was last run on the following package versions.","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"using Pkg\nPkg.status()","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"Status `~/work/Manopt.jl/Manopt.jl/tutorials/Project.toml`\n [6e4b80f9] BenchmarkTools v1.5.0\n⌅ [5ae59095] Colors v0.12.11\n [31c24e10] Distributions v0.25.113\n [26cc04aa] FiniteDifferences v0.12.32\n [7073ff75] IJulia v1.26.0\n [8ac3fa9e] LRUCache v1.6.1\n [af67fdf4] ManifoldDiff v0.3.13\n [1cead3c2] Manifolds v0.10.7\n [3362f125] ManifoldsBase v0.15.22\n [0fc0a36d] Manopt v0.5.3 `..`\n [91a5bcdd] Plots v1.40.9\n [731186ca] RecursiveArrayTools v3.27.4\nInfo Packages marked with ⌅ have new versions available but compatibility constraints restrict them from upgrading. 
To see why use `status --outdated`","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"using Dates\nnow()","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"2024-11-21T20:40:54.968","category":"page"},{"location":"contributing/","page":"Contributing to Manopt.jl","title":"Contributing to Manopt.jl","text":"EditURL = \"https://github.com/JuliaManifolds/Manopt.jl/blob/master/CONTRIBUTING.md\"","category":"page"},{"location":"contributing/#Contributing-to-Manopt.jl","page":"Contributing to Manopt.jl","title":"Contributing to Manopt.jl","text":"","category":"section"},{"location":"contributing/","page":"Contributing to Manopt.jl","title":"Contributing to Manopt.jl","text":"First, thanks for taking the time to contribute. Any contribution is appreciated and welcome.","category":"page"},{"location":"contributing/","page":"Contributing to Manopt.jl","title":"Contributing to Manopt.jl","text":"The following is a set of guidelines to Manopt.jl.","category":"page"},{"location":"contributing/#Table-of-contents","page":"Contributing to Manopt.jl","title":"Table of contents","text":"","category":"section"},{"location":"contributing/","page":"Contributing to Manopt.jl","title":"Contributing to Manopt.jl","text":"Contributing to Manopt.jl - Table of Contents\nI just have a question\nHow can I file an issue?\nHow can I contribute?\nAdd a missing method\nProvide a new algorithm\nProvide a new example\nCode style","category":"page"},{"location":"contributing/#I-just-have-a-question","page":"Contributing to Manopt.jl","title":"I just have a question","text":"","category":"section"},{"location":"contributing/","page":"Contributing to Manopt.jl","title":"Contributing to Manopt.jl","text":"The developer can most easily be reached in the Julia 
Slack channel #manifolds. You can apply for the Julia Slack workspace here if you haven't joined yet. You can also ask your question on discourse.julialang.org.","category":"page"},{"location":"contributing/#How-can-I-file-an-issue?","page":"Contributing to Manopt.jl","title":"How can I file an issue?","text":"","category":"section"},{"location":"contributing/","page":"Contributing to Manopt.jl","title":"Contributing to Manopt.jl","text":"If you found a bug or want to propose a feature, please open an issue within the GitHub repository.","category":"page"},{"location":"contributing/#How-can-I-contribute?","page":"Contributing to Manopt.jl","title":"How can I contribute?","text":"","category":"section"},{"location":"contributing/#Add-a-missing-method","page":"Contributing to Manopt.jl","title":"Add a missing method","text":"","category":"section"},{"location":"contributing/","page":"Contributing to Manopt.jl","title":"Contributing to Manopt.jl","text":"There are still a lot of methods missing within the optimization framework of Manopt.jl, be it functions, gradients, differentials, proximal maps, step size rules or stopping criteria. If you notice a method missing and can contribute an implementation, please do so, and the maintainers will try to help with the necessary details. Even providing a single new method is a good contribution.","category":"page"},{"location":"contributing/#Provide-a-new-algorithm","page":"Contributing to Manopt.jl","title":"Provide a new algorithm","text":"","category":"section"},{"location":"contributing/","page":"Contributing to Manopt.jl","title":"Contributing to Manopt.jl","text":"A main contribution you can provide is another algorithm that is not yet included in the package. An algorithm is always based on a concrete type of an AbstractManoptProblem storing the main information of the task and a concrete type of an AbstractManoptSolverState storing all information that needs to be known to the solver in general. 
The actual algorithm is split into an initialization phase, see initialize_solver!, and the implementation of the i-th step of the solver itself, see step_solver!. For these two functions, it would be great if a new algorithm uses functions from the ManifoldsBase.jl interface as generically as possible. For example, if possible use retract!(M,q,p,X) in favor of exp!(M,q,p,X) to perform a step starting in p in direction X (in place of q), since the exponential map might be too expensive to evaluate or might not be available on a certain manifold. See Retractions and inverse retractions for more details. Further, if possible, prefer retract!(M,q,p,X) in favor of retract(M,p,X), since a computation in place of a suitable variable q reduces memory allocations.","category":"page"},{"location":"contributing/","page":"Contributing to Manopt.jl","title":"Contributing to Manopt.jl","text":"Usually, the methods implemented in Manopt.jl also have a high-level interface that is easier to call, creates the necessary problem and options structure, and calls the solver.","category":"page"},{"location":"contributing/","page":"Contributing to Manopt.jl","title":"Contributing to Manopt.jl","text":"The two technical functions initialize_solver! and step_solver! 
should be documented with technical details, while the high level interface should usually provide a general description and some literature references to the algorithm at hand.","category":"page"},{"location":"contributing/#Provide-a-new-example","page":"Contributing to Manopt.jl","title":"Provide a new example","text":"","category":"section"},{"location":"contributing/","page":"Contributing to Manopt.jl","title":"Contributing to Manopt.jl","text":"Example problems are available at ManoptExamples.jl, where also their reproducible Quarto-Markdown files are stored.","category":"page"},{"location":"contributing/#Code-style","page":"Contributing to Manopt.jl","title":"Code style","text":"","category":"section"},{"location":"contributing/","page":"Contributing to Manopt.jl","title":"Contributing to Manopt.jl","text":"Try to follow the documentation guidelines from the Julia documentation as well as Blue Style. Run JuliaFormatter.jl on the repository in the way set in the .JuliaFormatter.toml file, which enforces a number of conventions consistent with the Blue Style. 
Furthermore vale is run on both Markdown and code files, affecting documentation and source code comments","category":"page"},{"location":"contributing/","page":"Contributing to Manopt.jl","title":"Contributing to Manopt.jl","text":"Please follow a few internal conventions:","category":"page"},{"location":"contributing/","page":"Contributing to Manopt.jl","title":"Contributing to Manopt.jl","text":"It is preferred that the AbstractManoptProblem's struct contains information about the general structure of the problem.\nAny implemented function should be accompanied by its mathematical formulae if a closed form exists.\nAbstractManoptProblem and helping functions are stored within the plan/ folder and sorted by properties of the problem and/or solver at hand.\nthe solver state is usually stored with the solver itself\nWithin the source code of one algorithm, following the state, the high level interface should be next, then the initialization, then the step.\nOtherwise an alphabetical order of functions is preferable.\nThe preceding implies that the mutating variant of a function follows the non-mutating variant.\nThere should be no dangling = signs.\nAlways add a newline between things of different types (struct/method/const).\nAlways add a newline between methods for different functions (including mutating/nonmutating variants).\nPrefer to have no newline between methods for the same function; when reasonable, merge the documentation strings.\nAll import/using/include should be in the main module file.","category":"page"},{"location":"contributing/","page":"Contributing to Manopt.jl","title":"Contributing to Manopt.jl","text":"Concerning documentation","category":"page"},{"location":"contributing/","page":"Contributing to Manopt.jl","title":"Contributing to Manopt.jl","text":"if possible provide both mathematical formulae and literature references using DocumenterCitations.jl and BibTeX where possible\nAlways document all input variables and keyword 
arguments","category":"page"},{"location":"contributing/","page":"Contributing to Manopt.jl","title":"Contributing to Manopt.jl","text":"If you implement an algorithm with a certain numerical example in mind, it would be great if this could be added to the ManoptExamples.jl package as well.","category":"page"},{"location":"helpers/checks/#Verifying-gradients-and-Hessians","page":"Checks","title":"Verifying gradients and Hessians","text":"","category":"section"},{"location":"helpers/checks/","page":"Checks","title":"Checks","text":"If you have computed a gradient or differential and you are not sure whether it is correct, the following functions can help to verify it numerically.","category":"page"},{"location":"helpers/checks/","page":"Checks","title":"Checks","text":"Modules = [Manopt]\nPages = [\"checks.jl\"]","category":"page"},{"location":"helpers/checks/#Manopt.check_Hessian","page":"Checks","title":"Manopt.check_Hessian","text":"check_Hessian(M, f, grad_f, Hess_f, p=rand(M), X=rand(M; vector_at=p), Y=rand(M; vector_at=p); kwargs...)\n\nVerify numerically whether the Hessian Hess_f(M, p, X) of f(M, p) is correct.\n\nFor this either a second-order retraction or a critical point p of f is required. The approximation is then\n\nf(operatornameretr_p(tX)) = f(p) + toperatornamegrad f(p) X + fract^22operatornameHessf(p)X X + mathcal O(t^3)\n\nor in other words, that the error between the function f and its second-order Taylor expansion behaves like mathcal O(t^3), which indicates that the Hessian is correct, cf. 
also [Bou23, Section 6.8].\n\nNote that if the errors are below the given tolerance and the method is exact, no plot is generated.\n\nKeyword arguments\n\ncheck_grad=true: verify that operatornamegradf(p) T_pmathcal M.\ncheck_linearity=true: verify that the Hessian is linear, see is_Hessian_linear using a, b, X, and Y\ncheck_symmetry=true: verify that the Hessian is symmetric, see is_Hessian_symmetric\ncheck_vector=false: verify that operatornameHess f(p)X T_pmathcal M using is_vector.\nmode=:Default: specify the mode for the verification; the default assumption is that the retraction provided is of second order. Otherwise one can also verify the Hessian if the point p is a critical point. Then set the mode to :CriticalPoint to use gradient_descent to find a critical point. Note: this requires (and evaluates) new tangent vectors X and Y\natol, rtol: (same defaults as isapprox) tolerances that are passed down to all checks\na, b: two real values to verify linearity of the Hessian (if check_linearity=true)\nN=101: number of points to verify within the log_range default range 10^-810^0\nexactness_tol=1e-12: if all errors are below this tolerance, the verification is considered to be exact\nio=nothing: provide an IO to print the result to\ngradient=grad_f(M, p): instead of the gradient function you can also provide the gradient at p directly\nHessian=Hess_f(M, p, X): instead of the Hessian function you can provide the result of operatornameHess f(p)X directly. Note that evaluations of the Hessian might still be necessary for checking linearity and symmetry and/or when using :CriticalPoint mode.\nlimits=(1e-8,1): specify the limits in the log_range\nlog_range=range(limits[1], limits[2]; length=N): specify the range of points (in log scale) to sample the Hessian line\nN=101: number of points to use within the log_range default range 10^-810^0\nplot=false: whether to plot the resulting verification (requires Plots.jl to be loaded). The plot is in log-log-scale. 
This is returned and can then also be saved.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nslope_tol=0.1: tolerance for the slope (global) of the approximation\nerror=:none: how to handle errors, possible values: :error, :info, :warn\nwindow=nothing: specify window sizes within the log_range that are used for the slope estimation. the default is, to use all window sizes 2:N.\n\nThe kwargs... are also passed down to the check_vector and the check_gradient call, such that tolerances can easily be set.\n\nWhile check_vector is also passed to the inner call to check_gradient as well as the retraction_method, this inner check_gradient is meant to be just for inner verification, so it does not throw an error nor produce a plot itself.\n\n\n\n\n\n","category":"function"},{"location":"helpers/checks/#Manopt.check_differential","page":"Checks","title":"Manopt.check_differential","text":"check_differential(M, F, dF, p=rand(M), X=rand(M; vector_at=p); kwargs...)\n\nCheck numerically whether the differential dF(M,p,X) of F(M,p) is correct.\n\nThis implements the method described in [Bou23, Section 4.8].\n\nNote that if the errors are below the given tolerance and the method is exact, no plot is generated,\n\nKeyword arguments\n\nexactness_tol=1e-12: if all errors are below this tolerance, the differential is considered to be exact\nio=nothing: provide an IO to print the result to\nlimits=(1e-8,1): specify the limits in the log_range\nlog_range=range(limits[1], limits[2]; length=N): specify the range of points (in log scale) to sample the differential line\nN=101: number of points to verify within the log_range default range 10^-810^0\nname=\"differential\": name to display in the plot\nplot=false: whether to plot the result (if Plots.jl is loaded). The plot is in log-log-scale. 
This is returned and can then also be saved.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nslope_tol=0.1: tolerance for the slope (global) of the approximation\nthrow_error=false: throw an error message if the differential is wrong\nwindow=nothing: specify window sizes within the log_range that are used for the slope estimation. The default is, to use all window sizes 2:N.\n\n\n\n\n\n","category":"function"},{"location":"helpers/checks/#Manopt.check_gradient","page":"Checks","title":"Manopt.check_gradient","text":"check_gradient(M, f, grad_f, p=rand(M), X=rand(M; vector_at=p); kwargs...)\n\nVerify numerically whether the gradient grad_f(M,p) of f(M,p) is correct, that is whether\n\nf(operatornameretr_p(tX)) = f(p) + toperatornamegrad f(p) X + mathcal O(t^2)\n\nor in other words, that the error between the function f and its first order Taylor behaves in error mathcal O(t^2), which indicates that the gradient is correct, cf. also [Bou23, Section 4.8].\n\nNote that if the errors are below the given tolerance and the method is exact, no plot is generated.\n\nKeyword arguments\n\ncheck_vector=true: verify that operatornamegradf(p) T_pmathcal M using is_vector.\nexactness_tol=1e-12: if all errors are below this tolerance, the gradient is considered to be exact\nio=nothing: provide an IO to print the result to\ngradient=grad_f(M, p): instead of the gradient function you can also provide the gradient at p directly\nlimits=(1e-8,1): specify the limits in the log_range\nlog_range=range(limits[1], limits[2]; length=N):\nspecify the range of points (in log scale) to sample the gradient line\nN=101: number of points to verify within the log_range default range 10^-810^0\nplot=false: whether to plot the result (if Plots.jl is loaded). The plot is in log-log-scale. 
This is returned and can then also be saved.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nslope_tol=0.1: tolerance for the slope (global) of the approximation\natol, rtol: (same defaults as isapprox) tolerances that are passed down to is_vector if check_vector is set to true\nerror=:none: how to handle errors, possible values: :error, :info, :warn\nwindow=nothing: specify window sizes within the log_range that are used for the slope estimation. The default is to use all window sizes 2:N.\n\nThe remaining keyword arguments are also passed down to the check_vector call, such that tolerances can easily be set.\n\n\n\n\n\n","category":"function"},{"location":"helpers/checks/#Manopt.is_Hessian_linear","page":"Checks","title":"Manopt.is_Hessian_linear","text":"is_Hessian_linear(M, Hess_f, p,\n X=rand(M; vector_at=p), Y=rand(M; vector_at=p), a=randn(), b=randn();\n error=:none, io=nothing, kwargs...\n)\n\nVerify whether the Hessian function Hess_f fulfills linearity,\n\noperatornameHess f(p)aX + bY = aoperatornameHess f(p)X\n + boperatornameHess f(p)Y\n\nwhich is checked using isapprox and the keyword arguments are passed to this function.\n\nOptional arguments\n\nerror=:none: how to handle errors, possible values: :error, :info, :warn\n\n\n\n\n\n","category":"function"},{"location":"helpers/checks/#Manopt.is_Hessian_symmetric","page":"Checks","title":"Manopt.is_Hessian_symmetric","text":"is_Hessian_symmetric(M, Hess_f, p=rand(M), X=rand(M; vector_at=p), Y=rand(M; vector_at=p);\nerror=:none, io=nothing, atol::Real=0, rtol::Real=atol>0 ? 0 : √eps\n\n)\n\nVerify whether the Hessian function Hess_f fulfills symmetry, which means that\n\noperatornameHess f(p)X Y = X operatornameHess f(p)Y\n\nwhich is checked using isapprox and the kwargs... 
are passed to this function.\n\nOptional arguments\n\natol, rtol: with the same defaults as the usual isapprox\nerror=:none: how to handle errors, possible values: :error, :info, :warn\n\n\n\n\n\n","category":"function"},{"location":"helpers/checks/#Literature","page":"Checks","title":"Literature","text":"","category":"section"},{"location":"helpers/checks/","page":"Checks","title":"Checks","text":"N. Boumal. An Introduction to Optimization on Smooth Manifolds. First Edition (Cambridge University Press, 2023).\n\n\n\n","category":"page"},{"location":"solvers/difference_of_convex/#Difference-of-convex","page":"Difference of Convex","title":"Difference of convex","text":"","category":"section"},{"location":"solvers/difference_of_convex/","page":"Difference of Convex","title":"Difference of Convex","text":"CurrentModule = Manopt","category":"page"},{"location":"solvers/difference_of_convex/#solver-difference-of-convex","page":"Difference of Convex","title":"Difference of convex algorithm","text":"","category":"section"},{"location":"solvers/difference_of_convex/","page":"Difference of Convex","title":"Difference of Convex","text":"difference_of_convex_algorithm\ndifference_of_convex_algorithm!","category":"page"},{"location":"solvers/difference_of_convex/#Manopt.difference_of_convex_algorithm","page":"Difference of Convex","title":"Manopt.difference_of_convex_algorithm","text":"difference_of_convex_algorithm(M, f, g, ∂h, p=rand(M); kwargs...)\ndifference_of_convex_algorithm(M, mdco, p; kwargs...)\ndifference_of_convex_algorithm!(M, f, g, ∂h, p; kwargs...)\ndifference_of_convex_algorithm!(M, mdco, p; kwargs...)\n\nCompute the difference of convex algorithm [BFSS23] to minimize\n\n operatornameargmin_pmathcal M g(p) - h(p)\n\nwhere you need to provide f(p) = g(p) - h(p), g and the subdifferential ∂h of h.\n\nThis algorithm performs the following steps given a start point p= p^(0). 
Then repeat for k=01\n\nTake X^(k) h(p^(k))\nSet the next iterate to the solution of the subproblem\n\n p^(k+1) operatornameargmin_q mathcal M g(q) - X^(k) log_p^(k)q\n\nuntil the stopping criterion (see the stopping_criterion keyword) is fulfilled.\n\nKeyword arguments\n\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\ngradient=nothing: specify operatornamegrad f, for debug / analysis or enhancing the stopping_criterion=\ngrad_g=nothing: specify the gradient of g. If specified, a subsolver is automatically set up.\nstopping_criterion=StopAfterIteration(200)|StopWhenChangeLess(1e-8): a functor indicating that the stopping criterion is fulfilled\ng=nothing: specify the function g. If specified, a subsolver is automatically set up.\nsub_cost=LinearizedDCCost(g, p, initial_vector): a cost to be used within the default sub_problem. This is used to define the sub_objective= keyword and has hence no effect, if you set sub_objective directly.\nsub_grad=LinearizedDCGrad(grad_g, p, initial_vector; evaluation=evaluation): gradient to be used within the default sub_problem. This is used to define the sub_objective= keyword and has hence no effect, if you set sub_objective directly.\nsub_hess: (a finite difference approximation using sub_grad by default): specify a Hessian of the sub_cost, which the default solver, see sub_state=, needs. This is used to define the sub_objective= keyword and has hence no effect, if you set sub_objective directly.\nsub_kwargs=(;): a named tuple of keyword arguments that are passed to decorate_objective! of the sub solver's objective, the decorate_state! 
of the sub solver's state, and the sub state constructor itself.\nsub_objective: a gradient or Hessian objective based on sub_cost=, sub_grad=, and sub_hess= if provided; the objective used within sub_problem. This is used to define the sub_problem= keyword and has hence no effect, if you set sub_problem directly.\nsub_state=(GradientDescentState or TrustRegionsState if sub_hessian is provided): a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.\nsub_problem=DefaultManoptProblem(M, sub_objective): specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_stopping_criterion=StopAfterIteration(300)|StopWhenStepsizeLess(1e-9)|StopWhenGradientNormLess(1e-9): a stopping criterion used within the default sub_state=. This is used to define the sub_state= keyword and has hence no effect, if you set sub_state directly.\nsub_stepsize=ArmijoLinesearch(M): specify a step size used within the sub_state. This is used to define the sub_state= keyword and has hence no effect, if you set sub_state directly.\nX=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal M to specify the representation of a tangent vector\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. 
To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/difference_of_convex/#Manopt.difference_of_convex_algorithm!","page":"Difference of Convex","title":"Manopt.difference_of_convex_algorithm!","text":"difference_of_convex_algorithm(M, f, g, ∂h, p=rand(M); kwargs...)\ndifference_of_convex_algorithm(M, mdco, p; kwargs...)\ndifference_of_convex_algorithm!(M, f, g, ∂h, p; kwargs...)\ndifference_of_convex_algorithm!(M, mdco, p; kwargs...)\n\nCompute the difference of convex algorithm [BFSS23] to minimize\n\n operatornameargmin_pmathcal M g(p) - h(p)\n\nwhere you need to provide f(p) = g(p) - h(p), g and the subdifferential ∂h of h.\n\nThis algorithm performs the following steps given a start point p= p^(0). Then repeat for k=01\n\nTake X^(k) h(p^(k))\nSet the next iterate to the solution of the subproblem\n\n p^(k+1) operatornameargmin_q mathcal M g(q) - X^(k) log_p^(k)q\n\nuntil the stopping criterion (see the stopping_criterion keyword) is fulfilled.\n\nKeyword arguments\n\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\ngradient=nothing: specify operatornamegrad f, for debug / analysis or enhancing the stopping_criterion=\ngrad_g=nothing: specify the gradient of g. If specified, a subsolver is automatically set up.\nstopping_criterion=StopAfterIteration(200)|StopWhenChangeLess(1e-8): a functor indicating that the stopping criterion is fulfilled\ng=nothing: specify the function g. If specified, a subsolver is automatically set up.\nsub_cost=LinearizedDCCost(g, p, initial_vector): a cost to be used within the default sub_problem. 
This is used to define the sub_objective= keyword and has hence no effect, if you set sub_objective directly.\nsub_grad=LinearizedDCGrad(grad_g, p, initial_vector; evaluation=evaluation): gradient to be used within the default sub_problem. This is used to define the sub_objective= keyword and has hence no effect, if you set sub_objective directly.\nsub_hess: (a finite difference approximation using sub_grad by default): specify a Hessian of the sub_cost, which the default solver, see sub_state=, needs. This is used to define the sub_objective= keyword and has hence no effect, if you set sub_objective directly.\nsub_kwargs=(;): a named tuple of keyword arguments that are passed to decorate_objective! of the sub solver's objective, the decorate_state! of the sub solver's state, and the sub state constructor itself.\nsub_objective: a gradient or Hessian objective based on sub_cost=, sub_grad=, and sub_hess= if provided; the objective used within sub_problem. This is used to define the sub_problem= keyword and has hence no effect, if you set sub_problem directly.\nsub_state=(GradientDescentState or TrustRegionsState if sub_hessian is provided): a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.\nsub_problem=DefaultManoptProblem(M, sub_objective): specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_stopping_criterion=StopAfterIteration(300)|StopWhenStepsizeLess(1e-9)|StopWhenGradientNormLess(1e-9): a stopping criterion used within the default sub_state=. This is used to define the sub_state= keyword and has hence no effect, if you set sub_state directly.\nsub_stepsize=ArmijoLinesearch(M): specify a step size used within the sub_state. 
This is used to define the sub_state= keyword and has hence no effect, if you set sub_state directly.\nX=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal Mto specify the representation of a tangent vector\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/difference_of_convex/#solver-difference-of-convex-proximal-point","page":"Difference of Convex","title":"Difference of convex proximal point","text":"","category":"section"},{"location":"solvers/difference_of_convex/","page":"Difference of Convex","title":"Difference of Convex","text":"difference_of_convex_proximal_point\ndifference_of_convex_proximal_point!","category":"page"},{"location":"solvers/difference_of_convex/#Manopt.difference_of_convex_proximal_point","page":"Difference of Convex","title":"Manopt.difference_of_convex_proximal_point","text":"difference_of_convex_proximal_point(M, grad_h, p=rand(M); kwargs...)\ndifference_of_convex_proximal_point(M, mdcpo, p=rand(M); kwargs...)\ndifference_of_convex_proximal_point!(M, grad_h, p; kwargs...)\ndifference_of_convex_proximal_point!(M, mdcpo, p; kwargs...)\n\nCompute the difference of convex proximal point algorithm [SO15] to minimize\n\n operatornameargmin_pmathcal M g(p) - h(p)\n\nwhere you have to provide the subgradient h of h and either\n\nthe proximal map operatornameprox_λg of g as a function prox_g(M, λ, p) or prox_g(M, q, λ, p)\nthe functions g and grad_g to compute the proximal map using a sub solver\nyour own sub-solver, specified by sub_problem= and sub_state=\n\nThis algorithm performs the following steps given a start point p= p^(0). 
Then repeat for k=01\n\nX^(k) operatornamegrad h(p^(k))\nq^(k) = operatornameretr_p^(k)(λ_kX^(k))\nr^(k) = operatornameprox_λ_kg(q^(k))\nX^(k) = operatornameretr^-1_p^(k)(r^(k))\nCompute a stepsize s_k and\nset p^(k+1) = operatornameretr_p^(k)(s_kX^(k)).\n\nuntil the stopping_criterion is fulfilled.\n\nSee [ACOO20] for more details on the modified variant, where steps 4-6 are slightly changed, since here the classical proximal point method for DC functions is obtained for s_k = 1 and one can hence employ the usual line search methods.\n\nKeyword arguments\n\nλ=(k -> 1/2): a function returning the sequence of prox parameters λ_k\ncost=nothing: provide the cost f, for debug reasons / analysis\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\ngradient=nothing: specify operatornamegrad f, for debug / analysis or enhancing the stopping_criterion\nprox_g=nothing: specify a proximal map for the sub problem or both of the following\ng=nothing: specify the function g.\ngrad_g=nothing: specify the gradient of g. 
If both g and grad_g are specified, a subsolver is automatically set up.\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstepsize=ConstantLength(): a functor inheriting from Stepsize to determine a step size\nstopping_criterion=StopAfterIteration(200)|StopWhenChangeLess(1e-8): a functor indicating that the stopping criterion is fulfilled. A StopWhenGradientNormLess(1e-8) is added with |, when a gradient is provided.\nsub_cost=ProximalDCCost(g, copy(M, p), λ(1)): cost to be used within the default sub_problem that is initialized as soon as g is provided. This is used to define the sub_objective= keyword and has hence no effect, if you set sub_objective directly.\nsub_grad=ProximalDCGrad(grad_g, copy(M, p), λ(1); evaluation=evaluation): gradient to be used within the default sub_problem, that is initialized as soon as grad_g is provided. This is used to define the sub_objective= keyword and has hence no effect, if you set sub_objective directly.\nsub_hess: (a finite difference approximation using sub_grad by default): specify a Hessian of the sub_cost, which the default solver (see sub_state=) needs.\nsub_kwargs=(;): a named tuple of keyword arguments that are passed to decorate_objective! of the sub solver's objective, the decorate_state! of the subsolver's state, and the sub state constructor itself.\nsub_objective: a gradient or Hessian objective based on sub_cost=, sub_grad=, and sub_hess, if provided, the objective used within sub_problem. 
This is used to define the sub_problem= keyword and has hence no effect, if you set sub_problem directly.\nsub_problem=DefaultManoptProblem(M, sub_objective): specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state=(GradientDescentState or TrustRegionsState if sub_hess is provided): a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.\nsub_stopping_criterion=StopAfterIteration(300)|StopWhenGradientNormLess(1e-8): a functor indicating that the stopping criterion is fulfilled. This is used to define the sub_state= keyword and has hence no effect, if you set sub_state directly.\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/difference_of_convex/#Manopt.difference_of_convex_proximal_point!","page":"Difference of Convex","title":"Manopt.difference_of_convex_proximal_point!","text":"difference_of_convex_proximal_point(M, grad_h, p=rand(M); kwargs...)\ndifference_of_convex_proximal_point(M, mdcpo, p=rand(M); kwargs...)\ndifference_of_convex_proximal_point!(M, grad_h, p; kwargs...)\ndifference_of_convex_proximal_point!(M, mdcpo, p; kwargs...)\n\nCompute the difference of convex proximal point algorithm [SO15] to minimize\n\n operatornameargmin_pmathcal M g(p) - h(p)\n\nwhere you have to provide the subgradient h of h and either\n\nthe proximal map operatornameprox_λg of g as a function prox_g(M, λ, p) or prox_g(M, q, λ, p)\nthe functions g and grad_g to compute the proximal map using a sub solver\nyour own sub-solver, specified by sub_problem= and sub_state=\n\nThis algorithm performs the following steps given a start point p= p^(0). 
Then repeat for k=01\n\nX^(k) operatornamegrad h(p^(k))\nq^(k) = operatornameretr_p^(k)(λ_kX^(k))\nr^(k) = operatornameprox_λ_kg(q^(k))\nX^(k) = operatornameretr^-1_p^(k)(r^(k))\nCompute a stepsize s_k and\nset p^(k+1) = operatornameretr_p^(k)(s_kX^(k)).\n\nuntil the stopping_criterion is fulfilled.\n\nSee [ACOO20] for more details on the modified variant, where steps 4-6 are slightly changed, since here the classical proximal point method for DC functions is obtained for s_k = 1 and one can hence employ the usual line search methods.\n\nKeyword arguments\n\nλ=(k -> 1/2): a function returning the sequence of prox parameters λ_k\ncost=nothing: provide the cost f, for debug reasons / analysis\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\ngradient=nothing: specify operatornamegrad f, for debug / analysis or enhancing the stopping_criterion\nprox_g=nothing: specify a proximal map for the sub problem or both of the following\ng=nothing: specify the function g.\ngrad_g=nothing: specify the gradient of g. 
If both g and grad_g are specified, a subsolver is automatically set up.\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstepsize=ConstantLength(): a functor inheriting from Stepsize to determine a step size\nstopping_criterion=StopAfterIteration(200)|StopWhenChangeLess(1e-8): a functor indicating that the stopping criterion is fulfilled. A StopWhenGradientNormLess(1e-8) is added with |, when a gradient is provided.\nsub_cost=ProximalDCCost(g, copy(M, p), λ(1)): cost to be used within the default sub_problem that is initialized as soon as g is provided. This is used to define the sub_objective= keyword and has hence no effect, if you set sub_objective directly.\nsub_grad=ProximalDCGrad(grad_g, copy(M, p), λ(1); evaluation=evaluation): gradient to be used within the default sub_problem, that is initialized as soon as grad_g is provided. This is used to define the sub_objective= keyword and has hence no effect, if you set sub_objective directly.\nsub_hess: (a finite difference approximation using sub_grad by default): specify a Hessian of the sub_cost, which the default solver (see sub_state=) needs.\nsub_kwargs=(;): a named tuple of keyword arguments that are passed to decorate_objective! of the sub solver's objective, the decorate_state! of the subsolver's state, and the sub state constructor itself.\nsub_objective: a gradient or Hessian objective based on sub_cost=, sub_grad=, and sub_hess, if provided, the objective used within sub_problem. 
This is used to define the sub_problem= keyword and has hence no effect, if you set sub_problem directly.\nsub_problem=DefaultManoptProblem(M, sub_objective): specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state=(GradientDescentState or TrustRegionsState if sub_hess is provided): a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.\nsub_stopping_criterion=StopAfterIteration(300)|StopWhenGradientNormLess(1e-8): a functor indicating that the stopping criterion is fulfilled. This is used to define the sub_state= keyword and has hence no effect, if you set sub_state directly.\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/difference_of_convex/#Solver-states","page":"Difference of Convex","title":"Solver states","text":"","category":"section"},{"location":"solvers/difference_of_convex/","page":"Difference of Convex","title":"Difference of Convex","text":"DifferenceOfConvexState\nDifferenceOfConvexProximalState","category":"page"},{"location":"solvers/difference_of_convex/#Manopt.DifferenceOfConvexState","page":"Difference of Convex","title":"Manopt.DifferenceOfConvexState","text":"DifferenceOfConvexState{Pr,St,P,T,SC<:StoppingCriterion} <:\n AbstractManoptSolverState\n\nA struct to store the current state of the difference_of_convex_algorithm. 
It comes in two forms, depending on the realisation of the subproblem.\n\nFields\n\np::P: a point on the manifold mathcal Mstoring the current iterate\nX::T: a tangent vector at the point p on the manifold mathcal Mstoring a subgradient at the current iterate\nsub_problem::Union{AbstractManoptProblem, F}: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state::Union{AbstractManoptSolverState, F}: a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.\nstop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled\n\nFor the sub task, a method to solve\n\n operatornameargmin_qmathcal M g(p) - X log_p q\n\nis needed. Besides a problem and a state, one can also provide a function and an AbstractEvaluationType, respectively, to indicate a closed form solution for the sub task.\n\nConstructors\n\nDifferenceOfConvexState(M, sub_problem, sub_state; kwargs...)\nDifferenceOfConvexState(M, sub_solver; evaluation=InplaceEvaluation(), kwargs...)\n\nGenerate the state either using a solver from Manopt, given by an AbstractManoptProblem sub_problem and an AbstractManoptSolverState sub_state, or a closed form solution sub_solver for the sub-problem, where the function is expected to be of the form (M, p, X) -> q or (M, q, p, X) -> q, where by default its AbstractEvaluationType evaluation is in-place of q. 
Here, the current iterate p and the subgradient X of h are passed to that function.\n\nFurther keyword arguments\n\np=rand(M): a point on the manifold mathcal Mto specify the initial value\nstopping_criterion=StopAfterIteration(200): a functor indicating that the stopping criterion is fulfilled\nX=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal Mto specify the representation of a tangent vector\n\n\n\n\n\n","category":"type"},{"location":"solvers/difference_of_convex/#Manopt.DifferenceOfConvexProximalState","page":"Difference of Convex","title":"Manopt.DifferenceOfConvexProximalState","text":"DifferenceOfConvexProximalState{P, T, Pr, St, S<:Stepsize, SC<:StoppingCriterion, RTR<:AbstractRetractionMethod, ITR<:AbstractInverseRetractionMethod}\n <: AbstractSubProblemSolverState\n\nA struct to store the current state of the algorithm as well as the form. It comes in two forms, depending on the realisation of the subproblem.\n\nFields\n\ninverse_retraction_method::AbstractInverseRetractionMethod: an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nretraction_method::AbstractRetractionMethod: a retraction operatornameretr to use, see the section on retractions\np::P: a point on the manifold mathcal Mstoring the current iterate\nq::P: a point on the manifold mathcal M storing the gradient step\nr::P: a point on the manifold mathcal M storing the result of the proximal map\nstepsize::Stepsize: a functor inheriting from Stepsize to determine a step size\nstop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled\nX, Y: the current gradient and descent direction, respectively; their common type is set by the keyword X\nsub_problem::Union{AbstractManoptProblem, F}: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state::Union{AbstractManoptSolverState, F}: a state to specify the sub solver to 
use. For a closed form solution, this indicates the type of function.\n\nConstructor\n\nDifferenceOfConvexProximalState(M::AbstractManifold, sub_problem, sub_state; kwargs...)\n\nconstruct a difference of convex proximal point state\n\nDifferenceOfConvexProximalState(M::AbstractManifold, sub_problem;\n evaluation=AllocatingEvaluation(), kwargs...\n)\n\nconstruct a difference of convex proximal point state, where sub_problem is a closed form solution with evaluation as type of evaluation.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nsub_problem: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state: a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.\n\nKeyword arguments\n\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\np=rand(M): a point on the manifold mathcal Mto specify the initial value\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstepsize=ConstantLength(): a functor inheriting from Stepsize to determine a step size\nstopping_criterion=StopWhenChangeLess(1e-8): a functor indicating that the stopping criterion is fulfilled\nX=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal Mto specify the representation of a tangent vector\n\n\n\n\n\n","category":"type"},{"location":"solvers/difference_of_convex/#The-difference-of-convex-objective","page":"Difference of Convex","title":"The difference of convex objective","text":"","category":"section"},{"location":"solvers/difference_of_convex/","page":"Difference of Convex","title":"Difference of 
Convex","text":"ManifoldDifferenceOfConvexObjective","category":"page"},{"location":"solvers/difference_of_convex/#Manopt.ManifoldDifferenceOfConvexObjective","page":"Difference of Convex","title":"Manopt.ManifoldDifferenceOfConvexObjective","text":"ManifoldDifferenceOfConvexObjective{E} <: AbstractManifoldCostObjective{E}\n\nSpecify an objective for a difference_of_convex_algorithm.\n\nThe objective f mathcal M ℝ is given as\n\n f(p) = g(p) - h(p)\n\nwhere both g and h are convex, lower semicontinuous and proper. Furthermore the subdifferential h of h is required.\n\nFields\n\ncost: an implementation of f(p) = g(p)-h(p) as a function f(M,p).\n∂h!!: a deterministic version of h mathcal M Tmathcal M, in the sense that calling ∂h(M, p) returns a subgradient of h at p and if there is more than one, it returns a deterministic choice.\n\nNote that the subdifferential might be given in two possible signatures\n\n∂h(M,p) which does an AllocatingEvaluation\n∂h!(M, X, p) which does an InplaceEvaluation in place of X.\n\n\n\n\n\n","category":"type"},{"location":"solvers/difference_of_convex/","page":"Difference of Convex","title":"Difference of Convex","text":"as well as for the corresponding sub problem","category":"page"},{"location":"solvers/difference_of_convex/","page":"Difference of Convex","title":"Difference of Convex","text":"LinearizedDCCost\nLinearizedDCGrad","category":"page"},{"location":"solvers/difference_of_convex/#Manopt.LinearizedDCCost","page":"Difference of Convex","title":"Manopt.LinearizedDCCost","text":"LinearizedDCCost\n\nA functor (M,q) → ℝ to represent the inner problem of a ManifoldDifferenceOfConvexObjective. 
This is a cost function of the form\n\n F_p_kX_k(p) = g(p) - X_k log_p_kp\n\nfor a point p_k and a tangent vector X_k at p_k (for example outer iterates) that are stored within this functor as well.\n\nFields\n\ng a function\npk a point on a manifold\nXk a tangent vector at pk\n\nBoth interim values can be set using set_parameter!(::LinearizedDCCost, ::Val{:p}, p) and set_parameter!(::LinearizedDCCost, ::Val{:X}, X), respectively.\n\nConstructor\n\nLinearizedDCCost(g, p, X)\n\n\n\n\n\n","category":"type"},{"location":"solvers/difference_of_convex/#Manopt.LinearizedDCGrad","page":"Difference of Convex","title":"Manopt.LinearizedDCGrad","text":"LinearizedDCGrad\n\nA functor (M,X,p) → ℝ to represent the gradient of the inner problem of a ManifoldDifferenceOfConvexObjective. This is a gradient function of the form\n\n F_p_kX_k(p) = g(p) - X_k log_p_kp\n\nits gradient is given by using F=F_1(F_2(p)), where F_1(X) = X_kX and F_2(p) = log_p_kp and the chain rule as well as the adjoint differential of the logarithmic map with respect to its argument for D^*F_2(p)\n\n operatornamegrad F(q) = operatornamegrad f(q) - DF_2^*(q)X\n\nfor a point pk and a tangent vector Xk at pk (the outer iterates) that are stored within this functor as well\n\nFields\n\ngrad_g!! 
the gradient of g (see also LinearizedDCCost)\npk a point on a manifold\nXk a tangent vector at pk\n\nBoth interim values can be set using set_parameter!(::LinearizedDCGrad, ::Val{:p}, p) and set_parameter!(::LinearizedDCGrad, ::Val{:X}, X), respectively.\n\nConstructor\n\nLinearizedDCGrad(grad_g, p, X; evaluation=AllocatingEvaluation())\n\nWhere you specify whether grad_g is AllocatingEvaluation or InplaceEvaluation, while this function still provides both signatures.\n\n\n\n\n\n","category":"type"},{"location":"solvers/difference_of_convex/","page":"Difference of Convex","title":"Difference of Convex","text":"ManifoldDifferenceOfConvexProximalObjective","category":"page"},{"location":"solvers/difference_of_convex/#Manopt.ManifoldDifferenceOfConvexProximalObjective","page":"Difference of Convex","title":"Manopt.ManifoldDifferenceOfConvexProximalObjective","text":"ManifoldDifferenceOfConvexProximalObjective{E} <: Problem\n\nSpecify an objective difference_of_convex_proximal_point algorithm. 
The problem is of the form\n\n operatorname*argmin_pmathcal M g(p) - h(p)\n\nwhere both g and h are convex, lower semicontinuous and proper.\n\nFields\n\ncost: implementation of f(p) = g(p)-h(p)\ngradient: the gradient of the cost\ngrad_h!!: a function operatornamegradh mathcal M Tmathcal M,\n\nNote that both the gradients might be given in two possible signatures as allocating or in-place.\n\nConstructor\n\nManifoldDifferenceOfConvexProximalObjective(grad_h; cost=nothing, gradient=nothing)\n\nand note that neither cost nor gradient is required for the algorithm, just for eventual debug or stopping criteria.\n\n\n\n\n\n","category":"type"},{"location":"solvers/difference_of_convex/","page":"Difference of Convex","title":"Difference of Convex","text":"as well as for the corresponding sub problems","category":"page"},{"location":"solvers/difference_of_convex/","page":"Difference of Convex","title":"Difference of Convex","text":"ProximalDCCost\nProximalDCGrad","category":"page"},{"location":"solvers/difference_of_convex/#Manopt.ProximalDCCost","page":"Difference of Convex","title":"Manopt.ProximalDCCost","text":"ProximalDCCost\n\nA functor (M, p) → ℝ to represent the inner cost function of a ManifoldDifferenceOfConvexProximalObjective. This is the cost function of the proximal map of g.\n\n F_p_k(p) = frac12λd_mathcal M(p_kp)^2 + g(p)\n\nfor a point pk and a proximal parameter λ.\n\nFields\n\ng - a function\npk - a point on a manifold\nλ - the prox parameter\n\nBoth interim values can be set using set_parameter!(::ProximalDCCost, ::Val{:p}, p) and set_parameter!(::ProximalDCCost, ::Val{:λ}, λ), respectively.\n\nConstructor\n\nProximalDCCost(g, p, λ)\n\n\n\n\n\n","category":"type"},{"location":"solvers/difference_of_convex/#Manopt.ProximalDCGrad","page":"Difference of Convex","title":"Manopt.ProximalDCGrad","text":"ProximalDCGrad\n\nA functor (M,X,p) → ℝ to represent the gradient of the inner cost function of a ManifoldDifferenceOfConvexProximalObjective. 
This is the gradient function of the proximal map cost function of g. Based on\n\n F_p_k(p) = frac12λd_mathcal M(p_kp)^2 + g(p)\n\nit reads\n\n operatornamegrad F_p_k(p) = operatornamegrad g(p) - frac1λlog_p p_k\n\nfor a point pk and a proximal parameter λ.\n\nFields\n\ngrad_g - a gradient function\npk - a point on a manifold\nλ - the prox parameter\n\nBoth interim values can be set using set_parameter!(::ProximalDCGrad, ::Val{:p}, p) and set_parameter!(::ProximalDCGrad, ::Val{:λ}, λ), respectively.\n\nConstructor\n\nProximalDCGrad(grad_g, pk, λ; evaluation=AllocatingEvaluation())\n\nWhere you specify whether grad_g is AllocatingEvaluation or InplaceEvaluation, while this function still always provides both signatures.\n\n\n\n\n\n","category":"type"},{"location":"solvers/difference_of_convex/#Helper-functions","page":"Difference of Convex","title":"Helper functions","text":"","category":"section"},{"location":"solvers/difference_of_convex/","page":"Difference of Convex","title":"Difference of Convex","text":"get_subtrahend_gradient","category":"page"},{"location":"solvers/difference_of_convex/#Manopt.get_subtrahend_gradient","page":"Difference of Convex","title":"Manopt.get_subtrahend_gradient","text":"X = get_subtrahend_gradient(amp, q)\nget_subtrahend_gradient!(amp, X, q)\n\nEvaluate the (sub)gradient of the subtrahend h from within a ManifoldDifferenceOfConvexObjective amp at the point q (in place of X).\n\nThe evaluation is done in place of X for the !-variant. The T=AllocatingEvaluation problem might still allocate memory within. 
When the non-mutating variant is called with a T=InplaceEvaluation, memory for the result is allocated.\n\n\n\n\n\nX = get_subtrahend_gradient(M::AbstractManifold, dcpo::ManifoldDifferenceOfConvexProximalObjective, p)\nget_subtrahend_gradient!(M::AbstractManifold, X, dcpo::ManifoldDifferenceOfConvexProximalObjective, p)\n\nEvaluate the gradient of the subtrahend h from within a ManifoldDifferenceOfConvexProximalObjective dcpo at the point p (in place of X).\n\n\n\n\n\n","category":"function"},{"location":"solvers/difference_of_convex/#sec-cp-technical-details","page":"Difference of Convex","title":"Technical details","text":"","category":"section"},{"location":"solvers/difference_of_convex/","page":"Difference of Convex","title":"Difference of Convex","text":"The difference_of_convex_algorithm and difference_of_convex_proximal_point solvers require the following functions of a manifold to be available","category":"page"},{"location":"solvers/difference_of_convex/","page":"Difference of Convex","title":"Difference of Convex","text":"A retract!(M, q, p, X); it is recommended to set the default_retraction_method to a favourite retraction. If this default is set, a retraction_method= or retraction_method_dual= (for mathcal N) does not have to be specified.\nAn inverse_retract!(M, X, p, q); it is recommended to set the default_inverse_retraction_method to a favourite retraction. If this default is set, an inverse_retraction_method= or inverse_retraction_method_dual= (for mathcal N) does not have to be specified.","category":"page"},{"location":"solvers/difference_of_convex/","page":"Difference of Convex","title":"Difference of Convex","text":"By default, one of the stopping criteria is StopWhenChangeLess, which either requires","category":"page"},{"location":"solvers/difference_of_convex/","page":"Difference of Convex","title":"Difference of Convex","text":"A retract!(M, q, p, X); it is recommended to set the default_retraction_method to a favourite retraction. 
If this default is set, a retraction_method= or retraction_method_dual= (for mathcal N) does not have to be specified.\nAn inverse_retract!(M, X, p, q); it is recommended to set the default_inverse_retraction_method to a favourite retraction. If this default is set, an inverse_retraction_method= or inverse_retraction_method_dual= (for mathcal N) does not have to be specified, or the distance(M, p, q) for said default inverse retraction.\nA copyto!(M, q, p) and copy(M, p) for points.\nBy default the tangent vector storing the gradient is initialized calling zero_vector(M, p).\neverything the subsolver requires, which by default is trust_regions, or gradient_descent if you do not provide a Hessian.","category":"page"},{"location":"solvers/difference_of_convex/#Literature","page":"Difference of Convex","title":"Literature","text":"","category":"section"},{"location":"solvers/difference_of_convex/","page":"Difference of Convex","title":"Difference of Convex","text":"Y. T. Almeida, J. X. Cruz Neto, P. R. Oliveira and J. C. Oliveira Souza. A modified proximal point method for DC functions on Hadamard manifolds. Computational Optimization and Applications 76, 649–673 (2020).\n\n\n\nR. Bergmann, O. P. Ferreira, E. M. Santos and J. C. Souza. The difference of convex algorithm on Hadamard manifolds, arXiv preprint (2023).\n\n\n\nJ. C. Souza and P. R. Oliveira. A proximal point algorithm for DC functions on Hadamard manifolds. 
Journal of Global Optimization 63, 797–810 (2015).\n\n\n\n","category":"page"},{"location":"solvers/interior_point_Newton/#Interior-point-Newton-method","page":"Interior Point Newton","title":"Interior point Newton method","text":"","category":"section"},{"location":"solvers/interior_point_Newton/","page":"Interior Point Newton","title":"Interior Point Newton","text":"CurrentModule = Manopt","category":"page"},{"location":"solvers/interior_point_Newton/","page":"Interior Point Newton","title":"Interior Point Newton","text":"interior_point_Newton\ninterior_point_Newton!","category":"page"},{"location":"solvers/interior_point_Newton/#Manopt.interior_point_Newton","page":"Interior Point Newton","title":"Manopt.interior_point_Newton","text":"interior_point_Newton(M, f, grad_f, Hess_f, p=rand(M); kwargs...)\ninterior_point_Newton(M, cmo::ConstrainedManifoldObjective, p=rand(M); kwargs...)\ninterior_point_Newton!(M, f, grad_f, Hess_f, p; kwargs...)\ninterior_point_Newton!(M, cmo::ConstrainedManifoldObjective, p; kwargs...)\n\nperform the interior point Newton method following [LY24].\n\nIn order to solve the constrained problem\n\nbeginaligned\nmin_p mathcal M f(p)\ntextsubject toquadg_i(p) 0 quad text for i= 1 m\nquad h_j(p)=0 quad text for j=1n\nendaligned\n\nthis algorithm iteratively solves the linear system based on extending the KKT system by a slack variable s.\n\noperatornameJ F(p μ λ s)X Y Z W = -F(p μ λ s)\ntext where \nX T_pmathcal M YW ℝ^m Z ℝ^n\n\nsee CondensedKKTVectorFieldJacobian and CondensedKKTVectorField, respectively, for the reduced form in which this is usually solved. 
From the resulting X and Z in the reduced form, the other two, Y, W, are then computed.\n\nFrom the gradient (XYZW) at the current iterate (p μ λ s), a line search is performed using the KKTVectorFieldNormSq norm of the KKT vector field (squared) and its gradient KKTVectorFieldNormSqGradient together with the InteriorPointCentralityCondition.\n\nNote that since the vector field F includes the gradients of the constraint functions g h, its gradient or Jacobian requires the Hessians of the constraints.\n\nFor that search direction, a line search is performed that additionally ensures that the constraints are further fulfilled.\n\nInput\n\nM: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\ngrad_f: the (Riemannian) gradient operatornamegradf: \\mathcal M → T_{p}\\mathcal M of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place\nHess_f: the (Riemannian) Hessian operatornameHessf: T{p}\\mathcal M → T{p}\\mathcal M of f as a function (M, p, X) -> Y or a function (M, Y, p, X) -> Y computing Y in-place\np: a point on the manifold mathcal M\n\nor a ConstrainedManifoldObjective cmo containing f, grad_f, Hess_f, and the constraints\n\nKeyword arguments\n\nThe keyword arguments related to the constraints (the first eleven) are ignored if you pass a ConstrainedManifoldObjective cmo\n\ncentrality_condition=missing: an additional condition when to accept a step size. This can be used to ensure that the resulting iterate is still an interior point if you provide a check (N,q) -> true/false, where N is the manifold of the step_problem.\nequality_constraints=nothing: the number n of equality constraints.\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). 
Since usually the first argument is the manifold, the modified argument is the second.\ng=nothing: the inequality constraints\ngrad_g=nothing: the gradient of the inequality constraints\ngrad_h=nothing: the gradient of the equality constraints\ngradient_range=nothing: specify how gradients are represented, where nothing is equivalent to NestedPowerRepresentation\ngradient_equality_range=gradient_range: specify how the gradients of the equality constraints are represented\ngradient_inequality_range=gradient_range: specify how the gradients of the inequality constraints are represented\nh=nothing: the equality constraints\nHess_g=nothing: the Hessian of the inequality constraints\nHess_h=nothing: the Hessian of the equality constraints\ninequality_constraints=nothing: the number m of inequality constraints.\nλ=ones(length(h(M, p))): the Lagrange multiplier with respect to the equality constraints h\nμ=ones(length(g(M, p))): the Lagrange multiplier with respect to the inequality constraints g\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nρ=μ's / length(μ): store the orthogonality μ's/m to compute the barrier parameter β in the sub problem.\ns=copy(μ): initial value for the slack variables\nσ=calculate_σ(M, cmo, p, μ, λ, s): scaling factor for the barrier parameter β in the sub problem, which is updated during the iterations\nstep_objective: a ManifoldGradientObjective of the norm of the KKT vector field KKTVectorFieldNormSq and its gradient KKTVectorFieldNormSqGradient\nstep_problem: the manifold mathcal M ℝ^m ℝ^n ℝ^m together with the step_objective as the problem the linesearch stepsize= employs for determining a step size\nstep_state: the StepsizeState with point and search direction\nstepsize=ArmijoLinesearch(): a functor inheriting from Stepsize to determine a step size with the centrality_condition keyword as additional criterion to accept a step, if this is 
provided\nstopping_criterion=StopAfterIteration(200)|StopWhenKKTResidualLess(1e-8): a functor indicating whether the stopping criterion is fulfilled; by default it depends on the residual of the KKT vector field or a maximal number of steps, whichever is hit first.\nsub_kwargs=(;): keyword arguments to decorate the sub options, for example debug, that automatically respects the main solver's debug options (like sub-sampling) as well\nsub_objective: The SymmetricLinearSystemObjective modelling the system of equations to use in the sub solver; it includes the CondensedKKTVectorFieldJacobian mathcal A(X) and the CondensedKKTVectorField b in mathcal A(X) + b = 0 we aim to solve. This is used to define the sub_problem= keyword and hence has no effect if you set sub_problem directly.\nsub_stopping_criterion=StopAfterIteration(manifold_dimension(M))|StopWhenRelativeResidualLess(c,1e-8), where c = lVert b rVert from the system to solve. This is used to define the sub_state= keyword and hence has no effect if you set sub_state directly.\nsub_problem=DefaultManoptProblem(M, sub_objective): specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state=ConjugateResidualState: a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.\nvector_space=Rn: a function that, given an integer, returns the manifold to be used for the vector space components ℝ^mℝ^n\nX=zero_vector(M,p): the initial gradient with respect to p.\nY=zero(μ): the initial gradient with respect to μ\nZ=zero(λ): the initial gradient with respect to λ\nW=zero(s): the initial gradient with respect to s\n\nAs well as internal keywords used to set up these given keywords like _step_M, _step_p, _sub_M, _sub_p, and _sub_X, that should not be changed.\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! 
for the objective, respectively.\n\nnote: Note\nSetting centrality_condition=missing disables the centrality check during the line search; you can pass InteriorPointCentralityCondition(cmo, γ), where γ is a constant, to activate this check.\n\nOutput\n\nThe obtained approximate constrained minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/interior_point_Newton/#Manopt.interior_point_Newton!","page":"Interior Point Newton","title":"Manopt.interior_point_Newton!","text":"interior_point_Newton(M, f, grad_f, Hess_f, p=rand(M); kwargs...)\ninterior_point_Newton(M, cmo::ConstrainedManifoldObjective, p=rand(M); kwargs...)\ninterior_point_Newton!(M, f, grad_f, Hess_f, p; kwargs...)\ninterior_point_Newton!(M, cmo::ConstrainedManifoldObjective, p; kwargs...)\n\nperform the interior point Newton method following [LY24].\n\nIn order to solve the constrained problem\n\nbeginaligned\nmin_p mathcal M f(p)\ntextsubject toquadg_i(p) 0 quad text for i= 1 m\nquad h_j(p)=0 quad text for j=1n\nendaligned\n\nthis algorithm iteratively solves the linear system based on extending the KKT system by a slack variable s.\n\noperatornameJ F(p μ λ s)X Y Z W = -F(p μ λ s)\ntext where \nX T_pmathcal M YW ℝ^m Z ℝ^n\n\nsee CondensedKKTVectorFieldJacobian and CondensedKKTVectorField, respectively, for the reduced form in which this is usually solved. 
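A minimal call of the solver can be sketched as follows. The concrete cost, gradients, Hessians, and the single inequality constraint are illustrative assumptions, not part of this documentation; they minimize a Rayleigh quotient on the sphere while keeping the first coordinate nonnegative.

```julia
# Hypothetical sketch, assuming Manopt.jl and Manifolds.jl are installed.
using Manopt, Manifolds, LinearAlgebra

M = Sphere(2)
A = Diagonal([1.0, 2.0, 3.0])
e1 = [1.0, 0.0, 0.0]

f(M, p) = p' * A * p                              # cost: Rayleigh quotient
grad_f(M, p) = 2 * (A * p - (p' * A * p) * p)     # Riemannian gradient (embedded)
Hess_f(M, p, X) = 2 * (A * X - (p' * A * X) * p - (p' * A * p) * X)
g(M, p) = [-p[1]]                                 # inequality constraint g(p) ≤ 0
grad_g(M, p) = [-(e1 - p[1] * p)]                 # projected gradient of g₁
Hess_g(M, p, X) = [p[1] * X]                      # Hessian of the linear g₁ on the sphere

p0 = [1.0, 1.0, 1.0] / sqrt(3)                    # strictly feasible start, g(p0) < 0
q = interior_point_Newton(M, f, grad_f, Hess_f, p0; g=g, grad_g=grad_g, Hess_g=Hess_g)
```

This is a sketch under the stated assumptions, not a verbatim excerpt from the package's examples.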
From the resulting X and Z in the reduced form, the other two, Y, W, are then computed.\n\nFrom the gradient (XYZW) at the current iterate (p μ λ s), a line search is performed using KKTVectorFieldNormSq, the squared norm of the KKT vector field, and its gradient KKTVectorFieldNormSqGradient together with the InteriorPointCentralityCondition.\n\nNote that since the vector field F includes the gradients of the constraint functions g h, its gradient or Jacobian requires the Hessians of the constraints.\n\nFor that search direction a line search is performed that additionally ensures that the constraints remain fulfilled.\n\nInput\n\nM: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\ngrad_f: the (Riemannian) gradient operatornamegradf: \mathcal M → T_{p}\mathcal M of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place\nHess_f: the (Riemannian) Hessian operatornameHessf: T_{p}\mathcal M → T_{p}\mathcal M of f as a function (M, p, X) -> Y or a function (M, Y, p, X) -> Y computing Y in-place\np: a point on the manifold mathcal M\n\nor a ConstrainedManifoldObjective cmo containing f, grad_f, Hess_f, and the constraints\n\nKeyword arguments\n\nThe keyword arguments related to the constraints (the first eleven) are ignored if you pass a ConstrainedManifoldObjective cmo\n\ncentrality_condition=missing: an additional condition for accepting a step size. This can be used to ensure that the resulting iterate is still an interior point if you provide a check (N,q) -> true/false, where N is the manifold of the step_problem.\nequality_constraints=nothing: the number n of equality constraints.\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). 
Since usually the first argument is the manifold, the modified argument is the second.\ng=nothing: the inequality constraints\ngrad_g=nothing: the gradient of the inequality constraints\ngrad_h=nothing: the gradient of the equality constraints\ngradient_range=nothing: specify how gradients are represented, where nothing is equivalent to NestedPowerRepresentation\ngradient_equality_range=gradient_range: specify how the gradients of the equality constraints are represented\ngradient_inequality_range=gradient_range: specify how the gradients of the inequality constraints are represented\nh=nothing: the equality constraints\nHess_g=nothing: the Hessian of the inequality constraints\nHess_h=nothing: the Hessian of the equality constraints\ninequality_constraints=nothing: the number m of inequality constraints.\nλ=ones(length(h(M, p))): the Lagrange multiplier with respect to the equality constraints h\nμ=ones(length(g(M, p))): the Lagrange multiplier with respect to the inequality constraints g\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nρ=μ's / length(μ): store the orthogonality μ's/m to compute the barrier parameter β in the sub problem.\ns=copy(μ): initial value for the slack variables\nσ=calculate_σ(M, cmo, p, μ, λ, s): scaling factor for the barrier parameter β in the sub problem, which is updated during the iterations\nstep_objective: a ManifoldGradientObjective of the norm of the KKT vector field KKTVectorFieldNormSq and its gradient KKTVectorFieldNormSqGradient\nstep_problem: the manifold mathcal M ℝ^m ℝ^n ℝ^m together with the step_objective as the problem the linesearch stepsize= employs for determining a step size\nstep_state: the StepsizeState with point and search direction\nstepsize=ArmijoLinesearch(): a functor inheriting from Stepsize to determine a step size with the centrality_condition keyword as additional criterion to accept a step, if this is 
provided\nstopping_criterion=StopAfterIteration(200)|StopWhenKKTResidualLess(1e-8): a functor indicating whether the stopping criterion is fulfilled; by default it depends on the residual of the KKT vector field or a maximal number of steps, whichever is hit first.\nsub_kwargs=(;): keyword arguments to decorate the sub options, for example debug, that automatically respects the main solver's debug options (like sub-sampling) as well\nsub_objective: The SymmetricLinearSystemObjective modelling the system of equations to use in the sub solver; it includes the CondensedKKTVectorFieldJacobian mathcal A(X) and the CondensedKKTVectorField b in mathcal A(X) + b = 0 we aim to solve. This is used to define the sub_problem= keyword and hence has no effect if you set sub_problem directly.\nsub_stopping_criterion=StopAfterIteration(manifold_dimension(M))|StopWhenRelativeResidualLess(c,1e-8), where c = lVert b rVert from the system to solve. This is used to define the sub_state= keyword and hence has no effect if you set sub_state directly.\nsub_problem=DefaultManoptProblem(M, sub_objective): specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state=ConjugateResidualState: a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.\nvector_space=Rn: a function that, given an integer, returns the manifold to be used for the vector space components ℝ^mℝ^n\nX=zero_vector(M,p): the initial gradient with respect to p.\nY=zero(μ): the initial gradient with respect to μ\nZ=zero(λ): the initial gradient with respect to λ\nW=zero(s): the initial gradient with respect to s\n\nAs well as internal keywords used to set up these given keywords like _step_M, _step_p, _sub_M, _sub_p, and _sub_X, that should not be changed.\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! 
for the objective, respectively.\n\nnote: Note\nSetting centrality_condition=missing disables the centrality check during the line search; you can pass InteriorPointCentralityCondition(cmo, γ), where γ is a constant, to activate this check.\n\nOutput\n\nThe obtained approximate constrained minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/interior_point_Newton/#State","page":"Interior Point Newton","title":"State","text":"","category":"section"},{"location":"solvers/interior_point_Newton/","page":"Interior Point Newton","title":"Interior Point Newton","text":"InteriorPointNewtonState","category":"page"},{"location":"solvers/interior_point_Newton/#Manopt.InteriorPointNewtonState","page":"Interior Point Newton","title":"Manopt.InteriorPointNewtonState","text":"InteriorPointNewtonState{P,T} <: AbstractHessianSolverState\n\nFields\n\nλ: the Lagrange multiplier with respect to the equality constraints\nμ: the Lagrange multiplier with respect to the inequality constraints\np::P: a point on the manifold mathcal M storing the current iterate\ns: the current slack variable\nsub_problem::Union{AbstractManoptProblem, F}: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state::Union{AbstractManoptSolverState, F}: a state to specify the sub solver to use. 
For a closed form solution, this indicates the type of function.\nX: the current gradient with respect to p\nY: the current gradient with respect to μ\nZ: the current gradient with respect to λ\nW: the current gradient with respect to s\nρ: store the orthogonality μ's/m to compute the barrier parameter β in the sub problem\nσ: scaling factor for the barrier parameter β in the sub problem\nstop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled\nretraction_method::AbstractRetractionMethod: a retraction operatornameretr to use, see the section on retractions\nstepsize::Stepsize: a functor inheriting from Stepsize to determine a step size\nstep_problem: an AbstractManoptProblem storing the manifold and objective for the line search\nstep_state: storing iterate and search direction in a state for the line search, see StepsizeState\n\nConstructor\n\nInteriorPointNewtonState(\n M::AbstractManifold,\n cmo::ConstrainedManifoldObjective,\n sub_problem::Pr,\n sub_state::St;\n kwargs...\n)\n\nInitialize the state, where both the AbstractManifold and the ConstrainedManifoldObjective are used to fill in reasonable defaults for the keywords.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\ncmo: a ConstrainedManifoldObjective\nsub_problem: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state: a state to specify the sub solver to use. 
For a closed form solution, this indicates the type of function.\n\nKeyword arguments\n\nLet m and n denote the number of inequality and equality constraints, respectively\n\np=rand(M): a point on the manifold mathcal Mto specify the initial value\nμ=ones(m)\nX=zero_vector(M,p)\nY=zero(μ)\nλ=zeros(n)\nZ=zero(λ)\ns=ones(m)\nW=zero(s)\nρ=μ's/m\nσ=calculate_σ(M, cmo, p, μ, λ, s)\nstopping_criterion=StopAfterIteration(200)|StopWhenChangeLess(1e-8): a functor indicating that the stopping criterion is fulfilled\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstep_objective=ManifoldGradientObjective(KKTVectorFieldNormSq(cmo), KKTVectorFieldNormSqGradient(cmo); evaluation=InplaceEvaluation())\nvector_space=Rn: a function that, given an integer, returns the manifold to be used for the vector space components ℝ^mℝ^n\nstep_problem: wrap the manifold mathcal M ℝ^m ℝ^n ℝ^m\nstep_state: the StepsizeState with point and search direction\nstepsize=ArmijoLinesearch(): a functor inheriting from Stepsize to determine a step size with the InteriorPointCentralityCondition as additional condition to accept a step\n\nand internally _step_M and _step_p for the manifold and point in the stepsize.\n\n\n\n\n\n","category":"type"},{"location":"solvers/interior_point_Newton/#Subproblem-functions","page":"Interior Point Newton","title":"Subproblem functions","text":"","category":"section"},{"location":"solvers/interior_point_Newton/","page":"Interior Point Newton","title":"Interior Point Newton","text":"CondensedKKTVectorField\nCondensedKKTVectorFieldJacobian\nKKTVectorField\nKKTVectorFieldJacobian\nKKTVectorFieldAdjointJacobian\nKKTVectorFieldNormSq\nKKTVectorFieldNormSqGradient","category":"page"},{"location":"solvers/interior_point_Newton/#Manopt.CondensedKKTVectorField","page":"Interior Point Newton","title":"Manopt.CondensedKKTVectorField","text":"CondensedKKTVectorField{O<:ConstrainedManifoldObjective,T,R} 
<: AbstractConstrainedSlackFunctor{T,R}\n\nGiven the constrained optimization problem\n\nbeginaligned\nmin_p mathcalM f(p)\ntextsubject to g_i(p)leq 0 quad text for i= 1 m\nquad h_j(p)=0 quad text for j=1n\nendaligned\n\nThen the KKT conditions, derived from the optimality conditions of the Lagrangian\n\nmathcal L(p μ λ) = f(p) + sum_j=1^n λ_jh_j(p) + sum_i=1^m μ_ig_i(p)\n\nare reformulated in a perturbed / barrier method in a condensed form, using a slack variable s ℝ^m, a barrier parameter β, and the Riemannian gradient of the Lagrangian with respect to the first parameter, operatornamegrad_p L(p μ λ).\n\nLet mathcal N = mathcal M ℝ^n. We obtain the linear system\n\nmathcal A(pλ)XY = -b(pλ)qquad textwhere (XY) T_(pλ)mathcal N\n\nwhere mathcal A T_(pλ)mathcal N T_(pλ)mathcal N is a linear operator and this struct models the right-hand side b(pλ) T_(pλ)mathcal M given by\n\nb(pλ) = beginpmatrix\noperatornamegrad f(p)\n+ displaystylesum_j=1^n λ_j operatornamegrad h_j(p)\n+ displaystylesum_i=1^m μ_i operatornamegrad g_i(p)\n+ displaystylesum_i=1^m fracμ_is_ibigl(\n μ_i(g_i(p)+s_i) + β - μ_is_i\nbigr)operatornamegrad g_i(p)\nh(p)\nendpmatrix\n\nFields\n\ncmo the ConstrainedManifoldObjective\nμ::T the vector in ℝ^m of coefficients for the inequality constraints\ns::T the vector in ℝ^m of slack variables\nβ::R the barrier parameter βℝ\n\nConstructor\n\nCondensedKKTVectorField(cmo, μ, s, β)\n\n\n\n\n\n","category":"type"},{"location":"solvers/interior_point_Newton/#Manopt.CondensedKKTVectorFieldJacobian","page":"Interior Point Newton","title":"Manopt.CondensedKKTVectorFieldJacobian","text":"CondensedKKTVectorFieldJacobian{O<:ConstrainedManifoldObjective,T,R} <: AbstractConstrainedSlackFunctor{T,R}\n\nGiven the constrained optimization problem\n\nbeginaligned\nmin_p mathcalM f(p)\ntextsubject to g_i(p)leq 0 quad text for i= 1 m\nquad h_j(p)=0 quad text for j=1n\nendaligned\n\nwe reformulate the KKT conditions of the Lagrangian from the optimality conditions of the 
Lagrangian\n\nmathcal L(p μ λ) = f(p) + sum_j=1^n λ_jh_j(p) + sum_i=1^m μ_ig_i(p)\n\nin a perturbed / barrier method in a condensed form, using operatornamegrad_p L(p μ λ), the Riemannian gradient of the Lagrangian with respect to the first parameter.\n\nLet mathcal N = mathcal M ℝ^n. We obtain the linear system\n\nmathcal A(pλ)XY = -b(pλ)qquad textwhere X T_pmathcal M Y ℝ^n\n\nwhere mathcal A T_(pλ)mathcal N T_(pλ)mathcal N is a linear operator on T_(pλ)mathcal N = T_pmathcal M ℝ^n given by\n\nmathcal A(pλ)XY = beginpmatrix\noperatornameHess_pmathcal L(p μ λ)X\n+ displaystylesum_i=1^m fracμ_is_ioperatornamegrad g_i(p) Xoperatornamegrad g_i(p)\n+ displaystylesum_j=1^n Y_j operatornamegrad h_j(p)\n\nBigl( operatornamegrad h_j(p) X Bigr)_j=1^n\nendpmatrix\n\nFields\n\ncmo the ConstrainedManifoldObjective\nμ::V the vector in ℝ^m of coefficients for the inequality constraints\ns::V the vector in ℝ^m of slack variables\nβ::R the barrier parameter βℝ\n\nConstructor\n\nCondensedKKTVectorFieldJacobian(cmo, μ, s, β)\n\n\n\n\n\n","category":"type"},{"location":"solvers/interior_point_Newton/#Manopt.KKTVectorField","page":"Interior Point Newton","title":"Manopt.KKTVectorField","text":"KKTVectorField{O<:ConstrainedManifoldObjective}\n\nImplement the vector field F of the KKT conditions, including a slack variable for the inequality constraints.\n\nGiven the LagrangianCost\n\nmathcal L(p μ λ) = f(p) + sum_i=1^m μ_ig_i(p) + sum_j=1^n λ_jh_j(p)\n\nthe LagrangianGradient\n\noperatornamegradmathcal L(p μ λ) = operatornamegradf(p) + sum_j=1^n λ_j operatornamegrad h_j(p) + sum_i=1^m μ_i operatornamegrad g_i(p)\n\nand introducing the slack variables s=-g(p) ℝ^m the vector field is given by\n\nF(p μ λ s) = beginpmatrix\noperatornamegrad_p mathcal L(p μ λ)\ng(p) + s\nh(p)\nμ ⊙ s\nendpmatrix text where p in mathcal M μ s in ℝ^mtext and λ in ℝ^n\n\nwhere ⊙ denotes the Hadamard (or elementwise) product\n\nFields\n\ncmo the ConstrainedManifoldObjective\n\nWhile the point p is arbitrary 
and usually not needed, it serves as internal memory in the computations. Furthermore, both fields together also clarify the product manifold structure to use.\n\nConstructor\n\nKKTVectorField(cmo::ConstrainedManifoldObjective)\n\nExample\n\nDefine F = KKTVectorField(cmo) for some ConstrainedManifoldObjective cmo and let N be the product manifold of mathcal Mℝ^mℝ^nℝ^m. Then, you can call this cost as F(N, q) or as the in-place variant F(N, Y, q), where q is a point on N and Y is a tangent vector at q for the result.\n\n\n\n\n\n","category":"type"},{"location":"solvers/interior_point_Newton/#Manopt.KKTVectorFieldJacobian","page":"Interior Point Newton","title":"Manopt.KKTVectorFieldJacobian","text":"KKTVectorFieldJacobian{O<:ConstrainedManifoldObjective}\n\nImplement the Jacobian of the vector field F of the KKT conditions, including a slack variable for the inequality constraints, see KKTVectorField and KKTVectorFieldAdjointJacobian.\n\noperatornameJ F(p μ λ s)X Y Z W = beginpmatrix\n operatornameHess_p mathcal L(p μ λ)X + displaystylesum_i=1^m Y_i operatornamegrad g_i(p) + displaystylesum_j=1^n Z_j operatornamegrad h_j(p)\n Bigl( operatornamegrad g_i(p) X + W_iBigr)_i=1^m\n Bigl( operatornamegrad h_j(p) X Bigr)_j=1^n\n μ ⊙ W + s ⊙ Y\nendpmatrix\n\nwhere ⊙ denotes the Hadamard (or elementwise) product\n\nSee also the LagrangianHessian operatornameHess_p mathcal L(p μ λ)X.\n\nFields\n\ncmo the ConstrainedManifoldObjective\n\nConstructor\n\nKKTVectorFieldJacobian(cmo::ConstrainedManifoldObjective)\n\nGenerate the Jacobian of the KKT vector field related to some ConstrainedManifoldObjective cmo.\n\nExample\n\nDefine JF = KKTVectorFieldJacobian(cmo) for some ConstrainedManifoldObjective cmo and let N be the product manifold of mathcal Mℝ^mℝ^nℝ^m. 
Then, you can call this Jacobian as JF(N, q, Y) or as the in-place variant JF(N, Z, q, Y), where q is a point on N and Y and Z are tangent vectors at q.\n\n\n\n\n\n","category":"type"},{"location":"solvers/interior_point_Newton/#Manopt.KKTVectorFieldAdjointJacobian","page":"Interior Point Newton","title":"Manopt.KKTVectorFieldAdjointJacobian","text":"KKTVectorFieldAdjointJacobian{O<:ConstrainedManifoldObjective}\n\nImplement the adjoint of the Jacobian of the vector field F of the KKT conditions, including a slack variable for the inequality constraints, see KKTVectorField and KKTVectorFieldJacobian.\n\noperatornameJ^* F(p μ λ s)X Y Z W = beginpmatrix\n operatornameHess_p mathcal L(p μ λ)X + displaystylesum_i=1^m Y_i operatornamegrad g_i(p) + displaystylesum_j=1^n Z_j operatornamegrad h_j(p)\n Bigl( operatornamegrad g_i(p) X + s_iW_iBigr)_i=1^m\n Bigl( operatornamegrad h_j(p) X Bigr)_j=1^n\n μ ⊙ W + Y\nendpmatrix\n\nwhere ⊙ denotes the Hadamard (or elementwise) product\n\nSee also the LagrangianHessian operatornameHess_p mathcal L(p μ λ)X.\n\nFields\n\ncmo the ConstrainedManifoldObjective\n\nConstructor\n\nKKTVectorFieldAdjointJacobian(cmo::ConstrainedManifoldObjective)\n\nGenerate the adjoint Jacobian of the KKT vector field related to some ConstrainedManifoldObjective cmo.\n\nExample\n\nDefine AdJF = KKTVectorFieldAdjointJacobian(cmo) for some ConstrainedManifoldObjective cmo and let N be the product manifold of mathcal Mℝ^mℝ^nℝ^m. 
Then, you can call this adjoint Jacobian as AdJF(N, q, Y) or as the in-place variant AdJF(N, Z, q, Y), where q is a point on N and Y and Z are tangent vectors at q.\n\n\n\n\n\n","category":"type"},{"location":"solvers/interior_point_Newton/#Manopt.KKTVectorFieldNormSq","page":"Interior Point Newton","title":"Manopt.KKTVectorFieldNormSq","text":"KKTVectorFieldNormSq{O<:ConstrainedManifoldObjective}\n\nImplement the squared norm of the vector field F of the KKT conditions, including a slack variable for the inequality constraints; see KKTVectorField, to which this functor applies the norm. In [LY24] this is called the merit function.\n\nFields\n\ncmo the ConstrainedManifoldObjective\n\nConstructor\n\nKKTVectorFieldNormSq(cmo::ConstrainedManifoldObjective)\n\nExample\n\nDefine f = KKTVectorFieldNormSq(cmo) for some ConstrainedManifoldObjective cmo and let N be the product manifold of mathcal Mℝ^mℝ^nℝ^m. Then, you can call this cost as f(N, q), where q is a point on N.\n\n\n\n\n\n","category":"type"},{"location":"solvers/interior_point_Newton/#Manopt.KKTVectorFieldNormSqGradient","page":"Interior Point Newton","title":"Manopt.KKTVectorFieldNormSqGradient","text":"KKTVectorFieldNormSqGradient{O<:ConstrainedManifoldObjective}\n\nCompute the gradient of the KKTVectorFieldNormSq φ(pμλs) = lVert F(pμλs)rVert^2, that is of the norm squared of the KKTVectorField F.\n\nThis is given in [LY24] as the gradient of their merit function, which we can write with the adjoint J^* of the Jacobian\n\noperatornamegrad φ = 2operatornameJ^* F(p μ λ s)F(p μ λ s)\n\nand hence is computed with KKTVectorFieldAdjointJacobian and KKTVectorField.\n\nFor completeness, the gradient reads, using the LagrangianGradient L = operatornamegrad_p mathcal L(pμλ) T_pmathcal M, for a shorthand of the first component of F, as\n\noperatornamegrad φ\n=\n2 beginpmatrix\noperatornamegrad_p mathcal L(pμλ)L + (g_i(p) + s_i)operatornamegrad g_i(p) + h_j(p)operatornamegrad h_j(p)\n Bigl( operatornamegrad g_i(p) L + 
s_iBigr)_i=1^m + μ s s\n Bigl( operatornamegrad h_j(p) L Bigr)_j=1^n\n g + s + μ μ s\nendpmatrix\n\nwhere ⊙ denotes the Hadamard (or elementwise) product.\n\nFields\n\ncmo the ConstrainedManifoldObjective\n\nConstructor\n\nKKTVectorFieldNormSqGradient(cmo::ConstrainedManifoldObjective)\n\nExample\n\nDefine grad_f = KKTVectorFieldNormSqGradient(cmo) for some ConstrainedManifoldObjective cmo and let N be the product manifold of mathcal Mℝ^mℝ^nℝ^m. Then, you can call this cost as grad_f(N, q) or as the in-place variant grad_f(N, Y, q), where q is a point on N and Y is a tangent vector at q in which the resulting gradient is returned.\n\n\n\n\n\n","category":"type"},{"location":"solvers/interior_point_Newton/#Helpers","page":"Interior Point Newton","title":"Helpers","text":"","category":"section"},{"location":"solvers/interior_point_Newton/","page":"Interior Point Newton","title":"Interior Point Newton","text":"InteriorPointCentralityCondition\nManopt.calculate_σ","category":"page"},{"location":"solvers/interior_point_Newton/#Manopt.InteriorPointCentralityCondition","page":"Interior Point Newton","title":"Manopt.InteriorPointCentralityCondition","text":"InteriorPointCentralityCondition{CO,R}\n\nA functor to check the centrality condition.\n\nIn order to obtain a step in the linesearch performed within the interior_point_Newton, Section 6 of [LY24] proposes the following additional conditions, inspired by the Euclidean case described in Section 6 of [ETTZ96]:\n\nFor a given ConstrainedManifoldObjective, consider the KKTVectorField F; that is, we are at a point q = (p λ μ s) on mathcal M ℝ^m ℝ^n ℝ^m and a search direction V = (X Y Z W).\n\nThen, let\n\nτ_1 = fracmmin μ sμ^mathrmTs\nquadtext and quad\nτ_2 = fracμ^mathrmTslVert F(q) rVert\n\nwhere ⊙ denotes the Hadamard (or elementwise) product.\n\nFor a new candidate q(α) = bigl(p(α) λ(α) μ(α) s(α)bigr) = (operatornameretr_p(αX) λ+αY μ+αZ s+αW), we then define two functions\n\nc_1(α) = min μ(α) s(α) - fracγτ_1 
μ(α)^mathrmTs(α)m\nquadtext and quad\nc_2(α) = μ(α)^mathrmTs(α) γτ_2 lVert F(q(α)) rVert\n\nWhile the paper now states that the (Armijo) linesearch starts at a point tilde α, it is easier to include the condition that c_1(α) 0 and c_2(α) 0 into the linesearch as well.\n\nThe functor InteriorPointCentralityCondition(cmo, γ, μ, s, normKKT)(N, q(α)) defined here evaluates this condition and returns true if both c_1 and c_2 are nonnegative.\n\nFields\n\ncmo: a ConstrainedManifoldObjective\nγ: a constant\nτ1, τ2: the constants given in the formula.\n\nConstructor\n\nInteriorPointCentralityCondition(cmo, γ)\nInteriorPointCentralityCondition(cmo, γ, τ1, τ2)\n\nInitialise the centrality conditions. The parameters τ1, τ2 are initialised to zero if not provided.\n\nnote: Note\nBesides get_parameter for all three constants, and set_parameter! for γ, to update τ_1 and τ_2, call set_parameter!(ipcc, :τ, N, q) to update both τ_1 and τ_2 according to the formulae above.\n\n\n\n\n\n","category":"type"},{"location":"solvers/interior_point_Newton/#Manopt.calculate_σ","page":"Interior Point Newton","title":"Manopt.calculate_σ","text":"calculate_σ(M, cmo, p, μ, λ, s; kwargs...)\n\nCompute the new σ factor for the barrier parameter in interior_point_Newton as\n\nminfrac12 lVert F(p μ λ s)rVert^frac12 \n\nwhere F is the KKT vector field, hence the KKTVectorFieldNormSq is used.\n\nKeyword arguments\n\nvector_space=Rn: a function that, given an integer, returns the manifold to be used for the vector space components ℝ^mℝ^n\nN: the manifold mathcal M ℝ^m ℝ^n ℝ^m the vector field lives on (generated using vector_space)\nq: provide memory on N for interim evaluations of the vector field\n\n\n\n\n\n","category":"function"},{"location":"solvers/interior_point_Newton/#Additional-stopping-criteria","page":"Interior Point Newton","title":"Additional stopping criteria","text":"","category":"section"},{"location":"solvers/interior_point_Newton/","page":"Interior Point Newton","title":"Interior Point 
Newton","text":"StopWhenKKTResidualLess","category":"page"},{"location":"solvers/interior_point_Newton/#Manopt.StopWhenKKTResidualLess","page":"Interior Point Newton","title":"Manopt.StopWhenKKTResidualLess","text":"StopWhenKKTResidualLess <: StoppingCriterion\n\nStop when the KKT residual\n\nr^2\n= \lVert \operatorname{grad}_p \mathcal L(p, μ, λ) \rVert^2\n+ \sum_{i=1}^m \bigl( [μ_i]_{-}^2 + [g_i(p)]_{+}^2 + \lvert μ_i g_i(p)\rvert^2 \bigr)\n+ \sum_{j=1}^n \lvert h_j(p)\rvert^2.\n\nis less than a given threshold ε, that is r < ε. We use v_+ = max{0, v} and v_- = min{0, v} for the positive and negative parts of v, respectively.\n\nFields\n\nε: a threshold\nresidual: store the last residual if the stopping criterion is hit.\nat_iteration: the iteration at which the stopping criterion was fulfilled\n\n\n\n\n\n","category":"type"},{"location":"solvers/interior_point_Newton/#References","page":"Interior Point Newton","title":"References","text":"","category":"section"},{"location":"solvers/interior_point_Newton/","page":"Interior Point Newton","title":"Interior Point Newton","text":"A. S. El-Bakry, R. A. Tapia, T. Tsuchiya and Y. Zhang. On the formulation and theory of the Newton interior-point method for nonlinear programming. Journal of Optimization Theory and Applications 89, 507–541 (1996).\n\n\n\nZ. Lai and A. Yoshise. Riemannian Interior Point Methods for Constrained Optimization on Manifolds. 
Journal of Optimization Theory and Applications 201, 433–469 (2024), arXiv:2203.09762.\n\n\n\n","category":"page"},{"location":"solvers/primal_dual_semismooth_Newton/#solver-pdrssn","page":"Primal-dual Riemannian semismooth Newton","title":"Primal-dual Riemannian semismooth Newton algorithm","text":"","category":"section"},{"location":"solvers/primal_dual_semismooth_Newton/","page":"Primal-dual Riemannian semismooth Newton","title":"Primal-dual Riemannian semismooth Newton","text":"The Primal-dual Riemannian semismooth Newton Algorithm is a second-order method derived from the ChambollePock algorithm.","category":"page"},{"location":"solvers/primal_dual_semismooth_Newton/","page":"Primal-dual Riemannian semismooth Newton","title":"Primal-dual Riemannian semismooth Newton","text":"The aim is to solve an optimization problem on a manifold with a cost function of the form","category":"page"},{"location":"solvers/primal_dual_semismooth_Newton/","page":"Primal-dual Riemannian semismooth Newton","title":"Primal-dual Riemannian semismooth Newton","text":"F(p) + G(Λ(p))","category":"page"},{"location":"solvers/primal_dual_semismooth_Newton/","page":"Primal-dual Riemannian semismooth Newton","title":"Primal-dual Riemannian semismooth Newton","text":"where Fmathcal M overlineℝ, Gmathcal N overlineℝ, and Λmathcal M mathcal N. If the manifolds mathcal M or mathcal N are not Hadamard, the problem has to be considered only locally, that is, on geodesically convex sets mathcal C subset mathcal M and mathcal D subsetmathcal N such that Λ(mathcal C) subset mathcal D.","category":"page"},{"location":"solvers/primal_dual_semismooth_Newton/","page":"Primal-dual Riemannian semismooth Newton","title":"Primal-dual Riemannian semismooth Newton","text":"The algorithm comes down to applying the Riemannian semismooth Newton method to the rewritten primal-dual optimality conditions. 
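The core iteration pattern of a semismooth Newton method (pick an element of the Clarke generalized derivative, solve the resulting linear system, update) can be illustrated on a scalar toy equation; this sketch is purely illustrative and not the manifold algorithm itself:

```julia
# Toy scalar illustration: solve the nonsmooth equation X(x) = max(x, 0) + x - 1 = 0.
X(x) = max(x, 0.0) + x - 1.0
clarke(x) = (x > 0 ? 1.0 : 0.0) + 1.0   # one element of the Clarke generalized derivative

function semismooth_newton(x0; tol=1e-12, maxiter=20)
    x = x0
    for _ in 1:maxiter
        V = clarke(x)      # 1. choose an element of the generalized derivative
        x -= X(x) / V      # 2. solve V d = -X(x) and update x ← x + d
        abs(X(x)) < tol && break
    end
    return x
end

semismooth_newton(2.0)  # converges to 0.5, the zero of X
```

In the manifold algorithm the scalar division becomes a linear solve on the tangent space and the additive update becomes an exponential map or retraction.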
Define the vector field X mathcalM times mathcalT_n^* mathcalN rightarrow mathcalT mathcalM times mathcalT_n^* mathcalN as","category":"page"},{"location":"solvers/primal_dual_semismooth_Newton/","page":"Primal-dual Riemannian semismooth Newton","title":"Primal-dual Riemannian semismooth Newton","text":"Xleft(p xi_nright)=left(beginarrayc\n-log _p operatornameprox_sigma Fleft(exp _pleft(mathcalP_p leftarrow mleft(-sigmaleft(D_m Lambdaright)^*leftmathcalP_Lambda(m) leftarrow n xi_nrightright)^sharpright)right) \nxi_n-operatornameprox_tau G_n^*left(xi_n+tauleft(mathcalP_n leftarrow Lambda(m) D_m Lambdaleftlog _m prightright)^flatright)\nendarrayright)","category":"page"},{"location":"solvers/primal_dual_semismooth_Newton/","page":"Primal-dual Riemannian semismooth Newton","title":"Primal-dual Riemannian semismooth Newton","text":"and solve for X(pξ_n)=0.","category":"page"},{"location":"solvers/primal_dual_semismooth_Newton/","page":"Primal-dual Riemannian semismooth Newton","title":"Primal-dual Riemannian semismooth Newton","text":"Given base points mmathcal C, n=Λ(m)mathcal D, initial primal and dual values p^(0) mathcal C, ξ_n^(0) mathcal T_n^*mathcal N, and primal and dual step sizes sigma, tau.","category":"page"},{"location":"solvers/primal_dual_semismooth_Newton/","page":"Primal-dual Riemannian semismooth Newton","title":"Primal-dual Riemannian semismooth Newton","text":"The algorithm performs the steps k=1,2,… (until a StoppingCriterion is reached)","category":"page"},{"location":"solvers/primal_dual_semismooth_Newton/","page":"Primal-dual Riemannian semismooth Newton","title":"Primal-dual Riemannian semismooth Newton","text":"Choose any element\nV^(k) _C X(p^(k)ξ_n^(k))\nof the Clarke generalized covariant derivative\nSolve\nV^(k) (d_p^(k) d_n^(k)) = - X(p^(k)ξ_n^(k))\nin the vector space mathcalT_p^(k) mathcalM times mathcalT_n^* mathcalN\nUpdate\np^(k+1) = exp_p^(k)(d_p^(k))\nand\nξ_n^(k+1) = ξ_n^(k) + 
d_n^(k)","category":"page"},{"location":"solvers/primal_dual_semismooth_Newton/","page":"Primal-dual Riemannian semismooth Newton","title":"Primal-dual Riemannian semismooth Newton","text":"Furthermore, you can replace the exponential map, the logarithmic map, and the parallel transport with a retraction, an inverse retraction, and a vector transport, respectively.","category":"page"},{"location":"solvers/primal_dual_semismooth_Newton/","page":"Primal-dual Riemannian semismooth Newton","title":"Primal-dual Riemannian semismooth Newton","text":"Finally, you can also update the base points m and n during the iterations. This introduces a few additional vector transports. The same holds for the case that Λ(m^(k))neq n^(k) at some point. All these cases are covered in the algorithm.","category":"page"},{"location":"solvers/primal_dual_semismooth_Newton/","page":"Primal-dual Riemannian semismooth Newton","title":"Primal-dual Riemannian semismooth Newton","text":"CurrentModule = Manopt","category":"page"},{"location":"solvers/primal_dual_semismooth_Newton/","page":"Primal-dual Riemannian semismooth Newton","title":"Primal-dual Riemannian semismooth Newton","text":"primal_dual_semismooth_Newton\nprimal_dual_semismooth_Newton!","category":"page"},{"location":"solvers/primal_dual_semismooth_Newton/#Manopt.primal_dual_semismooth_Newton","page":"Primal-dual Riemannian semismooth Newton","title":"Manopt.primal_dual_semismooth_Newton","text":"primal_dual_semismooth_Newton(M, N, cost, p, X, m, n, prox_F, diff_prox_F, prox_G_dual, diff_prox_dual_G, linearized_operator, adjoint_linearized_operator)\n\nPerform the Primal-Dual Riemannian semismooth Newton algorithm.\n\nGiven a cost function mathcal E mathcal M overlineℝ of the form\n\nmathcal E(p) = F(p) + G( Λ(p) )\n\nwhere F mathcal M overlineℝ, G mathcal N overlineℝ, and Λ mathcal M mathcal N. 
The remaining input parameters are\n\np, X: primal and dual start points pmathcal M and X T_nmathcal N\nm, n: base points on mathcal M and mathcal N, respectively.\nlinearized_forward_operator: the linearization DΛ() of the operator Λ().\nadjoint_linearized_operator: the adjoint DΛ^* of the linearized operator DΛ(m) T_mmathcal M T_Λ(m)mathcal N\nprox_F, prox_G_dual: the proximal maps of F and G^ast_n\ndiff_prox_F, diff_prox_dual_G: the (Clarke generalized) differentials of the proximal maps of F and G^ast_n\n\nFor more details on the algorithm, see [DL21].\n\nKeyword arguments\n\ndual_stepsize=1/sqrt(8): proximal parameter of the dual prox\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nΛ=missing: the exact operator, which is required if Λ(m)=n does not hold; missing indicates that the forward operator is exact.\nprimal_stepsize=1/sqrt(8): proximal parameter of the primal prox\nreg_param=1e-5: regularisation parameter for the Newton matrix. Note that this changes the arguments with which the forward_operator is called.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstopping_criterion=StopAfterIteration(50): a functor indicating that the stopping criterion is fulfilled\nupdate_primal_base=missing: function to update m (identity by default/missing)\nupdate_dual_base=missing: function to update n (identity by default/missing)\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector 
transport mathcal T_ to use, see the section on vector transports\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/primal_dual_semismooth_Newton/#Manopt.primal_dual_semismooth_Newton!","page":"Primal-dual Riemannian semismooth Newton","title":"Manopt.primal_dual_semismooth_Newton!","text":"primal_dual_semismooth_Newton(M, N, cost, p, X, m, n, prox_F, diff_prox_F, prox_G_dual, diff_prox_dual_G, linearized_operator, adjoint_linearized_operator)\n\nPerform the Primal-Dual Riemannian semismooth Newton algorithm.\n\nGiven a cost function mathcal E mathcal M overlineℝ of the form\n\nmathcal E(p) = F(p) + G( Λ(p) )\n\nwhere F mathcal M overlineℝ, G mathcal N overlineℝ, and Λ mathcal M mathcal N. The remaining input parameters are\n\np, X: primal and dual start points pmathcal M and X T_nmathcal N\nm, n: base points on mathcal M and mathcal N, respectively.\nlinearized_forward_operator: the linearization DΛ() of the operator Λ().\nadjoint_linearized_operator: the adjoint DΛ^* of the linearized operator DΛ(m) T_mmathcal M T_Λ(m)mathcal N\nprox_F, prox_G_dual: the proximal maps of F and G^ast_n\ndiff_prox_F, diff_prox_dual_G: the (Clarke generalized) differentials of the proximal maps of F and G^ast_n\n\nFor more details on the algorithm, see [DL21].\n\nKeyword arguments\n\ndual_stepsize=1/sqrt(8): proximal parameter of the dual prox\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). 
Since usually the first argument is the manifold, the modified argument is the second.\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nΛ=missing: the exact operator, which is required if Λ(m)=n does not hold; missing indicates that the forward operator is exact.\nprimal_stepsize=1/sqrt(8): proximal parameter of the primal prox\nreg_param=1e-5: regularisation parameter for the Newton matrix. Note that this changes the arguments with which the forward_operator is called.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstopping_criterion=StopAfterIteration(50): a functor indicating that the stopping criterion is fulfilled\nupdate_primal_base=missing: function to update m (identity by default/missing)\nupdate_dual_base=missing: function to update n (identity by default/missing)\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. 
To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/primal_dual_semismooth_Newton/#State","page":"Primal-dual Riemannian semismooth Newton","title":"State","text":"","category":"section"},{"location":"solvers/primal_dual_semismooth_Newton/","page":"Primal-dual Riemannian semismooth Newton","title":"Primal-dual Riemannian semismooth Newton","text":"PrimalDualSemismoothNewtonState","category":"page"},{"location":"solvers/primal_dual_semismooth_Newton/#Manopt.PrimalDualSemismoothNewtonState","page":"Primal-dual Riemannian semismooth Newton","title":"Manopt.PrimalDualSemismoothNewtonState","text":"PrimalDualSemismoothNewtonState <: AbstractPrimalDualSolverState\n\nFields\n\nm::P: a point on the manifold mathcal M\nn::Q: a point on the manifold mathcal N\np::P: a point on the manifold mathcal M storing the current iterate\nX::T: a tangent vector at the point p on the manifold mathcal M\nprimal_stepsize::Float64: proximal parameter of the primal prox\ndual_stepsize::Float64: proximal parameter of the dual prox\nreg_param::Float64: regularisation parameter for the Newton matrix\nstop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled\nupdate_primal_base: function to update the primal base\nupdate_dual_base: function to update the dual base\ninverse_retraction_method::AbstractInverseRetractionMethod: an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nretraction_method::AbstractRetractionMethod: a retraction operatornameretr to use, see the section on retractions\nvector_transport_method::AbstractVectorTransportMethodP: a vector transport mathcal T_ to use, see the section on vector transports\n\nwhere the update functions take an AbstractManoptProblem amp, an AbstractManoptSolverState ams, and the current iterate k as arguments. 
If you activate these to be different from the default identity, you have to provide p.Λ for the algorithm to work (which might be missing).\n\nConstructor\n\nPrimalDualSemismoothNewtonState(M::AbstractManifold; kwargs...)\n\nGenerate a state for the primal_dual_semismooth_Newton.\n\nKeyword arguments\n\nm=rand(M)\nn=rand(N)\np=rand(M)\nX=zero_vector(M, p)\nprimal_stepsize=1/sqrt(8)\ndual_stepsize=1/sqrt(8)\nreg_param=1e-5\nupdate_primal_base=(amp, ams, k) -> ams.m\nupdate_dual_base=(amp, ams, k) -> ams.n\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nstopping_criterion=StopAfterIteration(50): a functor indicating that the stopping criterion is fulfilled\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\n\n\n\n\n","category":"type"},{"location":"solvers/primal_dual_semismooth_Newton/#sec-ssn-technical-details","page":"Primal-dual Riemannian semismooth Newton","title":"Technical details","text":"","category":"section"},{"location":"solvers/primal_dual_semismooth_Newton/","page":"Primal-dual Riemannian semismooth Newton","title":"Primal-dual Riemannian semismooth Newton","text":"The primal_dual_semismooth_Newton solver requires the following functions of a manifold to be available for both the manifold mathcal M and mathcal N","category":"page"},{"location":"solvers/primal_dual_semismooth_Newton/","page":"Primal-dual Riemannian semismooth Newton","title":"Primal-dual Riemannian semismooth Newton","text":"A retract!(M, q, p, X); it is recommended to set the default_retraction_method to a favourite retraction. 
If this default is set, a retraction_method= does not have to be specified.\nAn inverse_retract!(M, X, p, q); it is recommended to set the default_inverse_retraction_method to a favourite inverse retraction. If this default is set, an inverse_retraction_method= does not have to be specified.\nA vector_transport_to!(M, Y, p, X, q); it is recommended to set the default_vector_transport_method to a favourite vector transport. If this default is set, a vector_transport_method= does not have to be specified.\nA copyto!(M, q, p) and copy(M, p) for points.\nA get_basis for the DefaultOrthonormalBasis on mathcal M\nexp and log (on mathcal M)\nA DiagonalizingOrthonormalBasis to compute the differentials of the exponential and logarithmic map\nTangent vectors are initialized calling zero_vector(M, p).","category":"page"},{"location":"solvers/primal_dual_semismooth_Newton/#Literature","page":"Primal-dual Riemannian semismooth Newton","title":"Literature","text":"","category":"section"},{"location":"solvers/primal_dual_semismooth_Newton/","page":"Primal-dual Riemannian semismooth Newton","title":"Primal-dual Riemannian semismooth Newton","text":"W. Diepeveen and J. Lellmann. An Inexact Semismooth Newton Method on Riemannian Manifolds with Application to Duality-Based Total Variation Denoising. 
SIAM Journal on Imaging Sciences 14, 1565–1600 (2021), arXiv:2102.10309.\n\n\n\n","category":"page"},{"location":"solvers/DouglasRachford/#Douglas—Rachford-algorithm","page":"Douglas—Rachford","title":"Douglas—Rachford algorithm","text":"","category":"section"},{"location":"solvers/DouglasRachford/","page":"Douglas—Rachford","title":"Douglas—Rachford","text":"The (Parallel) Douglas—Rachford ((P)DR) algorithm was generalized to Hadamard manifolds in [BPS16].","category":"page"},{"location":"solvers/DouglasRachford/","page":"Douglas—Rachford","title":"Douglas—Rachford","text":"The aim is to minimize the sum","category":"page"},{"location":"solvers/DouglasRachford/","page":"Douglas—Rachford","title":"Douglas—Rachford","text":"f(p) = g(p) + h(p)","category":"page"},{"location":"solvers/DouglasRachford/","page":"Douglas—Rachford","title":"Douglas—Rachford","text":"on a manifold, where the two summands have proximal maps operatornameprox_λ g operatornameprox_λ h that are easy to evaluate (maybe in closed form, or not too costly to approximate). 
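As a rough Euclidean sketch of this splitting (taking mathcal M = ℝⁿ, so retraction and inverse retraction reduce to addition and subtraction, and the reflection becomes 2 prox(p) − p), the summands g(p) = ‖p‖₁ and h(p) = ½‖p − b‖² with their closed-form proximal maps are assumed purely for illustration:

```julia
# Assumed example summands: g(p) = ‖p‖₁ (prox = soft thresholding) and
# h(p) = ½‖p - b‖² (prox = weighted average with the data b).
prox_g(λ, p) = sign.(p) .* max.(abs.(p) .- λ, 0.0)
prox_h(λ, p, b) = (p .+ λ .* b) ./ (1 + λ)

# Relaxed Douglas–Rachford: reflect at both proxes, then relax towards the result.
function douglas_rachford(b; λ=1.0, α=0.9, iterations=200)
    q = zero(b)
    for _ in 1:iterations
        p = prox_h(λ, q, b)            # the prox the inner reflection reflects at
        rh = 2 .* p .- q               # refl_{λh}(q) in the Euclidean case
        r = 2 .* prox_g(λ, rh) .- rh   # refl_{λg}(refl_{λh}(q))
        q = (1 - α) .* q .+ α .* r     # relaxation step along the "geodesic"
    end
    return prox_h(λ, q, b)             # the result is the inner prox point
end
```

For b = [3.0, -0.5] this converges to [2.0, 0.0], the minimizer of g + h (soft thresholding of b at level 1).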
Further, define the reflection operator at the proximal map as","category":"page"},{"location":"solvers/DouglasRachford/","page":"Douglas—Rachford","title":"Douglas—Rachford","text":"operatornamerefl_λ g(p) = operatornameretr_operatornameprox_λ g(p) bigl( -operatornameretr^-1_operatornameprox_λ g(p) p bigr)","category":"page"},{"location":"solvers/DouglasRachford/","page":"Douglas—Rachford","title":"Douglas—Rachford","text":"Let alpha_k 01 with sum_k ℕ alpha_k(1-alpha_k) = infty and λ 0 (which might depend on iteration k as well) be given.","category":"page"},{"location":"solvers/DouglasRachford/","page":"Douglas—Rachford","title":"Douglas—Rachford","text":"Then the (P)DR algorithm for initial data p^(0) mathcal M reads as","category":"page"},{"location":"solvers/DouglasRachford/#Initialization","page":"Douglas—Rachford","title":"Initialization","text":"","category":"section"},{"location":"solvers/DouglasRachford/","page":"Douglas—Rachford","title":"Douglas—Rachford","text":"Initialize q^(0) = p^(0) and k=0","category":"page"},{"location":"solvers/DouglasRachford/#Iteration","page":"Douglas—Rachford","title":"Iteration","text":"","category":"section"},{"location":"solvers/DouglasRachford/","page":"Douglas—Rachford","title":"Douglas—Rachford","text":"Repeat until a convergence criterion is reached","category":"page"},{"location":"solvers/DouglasRachford/","page":"Douglas—Rachford","title":"Douglas—Rachford","text":"Compute r^(k) = operatornamerefl_λ goperatornamerefl_λ h(q^(k))\nWithin that operation, store p^(k+1) = operatornameprox_λ h(q^(k)), which is the prox the inner reflection reflects at.\nCompute q^(k+1) = g(alpha_k q^(k) r^(k)), where g is a curve approximating the shortest geodesic, provided by a retraction and its inverse\nSet k = 
k+1","category":"page"},{"location":"solvers/DouglasRachford/#Result","page":"Douglas—Rachford","title":"Result","text":"","category":"section"},{"location":"solvers/DouglasRachford/","page":"Douglas—Rachford","title":"Douglas—Rachford","text":"The result is given by the last computed p^(K) at the last iterate K.","category":"page"},{"location":"solvers/DouglasRachford/","page":"Douglas—Rachford","title":"Douglas—Rachford","text":"For the parallel version, the first proximal map is a vectorial version where in each component one prox is applied to the corresponding copy of t_k and the second proximal map corresponds to the indicator function of the set, where all copies are equal (in mathcal M^n, where n is the number of copies), leading to the second prox being the Riemannian mean.","category":"page"},{"location":"solvers/DouglasRachford/#Interface","page":"Douglas—Rachford","title":"Interface","text":"","category":"section"},{"location":"solvers/DouglasRachford/","page":"Douglas—Rachford","title":"Douglas—Rachford","text":" DouglasRachford\n DouglasRachford!","category":"page"},{"location":"solvers/DouglasRachford/#Manopt.DouglasRachford","page":"Douglas—Rachford","title":"Manopt.DouglasRachford","text":"DouglasRachford(M, f, proxes_f, p)\nDouglasRachford(M, mpo, p)\nDouglasRachford!(M, f, proxes_f, p)\nDouglasRachford!(M, mpo, p)\n\nCompute the Douglas-Rachford algorithm on the manifold mathcal M, starting from p, given the (two) proximal maps proxes_f, see [BPS16].\n\nFor k2 proximal maps, the problem is reformulated using the parallel Douglas Rachford: a vectorial proximal map on the power manifold mathcal M^k is introduced as the first proximal map and the second proximal map is set to the mean (Riemannian center of mass). 
This hence also boils down to two proximal maps, though each evaluates proximal maps in parallel, that is, component-wise in a vector.\n\nnote: Note\n\n\nThe parallel Douglas Rachford does not work in-place for now, since while creating the new starting point p' on the power manifold, a copy of p is created.\n\nIf you provide a ManifoldProximalMapObjective mpo instead, the proximal maps are kept unchanged.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\nproxes_f: functions of the form (M, λ, p)-> q performing a proximal map, where λ denotes the proximal parameter, for each of the summands of f. These can also be given in the InplaceEvaluation variants (M, q, λ, p) -> q computing in place of q.\np: a point on the manifold mathcal M\n\nKeyword arguments\n\nα= k -> 0.9: relaxation of the step from old to new iterate, to be precise p^(k+1) = g(α_k p^(k) q^(k)), where q^(k) is the result of the double reflection involved in the DR algorithm and g is a curve induced by the retraction and its inverse.\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses. This is used both in the relaxation step as well as in the reflection, unless you set R yourself.\nλ= k -> 1.0: function to provide the value for the proximal parameter λ_k\nR=reflect(!): method employed in the iteration to perform the reflection of p at the prox of p. This uses by default reflect or reflect! 
depending on reflection_evaluation and the retraction and inverse retraction specified by retraction_method and inverse_retraction_method, respectively.\nreflection_evaluation=AllocatingEvaluation(): whether R works in-place or allocating\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions. This is used both in the relaxation step as well as in the reflection, unless you set R yourself.\nstopping_criterion=StopAfterIteration(200)|StopWhenChangeLess(1e-5): a functor indicating that the stopping criterion is fulfilled\nparallel=false: indicate whether to use a parallel Douglas-Rachford or not.\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\nDouglasRachford(M, f, proxes_f, p; kwargs...)\n\na doc string with some math t_k+1 = g(α_k t_k s_k)\n\n\n\n\n\n","category":"function"},{"location":"solvers/DouglasRachford/#Manopt.DouglasRachford!","page":"Douglas—Rachford","title":"Manopt.DouglasRachford!","text":"DouglasRachford(M, f, proxes_f, p)\nDouglasRachford(M, mpo, p)\nDouglasRachford!(M, f, proxes_f, p)\nDouglasRachford!(M, mpo, p)\n\nCompute the Douglas-Rachford algorithm on the manifold mathcal M, starting from p, given the (two) proximal maps proxes_f, see [BPS16].\n\nFor k2 proximal maps, the problem is reformulated using the parallel Douglas Rachford: a vectorial proximal map on the power manifold mathcal M^k is introduced as the first proximal map and the second proximal map is set to the mean (Riemannian center of mass). 
This hence also boils down to two proximal maps, though each evaluates proximal maps in parallel, that is, component-wise in a vector.\n\nnote: Note\n\n\nThe parallel Douglas Rachford does not work in-place for now, since while creating the new starting point p' on the power manifold, a copy of p is created.\n\nIf you provide a ManifoldProximalMapObjective mpo instead, the proximal maps are kept unchanged.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\nproxes_f: functions of the form (M, λ, p)-> q performing a proximal map, where λ denotes the proximal parameter, for each of the summands of f. These can also be given in the InplaceEvaluation variants (M, q, λ, p) -> q computing in place of q.\np: a point on the manifold mathcal M\n\nKeyword arguments\n\nα= k -> 0.9: relaxation of the step from old to new iterate, to be precise p^(k+1) = g(α_k p^(k) q^(k)), where q^(k) is the result of the double reflection involved in the DR algorithm and g is a curve induced by the retraction and its inverse.\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses. This is used both in the relaxation step as well as in the reflection, unless you set R yourself.\nλ= k -> 1.0: function to provide the value for the proximal parameter λ_k\nR=reflect(!): method employed in the iteration to perform the reflection of p at the prox of p. This uses by default reflect or reflect! 
depending on reflection_evaluation and the retraction and inverse retraction specified by retraction_method and inverse_retraction_method, respectively.\nreflection_evaluation=AllocatingEvaluation(): whether R works in-place or allocating\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions. This is used both in the relaxation step as well as in the reflection, unless you set R yourself.\nstopping_criterion=StopAfterIteration(200)|StopWhenChangeLess(1e-5): a functor indicating that the stopping criterion is fulfilled\nparallel=false: indicate whether to use a parallel Douglas-Rachford or not.\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/DouglasRachford/#State","page":"Douglas—Rachford","title":"State","text":"","category":"section"},{"location":"solvers/DouglasRachford/","page":"Douglas—Rachford","title":"Douglas—Rachford","text":" DouglasRachfordState","category":"page"},{"location":"solvers/DouglasRachford/#Manopt.DouglasRachfordState","page":"Douglas—Rachford","title":"Manopt.DouglasRachfordState","text":"DouglasRachfordState <: AbstractManoptSolverState\n\nStore all options required for the DouglasRachford algorithm.\n\nFields\n\nα: relaxation of the step from old to new iterate, to be precise x^(k+1) = g(α(k) x^(k) t^(k)), where t^(k) is the result of the double reflection involved in the DR algorithm\ninverse_retraction_method::AbstractInverseRetractionMethod: an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nλ: function to provide the value for the proximal parameter during the calls\nparallel: indicate whether 
to use a parallel Douglas-Rachford or not.\nR: method employed in the iteration to perform the reflection of x at the prox of p.\np::P: a point on the manifold mathcal M storing the current iterate. For the parallel Douglas-Rachford, this is not a value from the PowerManifold manifold but the mean.\nreflection_evaluation: whether R works in-place or allocating\nretraction_method::AbstractRetractionMethod: a retraction operatornameretr to use, see the section on retractions\ns: the last result of the double reflection at the proximal maps relaxed by α.\nstop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled\n\nConstructor\n\nDouglasRachfordState(M::AbstractManifold; kwargs...)\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\n\nKeyword arguments\n\nα= k -> 0.9: relaxation of the step from old to new iterate, to be precise x^(k+1) = g(α(k) x^(k) t^(k)), where t^(k) is the result of the double reflection involved in the DR algorithm\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nλ= k -> 1.0: function to provide the value for the proximal parameter during the calls\np=rand(M): a point on the manifold mathcal M to specify the initial value\nR=reflect(!): method employed in the iteration to perform the reflection of p at the prox of p; which function is used depends on reflection_evaluation.\nreflection_evaluation=AllocatingEvaluation(): specify whether the reflection works in-place or allocating (default)\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstopping_criterion=StopAfterIteration(300): a functor indicating that the stopping criterion is fulfilled\nparallel=false: indicate whether to use a parallel Douglas-Rachford or 
not.\n\n\n\n\n\n","category":"type"},{"location":"solvers/DouglasRachford/","page":"Douglas—Rachford","title":"Douglas—Rachford","text":"For specific DebugActions and RecordActions see also Cyclic Proximal Point.","category":"page"},{"location":"solvers/DouglasRachford/","page":"Douglas—Rachford","title":"Douglas—Rachford","text":"Furthermore, this solver has a shorthand notation for the involved reflection.","category":"page"},{"location":"solvers/DouglasRachford/","page":"Douglas—Rachford","title":"Douglas—Rachford","text":"reflect","category":"page"},{"location":"solvers/DouglasRachford/#Manopt.reflect","page":"Douglas—Rachford","title":"Manopt.reflect","text":"reflect(M, f, x; kwargs...)\nreflect!(M, q, f, x; kwargs...)\n\nReflect the point x from the manifold M at the point f(x) of the function f mathcal M mathcal M, given by\n\n operatornamerefl_f(x) = operatornamerefl_f(x)(x)\n\nCompute the result in q.\n\nSee also reflect(M, p, x), to which the keywords are also passed.\n\n\n\n\n\nreflect(M, p, x, kwargs...)\nreflect!(M, q, p, x, kwargs...)\n\nReflect the point x from the manifold M at point p, given by\n\noperatornamerefl_p(q) = operatornameretr_p(-operatornameretr^-1_p q)\n\nwhere operatornameretr and operatornameretr^-1 denote a retraction and an inverse retraction, respectively. This can also be done in place of q.\n\nKeyword arguments\n\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\n\nand for the reflect! additionally\n\nX=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal M as temporary memory to compute the inverse retraction in place. 
Otherwise this is the memory that would be allocated anyway.\n\n\n\n\n\nreflect(M, f, x; kwargs...)\nreflect!(M, q, f, x; kwargs...)\n\nReflect the point x from the manifold M at the point f(x) of the function f mathcal M mathcal M, given by\n\n operatornamerefl_f(x) = operatornamerefl_f(x)(x)\n\nCompute the result in q.\n\nSee also reflect(M, p, x), to which the keywords are also passed.\n\n\n\n\n\nreflect(M, p, x, kwargs...)\nreflect!(M, q, p, x, kwargs...)\n\nReflect the point x from the manifold M at point p, given by\n\noperatornamerefl_p(q) = operatornameretr_p(-operatornameretr^-1_p q)\n\nwhere operatornameretr and operatornameretr^-1 denote a retraction and an inverse retraction, respectively.\n\nThis can also be done in place of q.\n\nKeyword arguments\n\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\n\nand for the reflect! additionally\n\nX=zero_vector(M,p): a temporary memory to compute the inverse retraction in place. Otherwise this is the memory that would be allocated anyway.\n\n\n\n\n\n","category":"function"},{"location":"solvers/DouglasRachford/#sec-dr-technical-details","page":"Douglas—Rachford","title":"Technical details","text":"","category":"section"},{"location":"solvers/DouglasRachford/","page":"Douglas—Rachford","title":"Douglas—Rachford","text":"The DouglasRachford solver requires the following functions of a manifold to be available","category":"page"},{"location":"solvers/DouglasRachford/","page":"Douglas—Rachford","title":"Douglas—Rachford","text":"A retract!(M, q, p, X); it is recommended to set the default_retraction_method to a favourite retraction. 
If this default is set, a retraction_method= does not have to be specified.\nAn inverse_retract!(M, X, p, q); it is recommended to set the default_inverse_retraction_method to a favourite inverse retraction. If this default is set, an inverse_retraction_method= does not have to be specified.\nA copyto!(M, q, p) and copy(M, p) for points.","category":"page"},{"location":"solvers/DouglasRachford/","page":"Douglas—Rachford","title":"Douglas—Rachford","text":"By default, one of the stopping criteria is StopWhenChangeLess, which requires","category":"page"},{"location":"solvers/DouglasRachford/","page":"Douglas—Rachford","title":"Douglas—Rachford","text":"An inverse_retract!(M, X, p, q); it is recommended to set the default_inverse_retraction_method to a favourite inverse retraction. If this default is set, an inverse_retraction_method= or inverse_retraction_method_dual= (for mathcal N) does not have to be specified; alternatively, the distance(M, p, q) for said default inverse retraction can be used.","category":"page"},{"location":"solvers/DouglasRachford/#Literature","page":"Douglas—Rachford","title":"Literature","text":"","category":"section"},{"location":"solvers/DouglasRachford/","page":"Douglas—Rachford","title":"Douglas—Rachford","text":"","category":"page"},{"location":"tutorials/CountAndCache/#How-to-count-and-cache-function-calls","page":"Count and use a cache","title":"How to count and cache function calls","text":"","category":"section"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"Ronny Bergmann","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"In this tutorial, we want to investigate the caching and counting (statistics) features of Manopt.jl. 
We reuse the optimization tasks from the introductory tutorial Get started: optimize!.","category":"page"},{"location":"tutorials/CountAndCache/#Introduction","page":"Count and use a cache","title":"Introduction","text":"","category":"section"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"There are surely many ways to keep track of how often the cost function is called, for example with a functor, as we used in an example in How to Record Data.","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"mutable struct MyCost{I<:Integer}\n count::I\nend\nMyCost() = MyCost{Int64}(0)\nfunction (c::MyCost)(M, x)\n c.count += 1\n # [ .. Actual implementation of the cost here ]\nend","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"This still leaves a bit of work to the user, especially for tracking more than just the number of cost function evaluations.","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"When a function like the objective or gradient is expensive to compute, it may make sense to cache its results. Manopt.jl tries to minimize the number of repeated calls, but sometimes they are necessary and harmless when the function is cheap to compute. Caching of expensive function calls can for example be added using Memoize.jl by the user. 
The approach in the solvers of Manopt.jl aims to simplify adding both these capabilities on the level of calling a solver.","category":"page"},{"location":"tutorials/CountAndCache/#Technical-background","page":"Count and use a cache","title":"Technical background","text":"","category":"section"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"The two ingredients for a solver in Manopt.jl are the AbstractManoptProblem and the AbstractManoptSolverState, where the former consists of the domain, that is the AbstractManifold, and the AbstractManifoldObjective.","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"Both recording and debug capabilities are implemented in a decorator pattern to the solver state. They can be easily added using the record= and debug= keyword arguments in any solver call. This pattern was recently extended, such that the objective can be decorated as well. This is how both caching and counting are implemented, as decorators of the AbstractManifoldObjective, hence for example changing or extending the behaviour of a call to get_cost.","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"Let’s finish off the technical background by loading the necessary packages. 
Besides Manopt.jl and Manifolds.jl we also need LRUCache.jl, which is (since Julia 1.9) a weak dependency and provides the least recently used strategy for our caches.","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"using Manopt, Manifolds, Random, LRUCache, LinearAlgebra, ManifoldDiff\nusing ManifoldDiff: grad_distance","category":"page"},{"location":"tutorials/CountAndCache/#Counting","page":"Count and use a cache","title":"Counting","text":"","category":"section"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"We first define our task, the Riemannian Center of Mass from the Get started: optimize! tutorial.","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"n = 100\nσ = π / 8\nM = Sphere(2)\np = 1 / sqrt(2) * [1.0, 0.0, 1.0]\nRandom.seed!(42)\ndata = [exp(M, p, σ * rand(M; vector_at=p)) for i in 1:n];\nf(M, p) = sum(1 / (2 * n) * distance.(Ref(M), Ref(p), data) .^ 2)\ngrad_f(M, p) = sum(1 / n * grad_distance.(Ref(M), data, Ref(p)));","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"To now count how often the cost and the gradient are called, we use the count= keyword argument that works in any solver to specify the elements of the objective whose calls we want to count. A full list is available in the documentation of the AbstractManifoldObjective. To also see the result, we have to set return_objective=true. This returns (objective, p) instead of just the solver result p. 
We can further also set return_state=true to get even more information about the solver run.","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"gradient_descent(M, f, grad_f, data[1]; count=[:Cost, :Gradient], return_objective=true, return_state=true)","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"# Solver state for `Manopt.jl`s Gradient Descent\nAfter 66 iterations\n\n## Parameters\n* retraction method: ExponentialRetraction()\n\n## Stepsize\nArmijoLinesearch(;\n initial_stepsize=1.0\n retraction_method=ExponentialRetraction()\n contraction_factor=0.95\n sufficient_decrease=0.1\n)\n\n## Stopping criterion\n\nStop When _one_ of the following are fulfilled:\n Max Iteration 200: not reached\n |grad f| < 1.0e-8: reached\nOverall: reached\nThis indicates convergence: Yes\n\n## Statistics on function calls\n * :Gradient : 199\n * :Cost : 275","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"And we see that statistics are shown at the end.","category":"page"},{"location":"tutorials/CountAndCache/#Caching","page":"Count and use a cache","title":"Caching","text":"","category":"section"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"To now also cache these calls, we can use the cache= keyword argument. Since now both the cache and the count “extend” the capability of the objective, the order is important: on the high-level interface, the count is treated first, which means that only actual function calls and not cache look-ups are counted. With the proper initialisation, you can use any caches here that support the get!(function, cache, key) update. All parts of the objective that can currently be cached are listed at ManifoldCachedObjective. 
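To illustrate the get!(function, cache, key) update such caches need to support, here is a tiny sketch using LRUCache.jl directly (assuming it is installed), including the eviction of the least recently used entry once maxsize is exceeded:

```julia
using LRUCache

lru = LRU{Int,Float64}(; maxsize=2)
a = get!(() -> 1.0, lru, 1)   # miss: the closure is evaluated and the result stored
b = get!(() -> 2.0, lru, 2)   # miss
c = get!(() -> 99.0, lru, 1)  # hit: returns the stored 1.0, the closure is not called
d = get!(() -> 3.0, lru, 3)   # miss: key 2 is evicted as least recently used
# c == 1.0 and haskey(lru, 2) == false
```

This eviction behaviour is why the cache size below is a tunable parameter: a cache that is too small evicts entries that would still have been reused.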
The solver call has a keyword cache that takes a tuple (c, vs, n) of three arguments, where c is a symbol for the type of cache, vs is a vector of symbols specifying which calls to cache, and n is the size of the cache. If the last element is not provided, a suitable default (currently n=10) is used.","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"Here we want to use c=:LRU caches for vs=[:Cost, :Gradient] with a size of n=25.","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"r = gradient_descent(M, f, grad_f, data[1];\n count=[:Cost, :Gradient],\n cache=(:LRU, [:Cost, :Gradient], 25),\n return_objective=true, return_state=true)","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"# Solver state for `Manopt.jl`s Gradient Descent\nAfter 66 iterations\n\n## Parameters\n* retraction method: ExponentialRetraction()\n\n## Stepsize\nArmijoLinesearch(;\n initial_stepsize=1.0\n retraction_method=ExponentialRetraction()\n contraction_factor=0.95\n sufficient_decrease=0.1\n)\n\n## Stopping criterion\n\nStop When _one_ of the following are fulfilled:\n Max Iteration 200: not reached\n |grad f| < 1.0e-8: reached\nOverall: reached\nThis indicates convergence: Yes\n\n## Cache\n * :Cost : 25/25 entries of type Float64 used\n * :Gradient : 25/25 entries of type Vector{Float64} used\n\n## Statistics on function calls\n * :Gradient : 66\n * :Cost : 149","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"Since the default setup with ArmijoLinesearch needs the gradient and the cost, and similarly the stopping criterion might (independently) evaluate the gradient, the caching is quite helpful here.","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a 
cache","title":"Count and use a cache","text":"And of course, also with this advanced return value of the solver, we can still access the result as usual:","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"get_solver_result(r)","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"3-element Vector{Float64}:\n 0.6868392807355564\n 0.006531599748261925\n 0.7267799809043942","category":"page"},{"location":"tutorials/CountAndCache/#Advanced-caching-examples","page":"Count and use a cache","title":"Advanced caching examples","text":"","category":"section"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"There are more options besides caching single calls to specific parts of the objective. For example, you may want to cache intermediate results of computing the cost and share that with the gradient computation. We present three solutions to this:","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"An easy approach from within Manopt.jl: the ManifoldCostGradientObjective\nA shared storage approach using a functor\nA shared (internal) cache approach also using a functor","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"For that we switch to another example: the Rayleigh quotient. 
We aim to maximize the Rayleigh quotient (x^T Ax)/(x^T x) for some A ∈ ℝ^{(m+1)×(m+1)} and x ∈ ℝ^{m+1}, but since we consider this on the sphere, and Manopt.jl (like many other optimization toolboxes) minimizes, we consider","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"g(p) = -p^T Ap, p ∈ 𝕊^m","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"The Euclidean gradient (that is, in ℝ^{m+1}) is just ∇g(p) = -2Ap; the Riemannian gradient is the projection of ∇g(p) onto the tangent space T_p𝕊^m.","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"m = 25\nRandom.seed!(42)\nA = randn(m + 1, m + 1)\nA = Symmetric(A)\np_star = eigvecs(A)[:, end] # minimizer (or similarly -p)\nf_star = -eigvals(A)[end] # cost (note that we get - the largest Eigenvalue)\n\nN = Sphere(m);\n\ng(M, p) = -p' * A*p\n∇g(p) = -2 * A * p\ngrad_g(M,p) = project(M, p, ∇g(p))\ngrad_g!(M,X, p) = project!(M, X, p, ∇g(p))","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"grad_g! 
(generic function with 1 method)","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"But since both the cost and the gradient require the computation of the matrix-vector product Ap, it might be beneficial to only compute this once.","category":"page"},{"location":"tutorials/CountAndCache/#The-[ManifoldCostGradientObjective](@ref)-approach","page":"Count and use a cache","title":"The ManifoldCostGradientObjective approach","text":"","category":"section"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"The ManifoldCostGradientObjective uses a combined function to compute both the gradient and the cost at the same time. We define the in-place variant as","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"function g_grad_g!(M::AbstractManifold, X, p)\n X .= -A*p\n c = p'*X\n X .*= 2\n project!(M, X, p, X)\n return (c, X)\nend","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"g_grad_g! (generic function with 1 method)","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"where we only compute the matrix-vector product once. A small disadvantage might be that we always compute both the gradient and the cost. 
Luckily, the cache we used before takes this into account and caches both results, such that we indeed end up computing A*p only once when asking for the cost and the gradient.","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"Let’s compare both methods","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"p0 = [(1/5 .* ones(5))..., zeros(m-4)...];\n@time s1 = gradient_descent(N, g, grad_g!, p0;\n stopping_criterion = StopWhenGradientNormLess(1e-5),\n evaluation=InplaceEvaluation(),\n count=[:Cost, :Gradient],\n cache=(:LRU, [:Cost, :Gradient], 25),\n return_objective=true,\n)","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":" 1.364739 seconds (2.40 M allocations: 121.896 MiB, 1.43% gc time, 99.66% compilation time)\n\n## Cache\n * :Cost : 25/25 entries of type Float64 used\n * :Gradient : 25/25 entries of type Vector{Float64} used\n\n## Statistics on function calls\n * :Gradient : 602\n * :Cost : 1449\n\nTo access the solver result, call `get_solver_result` on this variable.","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"versus","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"obj = ManifoldCostGradientObjective(g_grad_g!; evaluation=InplaceEvaluation())\n@time s2 = gradient_descent(N, obj, p0;\n stopping_criterion=StopWhenGradientNormLess(1e-5),\n count=[:Cost, :Gradient],\n cache=(:LRU, [:Cost, :Gradient], 25),\n return_objective=true,\n)","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":" 0.789826 seconds (1.22 M allocations: 70.083 MiB, 99.07% compilation time)\n\n## Cache\n * :Cost : 25/25 
entries of type Float64 used\n * :Gradient : 25/25 entries of type Vector{Float64} used\n\n## Statistics on function calls\n * :Gradient : 1448\n * :Cost : 1448\n\nTo access the solver result, call `get_solver_result` on this variable.","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"First of all, both yield the same result:","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"p1 = get_solver_result(s1)\np2 = get_solver_result(s2)\n[distance(N, p1, p2), g(N, p1), g(N, p2), f_star]","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"4-element Vector{Float64}:\n 0.0\n -7.8032957637779\n -7.8032957637779\n -7.803295763793949","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"We can see that the combined number of evaluations is 2051 in the first case, while in the second it is just the number of cost evaluations, 1448. Note that the additional 847 gradient evaluations involved are merely a multiplication by 2. On the other hand, the additional caching of the gradient might be less beneficial in these cases; it is beneficial when the gradient and the cost are very often required together.","category":"page"},{"location":"tutorials/CountAndCache/#A-shared-storage-approach-using-a-functor","page":"Count and use a cache","title":"A shared storage approach using a functor","text":"","category":"section"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"An alternative to the previous approach is the usage of a functor that introduces a “shared storage” of the result of computing A*p. 
We additionally have to store p though, since we have to make sure that we are still evaluating the cost and/or gradient at the same point at which the cached A*p was computed. We again consider the (more efficient) in-place variant. This can be done as follows","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"struct StorageG{T,M}\n A::M\n Ap::T\n p::T\nend\nfunction (g::StorageG)(::Val{:Cost}, M::AbstractManifold, p)\n if !(p==g.p) #We are at a new point -> Update\n g.Ap .= g.A*p\n g.p .= p\n end\n return -g.p'*g.Ap\nend\nfunction (g::StorageG)(::Val{:Gradient}, M::AbstractManifold, X, p)\n if !(p==g.p) #We are at a new point -> Update\n g.Ap .= g.A*p\n g.p .= p\n end\n X .= -2 .* g.Ap\n project!(M, X, p, X)\n return X\nend","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"Here we use the first parameter to distinguish both functions. 
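The Val trick can be seen in isolation in a tiny sketch (toy functions, independent of the tutorial's objective): Val(:Cost) wraps the symbol into a value of the singleton type Val{:Cost}, so multiple dispatch can select the method.

```julia
# Val(:Cost) creates a value of type Val{:Cost}, Val(:Gradient) one of type Val{:Gradient};
# dispatch on these types distinguishes methods that would otherwise share a signature.
h(::Val{:Cost}, x) = -x
h(::Val{:Gradient}, x) = -2x

h(Val(:Cost), 3.0)      # -3.0
h(Val(:Gradient), 3.0)  # -6.0
```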
For the mutating case, the signatures are different regardless of the additional argument, but for the allocating case the signatures of the cost and the gradient function are the same.","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"#Define the new functor\nstorage_g = StorageG(A, zero(p0), zero(p0))\n# and cost and gradient that use this functor as\ng3(M,p) = storage_g(Val(:Cost), M, p)\ngrad_g3!(M, X, p) = storage_g(Val(:Gradient), M, X, p)\n@time s3 = gradient_descent(N, g3, grad_g3!, p0;\n stopping_criterion = StopWhenGradientNormLess(1e-5),\n evaluation=InplaceEvaluation(),\n count=[:Cost, :Gradient],\n cache=(:LRU, [:Cost, :Gradient], 2),\n return_objective=true#, return_state=true\n)","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":" 0.604565 seconds (559.16 k allocations: 29.650 MiB, 2.85% gc time, 99.29% compilation time)\n\n## Cache\n * :Cost : 2/2 entries of type Float64 used\n * :Gradient : 2/2 entries of type Vector{Float64} used\n\n## Statistics on function calls\n * :Gradient : 602\n * :Cost : 1449\n\nTo access the solver result, call `get_solver_result` on this variable.","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"This of course still yields the same result","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"p3 = get_solver_result(s3)\ng(N, p3) - f_star","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"1.6049384043981263e-11","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"And while the cost and gradient evaluations are again counted separately, we can observe that the 
allocations are less than half of those of the previous approach.","category":"page"},{"location":"tutorials/CountAndCache/#A-local-cache-approach","page":"Count and use a cache","title":"A local cache approach","text":"","category":"section"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"This variant is very similar to the previous one, but uses a whole cache instead of just one place to store A*p. This makes the code a bit nicer, and it is possible to store more than just the last point p at which either the cost or the gradient was called.","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"struct CacheG{C,M}\n A::M\n cache::C\nend\nfunction (g::CacheG)(::Val{:Cost}, M, p)\n Ap = get!(g.cache, copy(M,p)) do\n g.A*p\n end\n return -p'*Ap\nend\nfunction (g::CacheG)(::Val{:Gradient}, M, X, p)\n Ap = get!(g.cache, copy(M,p)) do\n g.A*p\n end\n X .= -2 .* Ap\n project!(M, X, p, X)\n return X\nend","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"However, the resulting solver run is not always faster, since maintaining a whole cache is a bit more costly than storing just Ap and p. 
The tradeoff is whether this pays off.","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"#Define the new functor\ncache_g = CacheG(A, LRU{typeof(p0),typeof(p0)}(; maxsize=25))\n# and cost and gradient that use this functor as\ng4(M,p) = cache_g(Val(:Cost), M, p)\ngrad_g4!(M, X, p) = cache_g(Val(:Gradient), M, X, p)\n@time s4 = gradient_descent(N, g4, grad_g4!, p0;\n stopping_criterion = StopWhenGradientNormLess(1e-5),\n evaluation=InplaceEvaluation(),\n count=[:Cost, :Gradient],\n cache=(:LRU, [:Cost, :Gradient], 25),\n return_objective=true,\n)","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":" 0.504801 seconds (519.16 k allocations: 27.890 MiB, 98.94% compilation time)\n\n## Cache\n * :Cost : 25/25 entries of type Float64 used\n * :Gradient : 25/25 entries of type Vector{Float64} used\n\n## Statistics on function calls\n * :Gradient : 602\n * :Cost : 1449\n\nTo access the solver result, call `get_solver_result` on this variable.","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"And for safety, let’s verify that we are reasonably close","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"p4 = get_solver_result(s4)\ng(N, p4) - f_star","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"1.6049384043981263e-11","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"For this example, and it seems maybe even for gradient_descent in general, this additional (second, inner) cache does not improve the result further; it is about the same effort both time- and 
allocation-wise.","category":"page"},{"location":"tutorials/CountAndCache/#Summary","page":"Count and use a cache","title":"Summary","text":"","category":"section"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"While the approach using the ManifoldCostGradientObjective is very easy to implement, both the storage and the (local) cache approach are more efficient. All three are an improvement over the first implementation without sharing interim results. The results with storage or cache have the further advantage of being more flexible, since the stored information could also be reused in a third function, for example when also computing the Hessian.","category":"page"},{"location":"tutorials/CountAndCache/#Technical-details","page":"Count and use a cache","title":"Technical details","text":"","category":"section"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"This tutorial is cached. It was last run on the following package versions.","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"using Pkg\nPkg.status()","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"Status `~/work/Manopt.jl/Manopt.jl/tutorials/Project.toml`\n [6e4b80f9] BenchmarkTools v1.5.0\n⌅ [5ae59095] Colors v0.12.11\n [31c24e10] Distributions v0.25.113\n [26cc04aa] FiniteDifferences v0.12.32\n [7073ff75] IJulia v1.26.0\n [8ac3fa9e] LRUCache v1.6.1\n [af67fdf4] ManifoldDiff v0.3.13\n [1cead3c2] Manifolds v0.10.7\n [3362f125] ManifoldsBase v0.15.22\n [0fc0a36d] Manopt v0.5.3 `~/work/Manopt.jl/Manopt.jl`\n [91a5bcdd] Plots v1.40.9\n [731186ca] RecursiveArrayTools v3.27.4\nInfo Packages marked with ⌅ have new versions available but compatibility constraints restrict them from upgrading. 
To see why use `status --outdated`","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"using Dates\nnow()","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"2024-11-21T20:36:20.803","category":"page"},{"location":"tutorials/InplaceGradient/#Speedup-using-in-place-evaluation","page":"Speedup using in-place computations","title":"Speedup using in-place evaluation","text":"","category":"section"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"Ronny Bergmann","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"When it comes to time critical operations, a main ingredient in Julia is given by mutating functions, that is those that compute in place without additional memory allocations. In the following, we illustrate how to do this with Manopt.jl.","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"Let’s start with the same function as in Get started: optimize! 
and compute the mean of some points, except that here we use the sphere 𝕊^30 and n=800 points.","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"From the aforementioned example.","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"We first load all necessary packages.","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"using Manopt, Manifolds, Random, BenchmarkTools\nusing ManifoldDiff: grad_distance, grad_distance!\nRandom.seed!(42);","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"And set up our data","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"Random.seed!(42)\nm = 30\nM = Sphere(m)\nn = 800\nσ = π / 8\np = zeros(Float64, m + 1)\np[2] = 1.0\ndata = [exp(M, p, σ * rand(M; vector_at=p)) for i in 1:n];","category":"page"},{"location":"tutorials/InplaceGradient/#Classical-Definition","page":"Speedup using in-place computations","title":"Classical Definition","text":"","category":"section"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"The variant from the previous tutorial defines a cost f(p) and its gradient grad f(p):","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"f(M, p) = sum(1 / (2 * n) * distance.(Ref(M), Ref(p), data) .^ 2)\ngrad_f(M, p) = sum(1 / n * grad_distance.(Ref(M), data, 
Ref(p)))","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"grad_f (generic function with 1 method)","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"We further set the stopping criterion to be a little more strict. Then we obtain","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"sc = StopWhenGradientNormLess(3e-10)\np0 = zeros(Float64, m + 1); p0[1] = 1/sqrt(2); p0[2] = 1/sqrt(2)\nm1 = gradient_descent(M, f, grad_f, p0; stopping_criterion=sc);","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"We can also benchmark this as","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"@benchmark gradient_descent($M, $f, $grad_f, $p0; stopping_criterion=$sc)","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"BenchmarkTools.Trial: 106 samples with 1 evaluation.\n Range (min … max): 46.774 ms … 50.326 ms ┊ GC (min … max): 2.31% … 2.47%\n Time (median): 47.207 ms ┊ GC (median): 2.45%\n Time (mean ± σ): 47.364 ms ± 608.514 μs ┊ GC (mean ± σ): 2.53% ± 0.25%\n\n ▄▇▅▇█▄▇ \n ▅▇▆████████▇▇▅▅▃▁▆▁▁▁▅▁▁▅▁▃▃▁▁▁▁▁▁▁▁▁▁▁▁▃▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▅ ▃\n 46.8 ms Histogram: frequency by time 50.2 ms <\n\n Memory estimate: 182.50 MiB, allocs estimate: 615822.","category":"page"},{"location":"tutorials/InplaceGradient/#In-place-Computation-of-the-Gradient","page":"Speedup using in-place computations","title":"In-place Computation of the 
Gradient","text":"","category":"section"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"We can reduce the memory allocations by implementing the gradient to be evaluated in-place. We do this by using a functor. The motivation is twofold: on one hand, we want to avoid variables from the global scope, for example the manifold M or the data, being used within the function; on the other hand, doing the same for more complicated cost functions might also be worth pursuing.","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"Here, we store the data (as a reference) and introduce temporary memory in order to avoid reallocating memory in every grad_distance computation. We get","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"struct GradF!{TD,TTMP}\n data::TD\n tmp::TTMP\nend\nfunction (grad_f!::GradF!)(M, X, p)\n fill!(X, 0)\n for di in grad_f!.data\n grad_distance!(M, grad_f!.tmp, di, p)\n X .+= grad_f!.tmp\n end\n X ./= length(grad_f!.data)\n return X\nend","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"For the actual call to the solver, we first have to generate an instance of GradF! and tell the solver that the gradient is provided in an InplaceEvaluation. We can further also use gradient_descent! to even work in place of the initial point we pass.","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"grad_f2! 
= GradF!(data, similar(data[1]))\nm2 = deepcopy(p0)\ngradient_descent!(\n M, f, grad_f2!, m2; evaluation=InplaceEvaluation(), stopping_criterion=sc\n);","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"We can again benchmark this","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"@benchmark gradient_descent!(\n $M, $f, $grad_f2!, m2; evaluation=$(InplaceEvaluation()), stopping_criterion=$sc\n) setup = (m2 = deepcopy($p0))","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"BenchmarkTools.Trial: 176 samples with 1 evaluation.\n Range (min … max): 27.358 ms … 84.206 ms ┊ GC (min … max): 0.00% … 0.00%\n Time (median): 27.768 ms ┊ GC (median): 0.00%\n Time (mean ± σ): 28.504 ms ± 4.338 ms ┊ GC (mean ± σ): 0.60% ± 1.96%\n\n ▂█▇▂ ▂ \n ▆▇████▆█▆▆▄▄▃▄▄▃▃▃▁▃▃▃▃▃▃▃▃▃▄▃▃▃▃▃▃▁▃▁▁▃▁▁▁▁▁▁▃▃▁▁▃▃▁▁▁▁▃▃▃ ▃\n 27.4 ms Histogram: frequency by time 31.4 ms <\n\n Memory estimate: 3.83 MiB, allocs estimate: 5797.","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"which is faster by about a factor of 2 compared to the first solver-call. 
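The underlying effect can be isolated in a small sketch independent of Manopt.jl: after a warm-up call (so that compilation is not measured), an in-place broadcast into preallocated memory allocates nothing, while the allocating variant creates a fresh array on every call.

```julia
f_alloc(x) = 2 .* x                  # allocates a new array on each call
f_inplace!(y, x) = (y .= 2 .* x; y)  # writes into the preallocated y

function measure()
    x = rand(100)
    y = similar(x)
    f_alloc(x)         # warm-up: trigger compilation first
    f_inplace!(y, x)
    return (@allocated f_alloc(x)), (@allocated f_inplace!(y, x))
end

a1, a2 = measure()
# a1 > 0 (a fresh Vector{Float64} each call), a2 == 0
```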
Note that the results m1 and m2 are of course the same.","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"distance(M, m1, m2)","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"2.4669338186126805e-17","category":"page"},{"location":"tutorials/InplaceGradient/#Technical-details","page":"Speedup using in-place computations","title":"Technical details","text":"","category":"section"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"This tutorial is cached. It was last run on the following package versions.","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"using Pkg\nPkg.status()","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"Status `~/Repositories/Julia/Manopt.jl/tutorials/Project.toml`\n [6e4b80f9] BenchmarkTools v1.5.0\n [5ae59095] Colors v0.12.11\n [31c24e10] Distributions v0.25.108\n [26cc04aa] FiniteDifferences v0.12.31\n [7073ff75] IJulia v1.24.2\n [8ac3fa9e] LRUCache v1.6.1\n [af67fdf4] ManifoldDiff v0.3.10\n [1cead3c2] Manifolds v0.9.18\n [3362f125] ManifoldsBase v0.15.10\n [0fc0a36d] Manopt v0.4.63 `..`\n [91a5bcdd] Plots v1.40.4","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"using Dates\nnow()","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place 
computations","text":"2024-05-26T13:52:05.613","category":"page"},{"location":"plans/state/#sec-solver-state","page":"Solver State","title":"Solver state","text":"","category":"section"},{"location":"plans/state/","page":"Solver State","title":"Solver State","text":"CurrentModule = Manopt","category":"page"},{"location":"plans/state/","page":"Solver State","title":"Solver State","text":"Given an AbstractManoptProblem, that is, a certain optimisation task, the state specifies the solver to use. It contains the parameters of a solver and all fields necessary during the algorithm, for example the current iterate, a StoppingCriterion or a Stepsize.","category":"page"},{"location":"plans/state/","page":"Solver State","title":"Solver State","text":"AbstractManoptSolverState\nget_state\nManopt.get_count","category":"page"},{"location":"plans/state/#Manopt.AbstractManoptSolverState","page":"Solver State","title":"Manopt.AbstractManoptSolverState","text":"AbstractManoptSolverState\n\nA general super type for all solver states.\n\nFields\n\nThe following fields are assumed to be default. If you use different ones, adapt the access functions get_iterate and get_stopping_criterion accordingly\n\np::P: a point on the manifold mathcal M storing the current iterate\nstop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled\n\n\n\n\n\n","category":"type"},{"location":"plans/state/#Manopt.get_state","page":"Solver State","title":"Manopt.get_state","text":"get_state(s::AbstractManoptSolverState, recursive::Bool=true)\n\nreturn the (one step) undecorated AbstractManoptSolverState of the (possibly) decorated s. As long as your decorated state stores the state within s.state and the dispatch_state_decorator is set to Val{true}, the internal state is extracted automatically.\n\nBy default the state that is stored within a decorated state is assumed to be at s.state.
Overwrite _get_state(s, ::Val{true}, recursive) to change this behaviour for your states for both the recursive and the direct case.\n\nIf recursive is set to false, only the outermost decorator is taken away instead of all.\n\n\n\n\n\n","category":"function"},{"location":"plans/state/#Manopt.get_count","page":"Solver State","title":"Manopt.get_count","text":"get_count(ams::AbstractManoptSolverState, ::Symbol)\n\nObtain the count for a certain countable size, for example the :Iterations. This function returns 0 if there was nothing to count.\n\nAvailable symbols from within the solver state\n\n:Iterations is passed on to the stop field to obtain the iteration at which the solver stopped.\n\n\n\n\n\nget_count(co::ManifoldCountObjective, s::Symbol, mode::Symbol=:None)\n\nGet the number of counts for a certain symbol s.\n\nDepending on the mode, different results appear if the symbol does not exist in the dictionary\n\n:None: (default) silent mode, returns -1 for non-existing entries\n:warn: issues a warning if a field does not exist\n:error: issues an error if a field does not exist\n\n\n\n\n\n","category":"function"},{"location":"plans/state/","page":"Solver State","title":"Solver State","text":"Since every subtype of an AbstractManoptSolverState directly relates to a solver, the concrete states are documented together with the corresponding solvers. This page documents the general features available for every state.","category":"page"},{"location":"plans/state/","page":"Solver State","title":"Solver State","text":"A first example is to obtain or set the current iterate.
This might be useful to continue investigation at the current iterate, or to set up a solver for a next experiment, respectively.","category":"page"},{"location":"plans/state/","page":"Solver State","title":"Solver State","text":"get_iterate\nset_iterate!\nget_gradient(s::AbstractManoptSolverState)\nset_gradient!","category":"page"},{"location":"plans/state/#Manopt.get_iterate","page":"Solver State","title":"Manopt.get_iterate","text":"get_iterate(O::AbstractManoptSolverState)\n\nreturn the (last stored) iterate within an AbstractManoptSolverState. This should usually refer to a single point on the manifold the solver is working on.\n\nBy default this also removes all decorators of the state beforehand.\n\n\n\n\n\nget_iterate(agst::AbstractGradientSolverState)\n\nreturn the iterate stored within gradient options. The default returns agst.p.\n\n\n\n\n\n","category":"function"},{"location":"plans/state/#Manopt.set_iterate!","page":"Solver State","title":"Manopt.set_iterate!","text":"set_iterate!(s::AbstractManoptSolverState, M::AbstractManifold, p)\n\nset the iterate within an AbstractManoptSolverState to some (start) value p.\n\n\n\n\n\nset_iterate!(agst::AbstractGradientSolverState, M, p)\n\nset the (current) iterate stored within an AbstractGradientSolverState to p. The default function modifies s.p.\n\n\n\n\n\n","category":"function"},{"location":"plans/state/#Manopt.get_gradient-Tuple{AbstractManoptSolverState}","page":"Solver State","title":"Manopt.get_gradient","text":"get_gradient(s::AbstractManoptSolverState)\n\nreturn the (last stored) gradient within an AbstractManoptSolverState.
By default also undecorates the state beforehand\n\n\n\n\n\n","category":"method"},{"location":"plans/state/#Manopt.set_gradient!","page":"Solver State","title":"Manopt.set_gradient!","text":"set_gradient!(s::AbstractManoptSolverState, M::AbstractManifold, p, X)\n\nset the gradient within an (possibly decorated) AbstractManoptSolverState to some (start) value X in the tangent space at p.\n\n\n\n\n\nset_gradient!(agst::AbstractGradientSolverState, M, p, X)\n\nset the (current) gradient stored within an AbstractGradientSolverState to X. The default function modifies s.X.\n\n\n\n\n\n","category":"function"},{"location":"plans/state/","page":"Solver State","title":"Solver State","text":"An internal function working on the state and elements within a state is used to pass messages from (sub) activities of a state to the corresponding DebugMessages","category":"page"},{"location":"plans/state/","page":"Solver State","title":"Solver State","text":"get_message","category":"page"},{"location":"plans/state/#Manopt.get_message","page":"Solver State","title":"Manopt.get_message","text":"get_message(du::AbstractManoptSolverState)\n\nget a message (String) from internal functors, in a summary. This should return any message a sub-step might have issued as well.\n\n\n\n\n\n","category":"function"},{"location":"plans/state/","page":"Solver State","title":"Solver State","text":"Furthermore, to access the stopping criterion use","category":"page"},{"location":"plans/state/","page":"Solver State","title":"Solver State","text":"get_stopping_criterion","category":"page"},{"location":"plans/state/#Manopt.get_stopping_criterion","page":"Solver State","title":"Manopt.get_stopping_criterion","text":"get_stopping_criterion(ams::AbstractManoptSolverState)\n\nReturn the StoppingCriterion stored within the AbstractManoptSolverState ams.\n\nFor an undecorated state, this is assumed to be in ams.stop. 
Overwrite _get_stopping_criterion(yms::YMS) to change this for your manopt solver (yms) assuming it has type YMS.\n\n\n\n\n\n","category":"function"},{"location":"plans/state/#Decorators-for-AbstractManoptSolverStates","page":"Solver State","title":"Decorators for AbstractManoptSolverStates","text":"","category":"section"},{"location":"plans/state/","page":"Solver State","title":"Solver State","text":"A solver state can be decorated using the following trait and function to initialize","category":"page"},{"location":"plans/state/","page":"Solver State","title":"Solver State","text":"dispatch_state_decorator\nis_state_decorator\ndecorate_state!","category":"page"},{"location":"plans/state/#Manopt.dispatch_state_decorator","page":"Solver State","title":"Manopt.dispatch_state_decorator","text":"dispatch_state_decorator(s::AbstractManoptSolverState)\n\nIndicate internally, whether an AbstractManoptSolverState s is of decorating type, and stores (encapsulates) a state in itself, by default in the field s.state.\n\nDecorators indicate this by returning Val{true} for further dispatch.\n\nThe default is Val{false}, so by default a state is not decorated.\n\n\n\n\n\n","category":"function"},{"location":"plans/state/#Manopt.is_state_decorator","page":"Solver State","title":"Manopt.is_state_decorator","text":"is_state_decorator(s::AbstractManoptSolverState)\n\nIndicate whether the AbstractManoptSolverState s is of decorator type.\n\n\n\n\n\n","category":"function"},{"location":"plans/state/#Manopt.decorate_state!","page":"Solver State","title":"Manopt.decorate_state!","text":"decorate_state!(s::AbstractManoptSolverState)\n\ndecorate the AbstractManoptSolverState s with specific decorators.\n\nOptional arguments\n\noptional arguments provide necessary details on the decorators.\n\ndebug=Array{Union{Symbol,DebugAction,String,Int},1}(): a set of symbols representing DebugActions, Strings used as dividers and a sub-sampling integer.
These are passed as a DebugGroup within :Iteration to the DebugSolverState decorator dictionary. Only exception is :Stop that is passed to :Stop.\nrecord=Array{Union{Symbol,RecordAction,Int},1}(): specify recordings by using Symbols or RecordActions directly. An integer can again be used for only recording every ith iteration.\nreturn_state=false: indicate whether to wrap the options in a ReturnSolverState, indicating that the solver should return options and not (only) the minimizer.\n\nother keywords are ignored.\n\nSee also\n\nDebugSolverState, RecordSolverState, ReturnSolverState\n\n\n\n\n\n","category":"function"},{"location":"plans/state/","page":"Solver State","title":"Solver State","text":"A simple example is the","category":"page"},{"location":"plans/state/","page":"Solver State","title":"Solver State","text":"ReturnSolverState","category":"page"},{"location":"plans/state/#Manopt.ReturnSolverState","page":"Solver State","title":"Manopt.ReturnSolverState","text":"ReturnSolverState{O<:AbstractManoptSolverState} <: AbstractManoptSolverState\n\nThis internal type is used to indicate that the contained AbstractManoptSolverState state should be returned at the end of a solver instead of the usual minimizer.\n\nSee also\n\nget_solver_result\n\n\n\n\n\n","category":"type"},{"location":"plans/state/","page":"Solver State","title":"Solver State","text":"as well as DebugSolverState and RecordSolverState.","category":"page"},{"location":"plans/state/#State-actions","page":"Solver State","title":"State actions","text":"","category":"section"},{"location":"plans/state/","page":"Solver State","title":"Solver State","text":"A state action is a struct for callback functions that can be attached within for example the just mentioned debug decorator or the record decorator.","category":"page"},{"location":"plans/state/","page":"Solver State","title":"Solver 
State","text":"AbstractStateAction","category":"page"},{"location":"plans/state/#Manopt.AbstractStateAction","page":"Solver State","title":"Manopt.AbstractStateAction","text":"AbstractStateAction\n\na common Type for AbstractStateActions that might be triggered in decorators, for example within the DebugSolverState or within the RecordSolverState.\n\n\n\n\n\n","category":"type"},{"location":"plans/state/","page":"Solver State","title":"Solver State","text":"Several state decorators or actions might store intermediate values like the (last) iterate to compute some change or the last gradient. In order to minimise the storage of these, there is a generic StoreStateAction that acts as generic common storage that can be shared among different actions.","category":"page"},{"location":"plans/state/","page":"Solver State","title":"Solver State","text":"StoreStateAction\nget_storage\nhas_storage\nupdate_storage!\nPointStorageKey\nVectorStorageKey","category":"page"},{"location":"plans/state/#Manopt.StoreStateAction","page":"Solver State","title":"Manopt.StoreStateAction","text":"StoreStateAction <: AbstractStateAction\n\ninternal storage for AbstractStateActions to store a tuple of fields from an AbstractManoptSolverStates\n\nThis functor possesses the usual interface of functions called during an iteration and acts on (p, s, k), where p is a AbstractManoptProblem, s is an AbstractManoptSolverState and k is the current iteration.\n\nFields\n\nvalues: a dictionary to store interim values based on certain Symbols\nkeys: a Vector of Symbols to refer to fields of AbstractManoptSolverState\npoint_values: a NamedTuple of mutable values of points on a manifold to be stored in StoreStateAction. Manifold is later determined by AbstractManoptProblem passed to update_storage!.\npoint_init: a NamedTuple of boolean values indicating whether a point in point_values with matching key has been already initialized to a value. 
When it is false, it corresponds to a general value not being stored for the key present in the vector keys.\nvector_values: a NamedTuple of mutable values of tangent vectors on a manifold to be stored in StoreStateAction. Manifold is later determined by AbstractManoptProblem passed to update_storage!. It is not specified at which point the vectors are tangent, but for storage this should not matter.\nvector_init: a NamedTuple of boolean values indicating whether a tangent vector in vector_values with matching key has already been initialized to a value. When it is false, it corresponds to a general value not being stored for the key present in the vector keys.\nonce: whether to update the internal values only once per iteration\nlastStored: the last iterate at which this AbstractStateAction was called (to determine once)\n\nTo handle the general storage, use get_storage and has_storage with keys as Symbols. For the point storage use PointStorageKey. For tangent vector storage use VectorStorageKey.
Point and tangent storage have been optimized to be more efficient.\n\nConstructors\n\nStoreStateAction(s::Vector{Symbol})\n\nThis is equivalent to providing s via the keyword store_fields, except that here no manifold is necessary for the construction.\n\nStoreStateAction(M)\n\nKeyword arguments\n\nstore_fields (Symbol[])\nstore_points (Symbol[])\nstore_vectors (Symbol[])\n\nas vectors of symbols each referring to fields of the state (lower case symbols) or semantic ones (upper case).\n\np_init (rand(M)) but making sure this is not a number but a (mutable) array\nX_init (zero_vector(M, p_init))\n\nare used to initialize the point and vector storage; change these if you use other types (than the default) for your points/vectors on M.\n\nonce (true) whether to update internal storage only once per iteration or on every update call\n\n\n\n\n\n","category":"type"},{"location":"plans/state/#Manopt.get_storage","page":"Solver State","title":"Manopt.get_storage","text":"get_storage(a::AbstractStateAction, key::Symbol)\n\nReturn the internal value of the AbstractStateAction a at the Symbol key.\n\n\n\n\n\nget_storage(a::AbstractStateAction, ::PointStorageKey{key}) where {key}\n\nReturn the internal value of the AbstractStateAction a at the Symbol key that represents a point.\n\n\n\n\n\nget_storage(a::AbstractStateAction, ::VectorStorageKey{key}) where {key}\n\nReturn the internal value of the AbstractStateAction a at the Symbol key that represents a vector.\n\n\n\n\n\n","category":"function"},{"location":"plans/state/#Manopt.has_storage","page":"Solver State","title":"Manopt.has_storage","text":"has_storage(a::AbstractStateAction, key::Symbol)\n\nReturn whether the AbstractStateAction a has a value stored at the Symbol key.\n\n\n\n\n\nhas_storage(a::AbstractStateAction, ::PointStorageKey{key}) where {key}\n\nReturn whether the AbstractStateAction a has a point value stored at the Symbol key.\n\n\n\n\n\nhas_storage(a::AbstractStateAction, ::VectorStorageKey{key}) where
{key}\n\nReturn whether the AbstractStateAction a has a tangent vector value stored at the Symbol key.\n\n\n\n\n\n","category":"function"},{"location":"plans/state/#Manopt.update_storage!","page":"Solver State","title":"Manopt.update_storage!","text":"update_storage!(a::AbstractStateAction, amp::AbstractManoptProblem, s::AbstractManoptSolverState)\n\nUpdate the internal values of the AbstractStateAction a to the ones given in the AbstractManoptSolverState s. Optimized using the information from amp.\n\n\n\n\n\nupdate_storage!(a::AbstractStateAction, d::Dict{Symbol,<:Any})\n\nUpdate the internal values of the AbstractStateAction a to the ones given in the dictionary d. The values are merged, where the values from d are preferred.\n\n\n\n\n\n","category":"function"},{"location":"plans/state/#Manopt.PointStorageKey","page":"Solver State","title":"Manopt.PointStorageKey","text":"struct PointStorageKey{key} end\n\nRefer to point storage of StoreStateAction in get_storage and has_storage functions\n\n\n\n\n\n","category":"type"},{"location":"plans/state/#Manopt.VectorStorageKey","page":"Solver State","title":"Manopt.VectorStorageKey","text":"struct VectorStorageKey{key} end\n\nRefer to tangent storage of StoreStateAction in get_storage and has_storage functions\n\n\n\n\n\n","category":"type"},{"location":"plans/state/","page":"Solver State","title":"Solver State","text":"as well as two internal functions","category":"page"},{"location":"plans/state/","page":"Solver State","title":"Solver State","text":"_storage_copy_vector\n_storage_copy_point","category":"page"},{"location":"plans/state/#Manopt._storage_copy_vector","page":"Solver State","title":"Manopt._storage_copy_vector","text":"_storage_copy_vector(M::AbstractManifold, X)\n\nMake a copy of tangent vector X from manifold M for storage in StoreStateAction.\n\n\n\n\n\n","category":"function"},{"location":"plans/state/#Manopt._storage_copy_point","page":"Solver
State","title":"Manopt._storage_copy_point","text":"_storage_copy_point(M::AbstractManifold, p)\n\nMake a copy of point p from manifold M for storage in StoreStateAction.\n\n\n\n\n\n","category":"function"},{"location":"plans/state/#Abstract-states","page":"Solver State","title":"Abstract states","text":"","category":"section"},{"location":"plans/state/","page":"Solver State","title":"Solver State","text":"In a few cases it is useful to have a hierarchy of types. These are","category":"page"},{"location":"plans/state/","page":"Solver State","title":"Solver State","text":"AbstractSubProblemSolverState\nAbstractGradientSolverState\nAbstractHessianSolverState\nAbstractPrimalDualSolverState","category":"page"},{"location":"plans/state/#Manopt.AbstractSubProblemSolverState","page":"Solver State","title":"Manopt.AbstractSubProblemSolverState","text":"AbstractSubProblemSolverState <: AbstractManoptSolverState\n\nAn abstract type for solvers that involve a subsolver.\n\n\n\n\n\n","category":"type"},{"location":"plans/state/#Manopt.AbstractGradientSolverState","page":"Solver State","title":"Manopt.AbstractGradientSolverState","text":"AbstractGradientSolverState <: AbstractManoptSolverState\n\nA generic AbstractManoptSolverState type for gradient based options data.\n\nIt assumes that\n\nthe iterate is stored in the field p\nthe gradient at p is stored in X.\n\nSee also\n\nGradientDescentState, StochasticGradientDescentState, SubGradientMethodState, QuasiNewtonState.\n\n\n\n\n\n","category":"type"},{"location":"plans/state/#Manopt.AbstractHessianSolverState","page":"Solver State","title":"Manopt.AbstractHessianSolverState","text":"AbstractHessianSolverState <: AbstractGradientSolverState\n\nAn AbstractManoptSolverState type to represent algorithms that employ the Hessian. 
These options are assumed to have a field (gradient) to store the current gradient operatornamegradf(x)\n\n\n\n\n\n","category":"type"},{"location":"plans/state/#Manopt.AbstractPrimalDualSolverState","page":"Solver State","title":"Manopt.AbstractPrimalDualSolverState","text":"AbstractPrimalDualSolverState\n\nA general type for all primal dual based options to be used within primal dual based algorithms\n\n\n\n\n\n","category":"type"},{"location":"plans/state/","page":"Solver State","title":"Solver State","text":"For the sub problem state, there are two access functions","category":"page"},{"location":"plans/state/","page":"Solver State","title":"Solver State","text":"get_sub_problem\nget_sub_state","category":"page"},{"location":"plans/state/#Manopt.get_sub_problem","page":"Solver State","title":"Manopt.get_sub_problem","text":"get_sub_problem(ams::AbstractSubProblemSolverState)\n\nAccess the sub problem of a solver state that involves a sub optimisation task. By default this returns ams.sub_problem.\n\n\n\n\n\n","category":"function"},{"location":"plans/state/#Manopt.get_sub_state","page":"Solver State","title":"Manopt.get_sub_state","text":"get_sub_state(ams::AbstractSubProblemSolverState)\n\nAccess the sub state of a solver state that involves a sub optimisation task. By default this returns ams.sub_state.\n\n\n\n\n\n","category":"function"},{"location":"about/#About","page":"About","title":"About","text":"","category":"section"},{"location":"about/","page":"About","title":"About","text":"Manopt.jl inherited its name from Manopt, a Matlab toolbox for optimization on manifolds. 
This Julia package was started and is currently maintained by Ronny Bergmann.","category":"page"},{"location":"about/#Contributors","page":"About","title":"Contributors","text":"","category":"section"},{"location":"about/","page":"About","title":"About","text":"Thanks to the following contributors to Manopt.jl:","category":"page"},{"location":"about/","page":"About","title":"About","text":"Constantin Ahlmann-Eltze implemented the gradient and differential check functions\nRenée Dornig implemented the particle swarm, the Riemannian Augmented Lagrangian Method, the Exact Penalty Method, as well as the NonmonotoneLinesearch. These solvers are also the first ones with modular/exchangeable sub solvers.\nWillem Diepeveen implemented the primal-dual Riemannian semismooth Newton solver.\nHajg Jasa implemented the convex bundle method and the proximal bundle method and a default subsolver for each of them.\nEven Stephansen Kjemsås contributed to the implementation of the Frank Wolfe Method solver.\nMathias Ravn Munkvold contributed most of the implementation of the Adaptive Regularization with Cubics solver as well as its Lanczos subsolver\nTom-Christian Riemer implemented the trust regions and quasi Newton solvers as well as the truncated conjugate gradient descent subsolver.\nMarkus A. Stokkenes contributed most of the implementation of the Interior Point Newton Method as well as its default Conjugate Residual subsolver\nManuel Weiss implemented most of the conjugate gradient update rules","category":"page"},{"location":"about/","page":"About","title":"About","text":"as well as various contributors providing small extensions, finding small bugs and mistakes and fixing them by opening PRs.
Thanks to all of you.","category":"page"},{"location":"about/","page":"About","title":"About","text":"If you want to contribute a manifold or algorithm or have any questions, visit the GitHub repository to clone/fork the repository or open an issue.","category":"page"},{"location":"about/#Work-using-Manopt.jl","page":"About","title":"Work using Manopt.jl","text":"","category":"section"},{"location":"about/","page":"About","title":"About","text":"ExponentialFamilyProjection.jl package uses Manopt.jl to project arbitrary functions onto the closest exponential family distributions. The package also integrates with RxInfer.jl to enable Bayesian inference in a larger set of probabilistic models.\nCaesar.jl within non-Gaussian factor graph inference algorithms","category":"page"},{"location":"about/","page":"About","title":"About","text":"Is a package missing? Open an issue! It would be great to collect anything and anyone using Manopt.jl","category":"page"},{"location":"about/#Further-packages","page":"About","title":"Further packages","text":"","category":"section"},{"location":"about/","page":"About","title":"About","text":"Manopt.jl belongs to the Manopt family:","category":"page"},{"location":"about/","page":"About","title":"About","text":"manopt.org The Matlab version of Manopt, see also their :octocat: GitHub repository\npymanopt.org The Python version of Manopt providing also several AD backends, see also their :octocat: GitHub repository","category":"page"},{"location":"about/","page":"About","title":"About","text":"but there are also more packages providing tools on manifolds in other languages","category":"page"},{"location":"about/","page":"About","title":"About","text":"Jax Geometry (Python/Jax) for differential geometry and stochastic dynamics with deep learning\nGeomstats (Python with several backends) focusing on statistics and machine learning :octocat: GitHub repository\nGeoopt (Python & PyTorch) Riemannian ADAM & SGD. 
:octocat: GitHub repository\nMcTorch (Python & PyTorch) Riemannian SGD, Adagrad, ASA & CG.\nROPTLIB (C++) a Riemannian OPTimization LIBrary :octocat: GitHub repository\nTF Riemopt (Python & TensorFlow) Riemannian optimization using TensorFlow","category":"page"},{"location":"tutorials/GeodesicRegression/#How-to-perform-Geodesic-Regression","page":"Do geodesic regression","title":"How to perform Geodesic Regression","text":"","category":"section"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"Ronny Bergmann","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"Geodesic regression generalizes linear regression to Riemannian manifolds. Let’s first phrase it informally as follows:","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"For given data points d_1ldotsd_n on a Riemannian manifold mathcal M, find the geodesic that “best explains” the data.","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"The meaning of “best explains” still has to be clarified.
We distinguish two cases: time labelled data and unlabelled data","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":" using Manopt, ManifoldDiff, Manifolds, Random, Colors\n using LinearAlgebra: svd\n Random.seed!(42);","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"We use the following data, where we want to highlight one of the points.","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"n = 7\nσ = π / 8\nS = Sphere(2)\nbase = 1 / sqrt(2) * [1.0, 0.0, 1.0]\ndir = [-0.75, 0.5, 0.75]\ndata_orig = [exp(S, base, dir, t) for t in range(-0.5, 0.5; length=n)]\n# add noise to the points on the geodesic\ndata = map(p -> exp(S, p, rand(S; vector_at=p, σ=σ)), data_orig)\nhighlighted = 4;","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"(Image: The given data)","category":"page"},{"location":"tutorials/GeodesicRegression/#Time-Labeled-Data","page":"Do geodesic regression","title":"Time Labeled Data","text":"","category":"section"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"If for each data item d_i we are also given a time point t_iinmathbb R, which are pairwise different, then we can use the least squares error to state the objective function as [Fle13]","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"F(pX) = frac12sum_i=1^n d_mathcal M^2(γ_pX(t_i) d_i)","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"where d_mathcal M is the Riemannian distance and γ_pX is the geodesic with γ(0) = p and 
dotgamma(0) = X.","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"For the real-valued case mathcal M = mathbb R^m the solution (p^* X^*) is given in closed form as follows: with d^* = frac1ndisplaystylesum_i=1^nd_i and t^* = frac1ndisplaystylesum_i=1^n t_i we get","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":" X^* = fracsum_i=1^n (d_i-d^*)(t-t^*)sum_i=1^n (t_i-t^*)^2\nquadtext and quad\np^* = d^* - t^*X^*","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"and hence the linear regression result is the line γ_p^*X^*(t) = p^* + tX^*.","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"On a Riemannian manifold we can phrase this as an optimization problem on the tangent bundle, which is the disjoint union of all tangent spaces, as","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"operatorname*argmin_(pX) in mathrmTmathcal M F(pX)","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"Due to linearity, the gradient of F(pX) is the sum of the single gradients of","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":" frac12d_mathcal M^2bigl(γ_pX(t_i)d_ibigr)\n = frac12d_mathcal M^2bigl(exp_p(t_iX)d_ibigr)\n quad i1ldotsn","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"which can be computed using a chain rule of the squared distance and the exponential map, see for example [BG18] for details or 
Equations (7) and (8) of [Fle13]:","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"M = TangentBundle(S)\nstruct RegressionCost{T,S}\n data::T\n times::S\nend\nRegressionCost(data::T, times::S) where {T,S} = RegressionCost{T,S}(data, times)\nfunction (a::RegressionCost)(M, x)\n pts = [geodesic(M.manifold, x[M, :point], x[M, :vector], ti) for ti in a.times]\n return 1 / 2 * sum(distance.(Ref(M.manifold), pts, a.data) .^ 2)\nend\nstruct RegressionGradient!{T,S}\n data::T\n times::S\nend\nfunction RegressionGradient!(data::T, times::S) where {T,S}\n return RegressionGradient!{T,S}(data, times)\nend\nfunction (a::RegressionGradient!)(M, Y, x)\n pts = [geodesic(M.manifold, x[M, :point], x[M, :vector], ti) for ti in a.times]\n gradients = grad_distance.(Ref(M.manifold), a.data, pts)\n Y[M, :point] .= sum(\n ManifoldDiff.adjoint_differential_exp_basepoint.(\n Ref(M.manifold),\n Ref(x[M, :point]),\n [ti * x[M, :vector] for ti in a.times],\n gradients,\n ),\n )\n Y[M, :vector] .= sum(\n ManifoldDiff.adjoint_differential_exp_argument.(\n Ref(M.manifold),\n Ref(x[M, :point]),\n [ti * x[M, :vector] for ti in a.times],\n gradients,\n ),\n )\n return Y\nend","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"For the Euclidean case, the result is given by the first principal component of a principal component analysis, see PCR which is given by p^* = frac1ndisplaystylesum_i=1^n d_i and the direction X^* is obtained by defining the zero mean data matrix","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"D = bigl(d_1-p^* ldots d_n-p^*bigr) in mathbb R^mn","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"and taking X^* as an eigenvector to the 
largest eigenvalue of D^mathrmTD.","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"We can do something similar, when considering the tangent space at the (Riemannian) mean of the data and then do a PCA on the coordinate coefficients with respect to a basis.","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"m = mean(S, data)\nA = hcat(\n map(x -> get_coordinates(S, m, log(S, m, x), DefaultOrthonormalBasis()), data)...\n)\npca1 = get_vector(S, m, svd(A).U[:, 1], DefaultOrthonormalBasis())\nx0 = ArrayPartition(m, pca1)","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"([0.6998621681746481, -0.013681674945026638, 0.7141468737791822], [0.5931302057517893, -0.5459465115717783, -0.5917254139611094])","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"The optimal “time labels” are then just the projections t_i = d_iX^*, i=1ldotsn.","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"t = map(d -> inner(S, m, pca1, log(S, m, d)), data)","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"7-element Vector{Float64}:\n 1.0763904949888323\n 0.4594060193318443\n -0.5030195874833682\n 0.02135686940521725\n -0.6158692507563633\n -0.24431652575028764\n -0.2259012492666664","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"And we can call the gradient descent. Note that since gradF! 
works in place of Y, we have to set the evaluation type accordingly.","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"y = gradient_descent(\n M,\n RegressionCost(data, t),\n RegressionGradient!(data, t),\n x0;\n evaluation=InplaceEvaluation(),\n stepsize=ArmijoLinesearch(\n M;\n initial_stepsize=1.0,\n contraction_factor=0.990,\n sufficient_decrease=0.05,\n stop_when_stepsize_less=1e-9,\n ),\n stopping_criterion=StopAfterIteration(200) |\n StopWhenGradientNormLess(1e-8) |\n StopWhenStepsizeLess(1e-9),\n debug=[:Iteration, \" | \", :Cost, \"\\n\", :Stop, 50],\n)","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"Initial | f(x): 0.142862\n# 50 | f(x): 0.141113\n# 100 | f(x): 0.141113\n# 150 | f(x): 0.141113\n# 200 | f(x): 0.141113\nThe algorithm reached its maximal number of iterations (200).\n\n([0.7119768725361988, 0.009463059143003981, 0.7021391482357537], [0.590008151835008, -0.5543272518659472, -0.5908038715512287])","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"For the result, we can generate and plot all involved geodesics","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"dense_t = range(-0.5, 0.5; length=100)\ngeo = geodesic(S, y[M, :point], y[M, :vector], dense_t)\ninit_geo = geodesic(S, x0[M, :point], x0[M, :vector], dense_t)\ngeo_pts = geodesic(S, y[M, :point], y[M, :vector], t)\ngeo_conn_highlighted = shortest_geodesic(\n S, data[highlighted], geo_pts[highlighted], 0.5 .+ dense_t\n);","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"(Image: Result of Geodesic 
Regression)","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"In this image, together with the blue data points, you see the geodesic of the initialization in black (evaluated on -frac12frac12), the final point on the tangent bundle in orange, as well as the resulting regression geodesic in teal (on the same interval as the start), together with small teal points indicating the time points on the geodesic corresponding to the data. Additionally, a thin blue line indicates the geodesic between a data point and its corresponding point on the geodesic. While in Euclidean space this would be the closest point, and hence the two directions (along the geodesic vs. to the data point) would be orthogonal, here we have","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"inner(\n S,\n geo_pts[highlighted],\n log(S, geo_pts[highlighted], geo_pts[highlighted + 1]),\n log(S, geo_pts[highlighted], data[highlighted]),\n)","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"0.002487393068917863","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"But we also started with one of the best scenarios of equally spaced points on a geodesic obstructed by noise.","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"This gets worse if you start with less evenly distributed data","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"data2 = [exp(S, base, dir, t) for t in [-0.5, -0.49, -0.48, 0.1, 0.48, 0.49, 0.5]]\ndata2 = map(p -> exp(S, p, rand(S; vector_at=p, σ=σ / 2)), data2)\nm2 = mean(S, 
data2)\nA2 = hcat(\n map(x -> get_coordinates(S, m, log(S, m, x), DefaultOrthonormalBasis()), data2)...\n)\npca2 = get_vector(S, m, svd(A2).U[:, 1], DefaultOrthonormalBasis())\nx1 = ArrayPartition(m, pca2)\nt2 = map(d -> inner(S, m2, pca2, log(S, m2, d)), data2)","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"7-element Vector{Float64}:\n 0.8226008307680276\n 0.470952643700004\n 0.7974195537403082\n 0.01533949241264346\n -0.6546705405852389\n -0.8913273825362389\n -0.5775954445730889","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"then we run again","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"y2 = gradient_descent(\n M,\n RegressionCost(data2, t2),\n RegressionGradient!(data2, t2),\n x1;\n evaluation=InplaceEvaluation(),\n stepsize=ArmijoLinesearch(\n M;\n initial_stepsize=1.0,\n contraction_factor=0.990,\n sufficient_decrease=0.05,\n stop_when_stepsize_less=1e-9,\n ),\n stopping_criterion=StopAfterIteration(200) |\n StopWhenGradientNormLess(1e-8) |\n StopWhenStepsizeLess(1e-9),\n debug=[:Iteration, \" | \", :Cost, \"\\n\", :Stop, 3],\n);","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"Initial | f(x): 0.089844\n# 3 | f(x): 0.085364\n# 6 | f(x): 0.085364\n# 9 | f(x): 0.085364\n# 12 | f(x): 0.085364\n# 15 | f(x): 0.085364\n# 18 | f(x): 0.085364\n# 21 | f(x): 0.085364\n# 24 | f(x): 0.085364\n# 27 | f(x): 0.085364\n# 30 | f(x): 0.085364\n# 33 | f(x): 0.085364\n# 36 | f(x): 0.085364\n# 39 | f(x): 0.085364\n# 42 | f(x): 0.085364\n# 45 | f(x): 0.085364\n# 48 | f(x): 0.085364\n# 51 | f(x): 0.085364\n# 54 | f(x): 0.085364\n# 57 | f(x): 0.085364\n# 60 | f(x): 0.085364\n# 63 | f(x): 0.085364\n# 66 | f(x): 0.085364\n# 69 | 
f(x): 0.085364\n# 72 | f(x): 0.085364\n# 75 | f(x): 0.085364\n# 78 | f(x): 0.085364\n# 81 | f(x): 0.085364\n# 84 | f(x): 0.085364\n# 87 | f(x): 0.085364\n# 90 | f(x): 0.085364\n# 93 | f(x): 0.085364\n# 96 | f(x): 0.085364\n# 99 | f(x): 0.085364\n# 102 | f(x): 0.085364\n# 105 | f(x): 0.085364\n# 108 | f(x): 0.085364\n# 111 | f(x): 0.085364\n# 114 | f(x): 0.085364\n# 117 | f(x): 0.085364\n# 120 | f(x): 0.085364\n# 123 | f(x): 0.085364\n# 126 | f(x): 0.085364\n# 129 | f(x): 0.085364\n# 132 | f(x): 0.085364\n# 135 | f(x): 0.085364\n# 138 | f(x): 0.085364\n# 141 | f(x): 0.085364\n# 144 | f(x): 0.085364\n# 147 | f(x): 0.085364\n# 150 | f(x): 0.085364\n# 153 | f(x): 0.085364\n# 156 | f(x): 0.085364\n# 159 | f(x): 0.085364\n# 162 | f(x): 0.085364\n# 165 | f(x): 0.085364\n# 168 | f(x): 0.085364\n# 171 | f(x): 0.085364\n# 174 | f(x): 0.085364\n# 177 | f(x): 0.085364\n# 180 | f(x): 0.085364\n# 183 | f(x): 0.085364\n# 186 | f(x): 0.085364\n# 189 | f(x): 0.085364\n# 192 | f(x): 0.085364\n# 195 | f(x): 0.085364\n# 198 | f(x): 0.085364\nThe algorithm reached its maximal number of iterations (200).","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"For plotting we again generate all data","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"geo2 = geodesic(S, y2[M, :point], y2[M, :vector], dense_t)\ninit_geo2 = geodesic(S, x1[M, :point], x1[M, :vector], dense_t)\ngeo_pts2 = geodesic(S, y2[M, :point], y2[M, :vector], t2)\ngeo_conn_highlighted2 = shortest_geodesic(\n S, data2[highlighted], geo_pts2[highlighted], 0.5 .+ dense_t\n);","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"(Image: A second result with different time points)","category":"page"},{"location":"tutorials/GeodesicRegression/#Unlabeled-Data","page":"Do geodesic 
regression","title":"Unlabeled Data","text":"","category":"section"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"If we are not given time points t_i, then the optimization problem extends, informally speaking, to also finding the “best fitting” time points (in the sense of smallest error). To formalize, the objective function here reads","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"F(p X t) = frac12sum_i=1^n d_mathcal M^2(γ_pX(t_i) d_i)","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"where t = (t_1ldotst_n) in mathbb R^n is now an additional parameter of the objective function. We write F_1(p X) to refer to the function on the tangent bundle for fixed values of t (as the one in the last part) and F_2(t) for the function F(p X t) as a function in t with fixed values (p X).","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"For the Euclidean case, there is no necessity to optimize with respect to t, as we saw above for the initialization of the fixed time points.","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"On a Riemannian manifold this can be stated as a problem on the product manifold mathcal N = mathrmTmathcal M times mathbb R^n, i.e.","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"N = M × Euclidean(length(t2))","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"ProductManifold with 2 submanifolds:\n TangentBundle(Sphere(2, ℝ))\n Euclidean(7; 
field=ℝ)","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":" operatorname*argmin_bigl((pX)tbigr)inmathcal N F(p X t)","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"In this tutorial we present an approach to solve this using an alternating gradient descent scheme. To be precise, we define the cost function now on the product manifold","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"struct RegressionCost2{T}\n data::T\nend\nRegressionCost2(data::T) where {T} = RegressionCost2{T}(data)\nfunction (a::RegressionCost2)(N, x)\n TM = N[1]\n pts = [\n geodesic(TM.manifold, x[N, 1][TM, :point], x[N, 1][TM, :vector], ti) for\n ti in x[N, 2]\n ]\n return 1 / 2 * sum(distance.(Ref(TM.manifold), pts, a.data) .^ 2)\nend","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"The gradient comes in two parts, namely (a) the same gradient as before w.r.t. 
(pX) Tmathcal M, just now with a fixed t in mind for the second component of the product manifold mathcal N","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"struct RegressionGradient2a!{T}\n data::T\nend\nRegressionGradient2a!(data::T) where {T} = RegressionGradient2a!{T}(data)\nfunction (a::RegressionGradient2a!)(N, Y, x)\n TM = N[1]\n p = x[N, 1]\n pts = [geodesic(TM.manifold, p[TM, :point], p[TM, :vector], ti) for ti in x[N, 2]]\n gradients = Manopt.grad_distance.(Ref(TM.manifold), a.data, pts)\n Y[TM, :point] .= sum(\n ManifoldDiff.adjoint_differential_exp_basepoint.(\n Ref(TM.manifold),\n Ref(p[TM, :point]),\n [ti * p[TM, :vector] for ti in x[N, 2]],\n gradients,\n ),\n )\n Y[TM, :vector] .= sum(\n ManifoldDiff.adjoint_differential_exp_argument.(\n Ref(TM.manifold),\n Ref(p[TM, :point]),\n [ti * p[TM, :vector] for ti in x[N, 2]],\n gradients,\n ),\n )\n return Y\nend","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"Finally, for a fixed point x=(pX) mathrmTmathcal M, we additionally look at the gradient with respect to tmathbb R^n, the second component, which is given by","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":" (operatornamegradF_2(t))_i\n = - dot γ_pX(t_i) log_γ_pX(t_i)d_i_γ_pX(t_i) i = 1 ldots n","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"struct RegressionGradient2b!{T}\n data::T\nend\nRegressionGradient2b!(data::T) where {T} = RegressionGradient2b!{T}(data)\nfunction (a::RegressionGradient2b!)(N, Y, x)\n TM = N[1]\n p = x[N, 1]\n pts = [geodesic(TM.manifold, p[TM, :point], p[TM, :vector], ti) for ti in x[N, 2]]\n logs = log.(Ref(TM.manifold), pts, a.data)\n pt = map(\n d -> 
vector_transport_to(TM.manifold, p[TM, :point], p[TM, :vector], d), pts\n )\n Y .= -inner.(Ref(TM.manifold), pts, logs, pt)\n return Y\nend","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"We can reuse the computed initial values from before, just that now we are on a product manifold","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"x2 = ArrayPartition(x1, t2)\nF3 = RegressionCost2(data2)\ngradF3_vector = [RegressionGradient2a!(data2), RegressionGradient2b!(data2)];","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"and we run the algorithm","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"y3 = alternating_gradient_descent(\n N,\n F3,\n gradF3_vector,\n x2;\n evaluation=InplaceEvaluation(),\n debug=[:Iteration, \" | \", :Cost, \"\\n\", :Stop, 50],\n stepsize=ArmijoLinesearch(\n M;\n contraction_factor=0.999,\n sufficient_decrease=0.066,\n stop_when_stepsize_less=1e-11,\n retraction_method=ProductRetraction(SasakiRetraction(2), ExponentialRetraction()),\n ),\n inner_iterations=1,\n)","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"Initial | f(x): 0.089844\n# 50 | f(x): 0.091097\n# 100 | f(x): 0.091097\nThe algorithm reached its maximal number of iterations (100).\n\n(ArrayPartition{Float64, Tuple{Vector{Float64}, Vector{Float64}}}(([0.750222090700214, 0.031464227399200885, 0.6604368380243274], [0.6636489079535082, -0.3497538263293046, -0.737208025444054])), [0.7965909273713889, 0.43402264218923514, 0.755822122896529, 0.001059348203453764, -0.6421135044471217, -0.8635572995105818, 
-0.5546338813212247])","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"which we can collect into an image by creating the geodesics again","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"geo3 = geodesic(S, y3[N, 1][M, :point], y3[N, 1][M, :vector], dense_t)\ninit_geo3 = geodesic(S, x1[M, :point], x1[M, :vector], dense_t)\ngeo_pts3 = geodesic(S, y3[N, 1][M, :point], y3[N, 1][M, :vector], y3[N, 2])\nt3 = y3[N, 2]\ngeo_conns = shortest_geodesic.(Ref(S), data2, geo_pts3, Ref(0.5 .+ 4*dense_t));","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"which yields","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"(Image: The third result)","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"Note that the geodesics from the data to the regression geodesic meet at a nearly orthogonal angle.","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"Acknowledgement. Parts of this tutorial are based on the bachelor thesis of Jeremias Arf.","category":"page"},{"location":"tutorials/GeodesicRegression/#Literature","page":"Do geodesic regression","title":"Literature","text":"","category":"section"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"R. Bergmann and P.-Y. Gousenbourger. A variational model for data fitting on manifolds by minimizing the acceleration of a Bézier curve. Frontiers in Applied Mathematics and Statistics 4 (2018), arXiv:1807.10090.\n\n\n\nP. T. Fletcher. 
Geodesic regression and the theory of least squares on Riemannian manifolds. International Journal of Computer Vision 105, 171–185 (2013).\n\n\n\n","category":"page"},{"location":"solvers/FrankWolfe/#Frank—Wolfe-method","page":"Frank-Wolfe","title":"Frank—Wolfe method","text":"","category":"section"},{"location":"solvers/FrankWolfe/","page":"Frank-Wolfe","title":"Frank-Wolfe","text":"CurrentModule = Manopt","category":"page"},{"location":"solvers/FrankWolfe/","page":"Frank-Wolfe","title":"Frank-Wolfe","text":"Frank_Wolfe_method\nFrank_Wolfe_method!","category":"page"},{"location":"solvers/FrankWolfe/#Manopt.Frank_Wolfe_method","page":"Frank-Wolfe","title":"Manopt.Frank_Wolfe_method","text":"Frank_Wolfe_method(M, f, grad_f, p=rand(M))\nFrank_Wolfe_method(M, gradient_objective, p=rand(M); kwargs...)\nFrank_Wolfe_method!(M, f, grad_f, p; kwargs...)\nFrank_Wolfe_method!(M, gradient_objective, p; kwargs...)\n\nPerform the Frank-Wolfe algorithm to compute for mathcal C mathcal M the constrained problem\n\n operatorname*argmin_pmathcal C f(p)\n\nwhere the main step is a constrained optimisation within the algorithm, that is, the sub problem (Oracle)\n\n operatorname*argmin_q C operatornamegrad f(p_k) log_p_kq\n\nfor every iterate p_k together with a stepsize s_k1. 
The algorithm can be performed in-place of p.\n\nThis algorithm is inspired by but slightly more general than [WS22].\n\nThe next iterate is then given by p_k+1 = γ_p_kq_k(s_k), where by default γ is the shortest geodesic between the two points but can also be changed to use a retraction and its inverse.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\ngrad_f: the (Riemannian) gradient operatornamegradf: \\mathcal M → T_{p}\\mathcal M of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place\np: a point on the manifold mathcal M\n\nAlternatively to f and grad_f you can provide the corresponding AbstractManifoldGradientObjective gradient_objective directly.\n\nKeyword arguments\n\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstepsize=DecreasingStepsize(; length=2.0, shift=2): a functor inheriting from Stepsize to determine a step size\nstopping_criterion=StopAfterIteration(500)|StopWhenGradientNormLess(1.0e-6): a functor indicating that the stopping criterion is fulfilled\nsub_cost=FrankWolfeCost(p, X): the cost of the Frank-Wolfe sub problem. This is used to define the sub_objective= keyword and has hence no effect, if you set sub_objective directly.\nsub_grad=FrankWolfeGradient(p, X): the gradient of the Frank-Wolfe sub problem. This is used to define the sub_objective= keyword and has hence no effect, if you set sub_objective directly.\nsub_kwargs=(;): a named tuple of keyword arguments that are passed to decorate_objective! 
of the sub solver's objective, the decorate_state! of the sub solver's state, and the sub state constructor itself.\nsub_objective=ManifoldGradientObjective(sub_cost, sub_gradient): the objective for the Frank-Wolfe sub problem. This is used to define the sub_problem= keyword and has hence no effect, if you set sub_problem directly.\nsub_problem=DefaultManoptProblem(M, sub_objective): specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state=GradientDescentState(M, copy(M,p)): a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.\nX=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal Mstoring the gradient at the current iterate\nsub_stopping_criterion=StopAfterIteration(300)|StopWhenStepsizeLess(1e-8): a functor indicating that the stopping criterion is fulfilled. This is used to define the sub_state= keyword and has hence no effect, if you set sub_state directly.\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nIf you provide the ManifoldGradientObjective directly, the evaluation= keyword is ignored. 
The decorations are still applied to the objective.\n\nOutput\n\nthe obtained (approximate) minimizer p^*, see get_solver_return for details\n\n\n\n\n\n","category":"function"},{"location":"solvers/FrankWolfe/#Manopt.Frank_Wolfe_method!","page":"Frank-Wolfe","title":"Manopt.Frank_Wolfe_method!","text":"Frank_Wolfe_method(M, f, grad_f, p=rand(M))\nFrank_Wolfe_method(M, gradient_objective, p=rand(M); kwargs...)\nFrank_Wolfe_method!(M, f, grad_f, p; kwargs...)\nFrank_Wolfe_method!(M, gradient_objective, p; kwargs...)\n\nPerform the Frank-Wolfe algorithm to compute for mathcal C mathcal M the constrained problem\n\n operatorname*argmin_pmathcal C f(p)\n\nwhere the main step is a constrained optimisation within the algorithm, that is, the sub problem (Oracle)\n\n operatorname*argmin_q C operatornamegrad f(p_k) log_p_kq\n\nfor every iterate p_k together with a stepsize s_k1. The algorithm can be performed in-place of p.\n\nThis algorithm is inspired by but slightly more general than [WS22].\n\nThe next iterate is then given by p_k+1 = γ_p_kq_k(s_k), where by default γ is the shortest geodesic between the two points but can also be changed to use a retraction and its inverse.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\ngrad_f: the (Riemannian) gradient operatornamegradf: \\mathcal M → T_{p}\\mathcal M of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place\np: a point on the manifold mathcal M\n\nAlternatively to f and grad_f you can provide the corresponding AbstractManifoldGradientObjective gradient_objective directly.\n\nKeyword arguments\n\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). 
Since usually the first argument is the manifold, the modified argument is the second.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstepsize=DecreasingStepsize(; length=2.0, shift=2): a functor inheriting from Stepsize to determine a step size\nstopping_criterion=StopAfterIteration(500)|StopWhenGradientNormLess(1.0e-6): a functor indicating that the stopping criterion is fulfilled\nsub_cost=FrankWolfeCost(p, X): the cost of the Frank-Wolfe sub problem. This is used to define the sub_objective= keyword and has hence no effect, if you set sub_objective directly.\nsub_grad=FrankWolfeGradient(p, X): the gradient of the Frank-Wolfe sub problem. This is used to define the sub_objective= keyword and has hence no effect, if you set sub_objective directly.\nsub_kwargs=(;): a named tuple of keyword arguments that are passed to decorate_objective! of the sub solver's objective, the decorate_state! of the sub solver's state, and the sub state constructor itself.\nsub_objective=ManifoldGradientObjective(sub_cost, sub_gradient): the objective for the Frank-Wolfe sub problem. This is used to define the sub_problem= keyword and has hence no effect, if you set sub_problem directly.\nsub_problem=DefaultManoptProblem(M, sub_objective): specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state=GradientDescentState(M, copy(M,p)): a state to specify the sub solver to use. 
For a closed form solution, this indicates the type of function.\nX=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal Mstoring the gradient at the current iterate\nsub_stopping_criterion=StopAfterIteration(300)|StopWhenStepsizeLess(1e-8): a functor indicating that the stopping criterion is fulfilled. This is used to define the sub_state= keyword and has hence no effect, if you set sub_state directly.\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nIf you provide the ManifoldGradientObjective directly, the evaluation= keyword is ignored. The decorations are still applied to the objective.\n\nOutput\n\nthe obtained (approximate) minimizer p^*, see get_solver_return for details\n\n\n\n\n\n","category":"function"},{"location":"solvers/FrankWolfe/#State","page":"Frank-Wolfe","title":"State","text":"","category":"section"},{"location":"solvers/FrankWolfe/","page":"Frank-Wolfe","title":"Frank-Wolfe","text":"FrankWolfeState","category":"page"},{"location":"solvers/FrankWolfe/#Manopt.FrankWolfeState","page":"Frank-Wolfe","title":"Manopt.FrankWolfeState","text":"FrankWolfeState <: AbstractManoptSolverState\n\nA struct to store the current state of the Frank_Wolfe_method.\n\nIt comes in two forms, depending on the realisation of the subproblem.\n\nFields\n\np::P: a point on the manifold mathcal Mstoring the current iterate\nX::T: a tangent vector at the point p on the manifold mathcal Mstoring the gradient at the current iterate\ninverse_retraction_method::AbstractInverseRetractionMethod: an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nvector_transport_method::AbstractVectorTransportMethodP: a vector transport mathcal T_ to use, see the section on 
vector transports\nsub_problem::Union{AbstractManoptProblem, F}: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state::Union{AbstractManoptSolverState, F}: a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.\nstop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled\nstepsize::Stepsize: a functor inheriting from Stepsize to determine a step size\nretraction_method::AbstractRetractionMethod: a retraction operatornameretr to use, see the section on retractions\n\nThe sub task requires a method to solve\n\n operatorname*argmin_q C operatornamegrad f(p_k) log_p_kq\n\nConstructor\n\nFrankWolfeState(M, sub_problem, sub_state; kwargs...)\n\nInitialise the Frank Wolfe method state.\n\nFrankWolfeState(M, sub_problem; evaluation=AllocatingEvaluation(), kwargs...)\n\nInitialise the Frank Wolfe method state, where sub_problem is a closed form solution with evaluation as type of evaluation.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nsub_problem: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state: a state to specify the sub solver to use. 
For a closed form solution, this indicates the type of function.\n\nKeyword arguments\n\np=rand(M): a point on the manifold mathcal Mto specify the initial value\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstopping_criterion=StopAfterIteration(200)|StopWhenGradientNormLess(1e-6): a functor indicating that the stopping criterion is fulfilled\nstepsize=default_stepsize(M, FrankWolfeState): a functor inheriting from Stepsize to determine a step size\nX=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal Mto specify the representation of a tangent vector\n\nwhere the remaining fields from before are keyword arguments.\n\n\n\n\n\n","category":"type"},{"location":"solvers/FrankWolfe/#Helpers","page":"Frank-Wolfe","title":"Helpers","text":"","category":"section"},{"location":"solvers/FrankWolfe/","page":"Frank-Wolfe","title":"Frank-Wolfe","text":"For the inner sub-problem you can easily create the corresponding cost and gradient using","category":"page"},{"location":"solvers/FrankWolfe/","page":"Frank-Wolfe","title":"Frank-Wolfe","text":"FrankWolfeCost\nFrankWolfeGradient","category":"page"},{"location":"solvers/FrankWolfe/#Manopt.FrankWolfeCost","page":"Frank-Wolfe","title":"Manopt.FrankWolfeCost","text":"FrankWolfeCost{P,T}\n\nA structure to represent the oracle sub problem in the Frank_Wolfe_method. 
The cost function reads\n\nF(q) = X log_p q\n\nThe values p and X are stored within this functor and should be references to the iterate and gradient from within FrankWolfeState.\n\n\n\n\n\n","category":"type"},{"location":"solvers/FrankWolfe/#Manopt.FrankWolfeGradient","page":"Frank-Wolfe","title":"Manopt.FrankWolfeGradient","text":"FrankWolfeGradient{P,T}\n\nA structure to represent the gradient of the oracle sub problem in the Frank_Wolfe_method, that is for a given point p and a tangent vector X the function reads\n\nF(q) = X log_p q\n\nIts gradient can be computed easily using adjoint_differential_log_argument.\n\nThe values p and X are stored within this functor and should be references to the iterate and gradient from within FrankWolfeState.\n\n\n\n\n\n","category":"type"},{"location":"solvers/FrankWolfe/","page":"Frank-Wolfe","title":"Frank-Wolfe","text":"M. Weber and S. Sra. Riemannian Optimization via Frank-Wolfe Methods. Mathematical Programming 199, 525–556 (2022).\n\n\n\n","category":"page"},{"location":"tutorials/ImplementASolver/#How-to-implementing-your-own-solver","page":"Implement a solver","title":"How to implement your own solver","text":"","category":"section"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"Ronny Bergmann","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"When you have used a few solvers from Manopt.jl, for example in the opening tutorial Get started: optimize!, you might come to the idea of implementing a solver yourself.","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"After a short introduction of the algorithm we aim to implement, this tutorial first discusses the structural details, for example what a solver consists of and “works with”. Afterwards, we show how to implement the algorithm. 
Finally, we discuss how to make the algorithm both convenient for the user and initialized in such a way that it can benefit from features already available in Manopt.jl.","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"note: Note\nIf you have implemented your own solver, we would be very happy to have that within Manopt.jl as well, so maybe consider opening a Pull Request","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"using Manopt, Manifolds, Random","category":"page"},{"location":"tutorials/ImplementASolver/#Our-guiding-example:-a-random-walk-minimization","page":"Implement a solver","title":"Our guiding example: a random walk minimization","text":"","category":"section"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"Since most serious algorithms should be implemented in Manopt.jl itself directly, we implement a solver that randomly walks on the manifold and keeps track of the lowest point visited. 
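Stripped of any Manopt.jl machinery, such a random walk fits in a few lines of plain Julia on the unit sphere S², using ordinary vectors; the function name random_walk and the projection-based retraction normalize(p + X) are illustrative choices for this sketch, not part of any package API.

```julia
using LinearAlgebra, Random

# Random-walk minimization on the unit sphere S²: take steps of length σ in
# random tangent directions and remember the best (lowest-cost) point visited.
function random_walk(f, p; σ=0.1, iterations=200, rng=Random.default_rng())
    q = copy(p)                        # best point visited so far
    for _ in 1:iterations
        X = randn(rng, length(p))
        X .-= dot(X, p) .* p           # project onto the tangent space at p
        X .*= σ / norm(X)              # rescale the direction to length σ
        p = normalize(p + X)           # “walk”: retract back onto the sphere
        f(p) < f(q) && (q = copy(p))   # keep track of the best point
    end
    return q
end

height(p) = -p[3]                      # minimal at the north pole [0, 0, 1]
q = random_walk(height, normalize([1.0, 1.0, 1.0]); σ=0.2, iterations=1000)
```

This sketch already contains all the ingredients (iterate, best point, step length σ, retraction, iteration limit) that the generic Manopt.jl version distributes over a problem and a solver state.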
As for algorithms in Manopt.jl we aim to implement this generically for any manifold that is implemented using ManifoldsBase.jl.","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"The random walk minimization","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"Given:","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"a manifold mathcal M\na starting point p=p^(0)\na cost function f mathcal M → ℝ.\na parameter sigma > 0.\na retraction operatornameretr_p(X) that maps X ∈ T_pmathcal M to the manifold.","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"We can run the following steps of the algorithm","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"set k=0\nset our best point q = p^(0)\nRepeat until a stopping criterion is fulfilled\nChoose a random tangent vector X^(k) ∈ T_p^(k)mathcal M of length lVert X^(k) rVert = sigma\n“Walk” along this direction, that is p^(k+1) = operatornameretr_p^(k)(X^(k))\nIf f(p^(k+1)) < f(q) set q = p^(k+1) as our new best visited point\nReturn q as the resulting best point we visited","category":"page"},{"location":"tutorials/ImplementASolver/#Preliminaries:-elements-a-solver-works-on","page":"Implement a solver","title":"Preliminaries: elements a solver works on","text":"","category":"section"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"There are two main ingredients a solver needs: a problem to work on and the state of a solver, which “identifies” the solver and stores intermediate 
results.","category":"page"},{"location":"tutorials/ImplementASolver/#Specifying-the-task:-an-AbstractManoptProblem","page":"Implement a solver","title":"Specifying the task: an AbstractManoptProblem","text":"","category":"section"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"A problem in Manopt.jl usually consists of a manifold (an AbstractManifold) and an AbstractManifoldObjective describing the function we have and its features. In our case the objective is (just) a ManifoldCostObjective that stores the cost function f(M,p) -> R. More generally, it might for example store a gradient function or the Hessian or any other information we have about our task.","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"This is something independent of the solver itself, since it only identifies the problem we want to solve independent of how we want to solve it, or in other words, this type contains all information that is static and independent of the specific solver at hand.","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"Usually the problem’s variable is called mp.","category":"page"},{"location":"tutorials/ImplementASolver/#Specifying-a-solver:-an-AbstractManoptSolverState","page":"Implement a solver","title":"Specifying a solver: an AbstractManoptSolverState","text":"","category":"section"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"Everything that is needed by a solver during the iterations, that is all its parameters and any interim values needed beyond a single iteration, is stored in a subtype of the AbstractManoptSolverState. 
This identifies the solver uniquely.","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"In our case we want to store five things","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"the current iterate p=p^(k)\nthe best visited point q\nthe parameter sigma > 0\nthe retraction operatornameretr to use (cf. retractions and inverse retractions)\na criterion for when to stop: a StoppingCriterion","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"We can define this as","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"mutable struct RandomWalkState{\n    P,\n    R<:AbstractRetractionMethod,\n    S<:StoppingCriterion,\n} <: AbstractManoptSolverState\n    p::P\n    q::P\n    σ::Float64\n    retraction_method::R\n    stop::S\nend","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"The stopping criterion is usually stored in the state’s stop field. If you have a reason to do otherwise, you have one more function to implement (see next section). 
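The type parameters P, R, S make every field of the state concretely typed once an instance is constructed, so the compiler can specialize on them. A standalone sketch of the difference, with made-up type names:

```julia
abstract type AbstractStep end
struct FixedStep <: AbstractStep end

# Abstractly typed field: values are boxed and accesses dynamically dispatched.
struct LooseState
    step::AbstractStep
end

# Parametrically typed field: concrete for every constructed instance.
struct TightState{S<:AbstractStep}
    step::S
end

fieldtype(LooseState, :step)             # AbstractStep (abstract)
fieldtype(TightState{FixedStep}, :step)  # FixedStep (concrete)
```

This is the same pattern RandomWalkState uses for its retraction method and stopping criterion.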
For ease of use, a constructor can be provided that, for example, chooses a good default for the retraction based on a given manifold.","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"function RandomWalkState(M::AbstractManifold, p::P=rand(M);\n    σ = 0.1,\n    retraction_method::R=default_retraction_method(M, typeof(p)),\n    stopping_criterion::S=StopAfterIteration(200)\n) where {P, R<:AbstractRetractionMethod, S<:StoppingCriterion}\n    return RandomWalkState{P,R,S}(p, copy(M, p), σ, retraction_method, stopping_criterion)\nend","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"Parametrising the state avoids abstractly typed fields. The keyword arguments for the retraction and stopping criterion are the ones usually used in Manopt.jl and provide an easy way to construct this state now.","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"States usually have a shortened name as their variable; we use rws for our state here.","category":"page"},{"location":"tutorials/ImplementASolver/#Implementing-your-solver","page":"Implement a solver","title":"Implementing your solver","text":"","category":"section"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"There are basically only four methods we need to implement for our solver","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"initialize_solver!(mp, rws) which initialises the solver before the first iteration\nstep_solver!(mp, rws, i) which implements the ith iteration, where i is given to you as the third parameter\nget_iterate(rws) which accesses the iterate from other places in the solver\nget_solver_result(rws) returning the solver’s final (best) point we reached. 
By default this would return the last iterate rws.p (or, more precisely, call get_iterate), but since we randomly walk and remember our best point in q, this has to return rws.q.","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"The first two functions are in-place functions, that is, they modify our solver state rws. You implement these by multiple dispatch on the types after importing said functions from Manopt:","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"import Manopt: initialize_solver!, step_solver!, get_iterate, get_solver_result","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"The state we defined before has two fields where we use the common names used in Manopt.jl, that is the StoppingCriterion is usually in stop and the iterate in p. If your choice is different, you need to reimplement","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"stop_solver!(mp, rws, i) to determine whether or not to stop after the ith iteration.\nget_iterate(rws) to access the current iterate","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"We recommend following the general scheme with the stop field. 
If you have specific criteria when to stop, consider implementing your own stopping criterion instead.","category":"page"},{"location":"tutorials/ImplementASolver/#Initialization-and-iterate-access","page":"Implement a solver","title":"Initialization and iterate access","text":"","category":"section"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"For our solver, there is not much to initialize; just to be safe we should copy the initial value in p we start with over to q. We do not have to care about remembering the iterate, that is done by Manopt.jl. For the iterate access we just have to pass p.","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"function initialize_solver!(mp::AbstractManoptProblem, rws::RandomWalkState)\n    M = get_manifold(mp)\n    copyto!(M, rws.q, rws.p) # Set q = p^{(0)}\n    return rws\nend\nget_iterate(rws::RandomWalkState) = rws.p\nget_solver_result(rws::RandomWalkState) = rws.q","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"and similarly we implement the step. Here we make use of the fact that the problem (and in fact also the objective) has access functions for its elements; the one we need is get_cost.","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"function step_solver!(mp::AbstractManoptProblem, rws::RandomWalkState, i)\n    M = get_manifold(mp) # for ease of use get the manifold from the problem\n    X = rand(M; vector_at=rws.p) # generate a direction\n    X .*= rws.σ/norm(M, rws.p, X)\n    # Walk\n    retract!(M, rws.p, rws.p, X, rws.retraction_method)\n    # is the new point better? Then store it\n    if get_cost(mp, rws.p) < get_cost(mp, rws.q)\n        copyto!(M, rws.q, rws.p)\n    end\n    return rws\nend","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"Performance-wise, we could reduce the number of allocations by making X a field of our rws as well, but let’s keep it simple here. We could also store the cost of q in the state; we shall see later how to easily enable caching for this solver instead. In practice, however, it is often preferable to store intermediate values like the cost of q directly in the state when this is easily achieved, since it avoids the overhead of an external cache.","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"Now we can already run the solver. We take the same example as in the other tutorials","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"We first define our task, the Riemannian Center of Mass from the Get started: optimize! 
tutorial.","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"Random.seed!(23)\nn = 100\nσ = π / 8\nM = Sphere(2)\np = 1 / sqrt(2) * [1.0, 0.0, 1.0]\ndata = [exp(M, p, σ * rand(M; vector_at=p)) for i in 1:n];\nf(M, p) = sum(1 / (2 * n) * distance.(Ref(M), Ref(p), data) .^ 2)","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"We can now generate the problem with its objective and the state","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"mp = DefaultManoptProblem(M, ManifoldCostObjective(f))\ns = RandomWalkState(M; σ = 0.2)\n\nsolve!(mp, s)\nget_solver_result(s)","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"3-element Vector{Float64}:\n -0.2412674850987521\n 0.8608618657176527\n -0.44800317943876844","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"The function solve! 
works in place of s as well, but the last line illustrates how to access the result in general; we could also just look at the fields of s directly, but functions like get_iterate are also used in several other places.","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"We could for example easily set up a second solver to work from a specified starting point with a different σ like","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"s2 = RandomWalkState(M, [1.0, 0.0, 0.0]; σ = 0.1)\nsolve!(mp, s2)\nget_solver_result(s2)","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"3-element Vector{Float64}:\n 1.0\n 0.0\n 0.0","category":"page"},{"location":"tutorials/ImplementASolver/#Ease-of-use-I:-a-high-level-interface","page":"Implement a solver","title":"Ease of use I: a high level interface","text":"","category":"section"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"Manopt.jl offers a few additional features for solvers in their high level interfaces, for example the debug= and record= keywords for debugging and recording within solver states, or the count= and cache= keywords for the objective.","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"We can introduce these here as well with just a few lines of code. This usually happens in two steps. 
We further need three internal functions from Manopt.jl","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"using Manopt: get_solver_return, indicates_convergence, status_summary","category":"page"},{"location":"tutorials/ImplementASolver/#A-high-level-interface-using-the-objective","page":"Implement a solver","title":"A high level interface using the objective","text":"","category":"section"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"This can be considered an interim step towards the high-level interface: if the objective, a ManifoldCostObjective, is already initialized, the high-level interface consists of the steps","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"possibly decorate the objective\ngenerate the problem\ngenerate and possibly decorate the state\ncall the solver\ndetermine the return value","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"We illustrate these steps with an in-place variant here. A variant that keeps the given start point unchanged would just add a copy(M, p) upfront. 
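The in-place versus copying variant mentioned here follows a common Julia idiom: the allocating version simply copies its input and forwards to the mutating one. A minimal standalone sketch with hypothetical names (shift!/shift stand in for the solver pair):

```julia
# The mutating variant does the actual work on its argument ...
function shift!(p::Vector{Float64}; offset=1.0)
    p .+= offset      # modifies p, like the in-place solver modifies its start point
    return p
end

# ... and the allocating variant adds a copy upfront, keeping the input unchanged.
shift(p::Vector{Float64}; kwargs...) = shift!(copy(p); kwargs...)

p = [1.0, 2.0]
q = shift(p)          # p stays [1.0, 2.0], q is [2.0, 3.0]
shift!(p)             # now p itself is [2.0, 3.0]
```

For manifold points, copy(M, p) plays the role of copy(p), so the two solver variants differ in exactly one line.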
Manopt.jl provides both variants.","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"function random_walk_algorithm!(\n    M::AbstractManifold,\n    mgo::ManifoldCostObjective,\n    p;\n    σ = 0.1,\n    retraction_method::AbstractRetractionMethod=default_retraction_method(M, typeof(p)),\n    stopping_criterion::StoppingCriterion=StopAfterIteration(200),\n    kwargs...,\n)\n    dmgo = decorate_objective!(M, mgo; kwargs...)\n    dmp = DefaultManoptProblem(M, dmgo)\n    s = RandomWalkState(M, p;\n        σ=σ,\n        retraction_method=retraction_method, stopping_criterion=stopping_criterion,\n    )\n    ds = decorate_state!(s; kwargs...)\n    solve!(dmp, ds)\n    return get_solver_return(get_objective(dmp), ds)\nend","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"random_walk_algorithm! (generic function with 1 method)","category":"page"},{"location":"tutorials/ImplementASolver/#The-high-level-interface","page":"Implement a solver","title":"The high level interface","text":"","category":"section"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"Starting from the last section, the usual call a user would prefer is just passing a manifold M, the cost f, and maybe a start point p.","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"function random_walk_algorithm!(M::AbstractManifold, f, p=rand(M); kwargs...)\n    mgo = ManifoldCostObjective(f)\n    return random_walk_algorithm!(M, mgo, p; kwargs...)\nend","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"random_walk_algorithm! 
(generic function with 3 methods)","category":"page"},{"location":"tutorials/ImplementASolver/#Ease-of-Use-II:-the-state-summary","page":"Implement a solver","title":"Ease of Use II: the state summary","text":"","category":"section"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"In case you set return_state=true, the solver should return a summary of the run. When a show method is provided, users can easily read such a summary in a terminal. It should reflect the main parameters (if they are not too verbose) and provide information about the reason it stopped and whether this indicates convergence.","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"Here it would for example look like","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"import Base: show\nfunction show(io::IO, rws::RandomWalkState)\n    i = get_count(rws, :Iterations)\n    Iter = (i > 0) ? \"After $i iterations\\n\" : \"\"\n    Conv = indicates_convergence(rws.stop) ? \"Yes\" : \"No\"\n    s = \"\"\"\n    # Solver state for `Manopt.jl`s Tutorial Random Walk\n    $Iter\n    ## Parameters\n    * retraction method: $(rws.retraction_method)\n    * σ                : $(rws.σ)\n\n    ## Stopping criterion\n\n    $(status_summary(rws.stop))\n    This indicates convergence: $Conv\"\"\"\n    return print(io, s)\nend","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"Now the algorithm can be easily called and provides all features of a Manopt.jl algorithm. 
For example, to see the summary, we could now just call","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"q = random_walk_algorithm!(M, f; return_state=true)","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"# Solver state for `Manopt.jl`s Tutorial Random Walk\nAfter 200 iterations\n\n## Parameters\n* retraction method: ExponentialRetraction()\n* σ                : 0.1\n\n## Stopping criterion\n\nMax Iteration 200: reached\nThis indicates convergence: No","category":"page"},{"location":"tutorials/ImplementASolver/#Conclusion-and-beyond","page":"Implement a solver","title":"Conclusion & beyond","text":"","category":"section"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"We saw in this tutorial how to implement a simple cost-based algorithm, to illustrate how optimization algorithms are covered in Manopt.jl.","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"One feature we did not cover is that most algorithms allow for both in-place and allocating functions as soon as they work with more than just the cost, for example gradients, proximal maps or Hessians. This is usually a keyword argument of the objective and hence also part of the high-level interfaces.","category":"page"},{"location":"tutorials/ImplementASolver/#Technical-details","page":"Implement a solver","title":"Technical details","text":"","category":"section"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"This tutorial is cached. 
It was last run on the following package versions.","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"using Pkg\nPkg.status()","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"Status `~/work/Manopt.jl/Manopt.jl/tutorials/Project.toml`\n [6e4b80f9] BenchmarkTools v1.5.0\n⌅ [5ae59095] Colors v0.12.11\n [31c24e10] Distributions v0.25.113\n [26cc04aa] FiniteDifferences v0.12.32\n [7073ff75] IJulia v1.26.0\n [8ac3fa9e] LRUCache v1.6.1\n [af67fdf4] ManifoldDiff v0.3.13\n [1cead3c2] Manifolds v0.10.7\n [3362f125] ManifoldsBase v0.15.22\n [0fc0a36d] Manopt v0.5.3 `~/work/Manopt.jl/Manopt.jl`\n [91a5bcdd] Plots v1.40.9\n [731186ca] RecursiveArrayTools v3.27.4\nInfo Packages marked with ⌅ have new versions available but compatibility constraints restrict them from upgrading. To see why use `status --outdated`","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"using Dates\nnow()","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"2024-11-21T20:38:16.611","category":"page"},{"location":"tutorials/HowToDebug/#How-to-print-debug-output","page":"Print debug output","title":"How to print debug output","text":"","category":"section"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"Ronny Bergmann","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"This tutorial aims to illustrate how to perform debug output. 
For that we consider an example that includes a subsolver, to also look at its debug capabilities.","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"The problem itself is hence not the main focus. We consider a nonnegative PCA which we can write as a constrained problem on the Sphere","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"Let’s first load the necessary packages.","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"using Manopt, Manifolds, Random, LinearAlgebra\nRandom.seed!(42);","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"d = 4\nM = Sphere(d - 1)\nv0 = project(M, [ones(2)..., zeros(d - 2)...])\nZ = v0 * v0'\n# Cost and gradient\nf(M, p) = -tr(transpose(p) * Z * p) / 2\ngrad_f(M, p) = project(M, p, -transpose(Z) * p / 2 - Z * p / 2)\n# Constraints\ng(M, p) = -p # now p ≥ 0\nmI = -Matrix{Float64}(I, d, d)\n# Vector of gradients of the constraint components\ngrad_g(M, p) = [project(M, p, mI[:, i]) for i in 1:d]","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"Then we can take a starting point","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"p0 = project(M, [ones(2)..., zeros(d - 3)..., 0.1])","category":"page"},{"location":"tutorials/HowToDebug/#Simple-debug-output","page":"Print debug output","title":"Simple debug output","text":"","category":"section"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"Any solver accepts the keyword debug=, which in the simplest case can be set to an array of strings, symbols and a 
number.","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"Strings are printed in every iteration as is (cf. DebugDivider) and should be used to finish the array with a line break.\nthe last number in the array is used with DebugEvery to print the debug only every ith iteration.\nAny Symbol is converted into certain debug prints","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"Certain symbols starting with a capital letter are mapped to certain prints, for example :Cost is mapped to DebugCost() to print the current cost function value. A full list is provided in the DebugActionFactory. A special keyword is :Stop, which is only added to the final debug hook to print the stopping criterion.","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"Any symbol starting with a small letter is mapped to a field of the AbstractManoptSolverState in use. This way you can easily print internal data if you know its name.","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"Let’s look at an example first: we want to print the current iteration number and the current cost function value, as well as the value ϵ from the ExactPenaltyMethodState. 
To keep the amount of output at a reasonable level, we want to only print the debug every twenty-fifth iteration.","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"Then we can write","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"p1 = exact_penalty_method(\n    M, f, grad_f, p0; g=g, grad_g=grad_g,\n    debug = [:Iteration, :Cost, \" | \", (:ϵ,\"ϵ: %.8f\"), 25, \"\\n\", :Stop]\n);","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"Initial f(x): -0.497512 | ϵ: 0.00100000\n# 25 f(x): -0.499449 | ϵ: 0.00017783\n# 50 f(x): -0.499996 | ϵ: 0.00003162\n# 75 f(x): -0.500000 | ϵ: 0.00000562\n# 100 f(x): -0.500000 | ϵ: 0.00000100\nThe value of the variable (ϵ) is smaller than or equal to its threshold (1.0e-6).\nAt iteration 102 the algorithm performed a step with a change (4.2533629774851707e-7) less than 1.0e-6.","category":"page"},{"location":"tutorials/HowToDebug/#Specifying-when-to-print-something","page":"Print debug output","title":"Specifying when to print something","text":"","category":"section"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"While in the last step we specified what to print, this can be extended to also specify when to print it. Currently the following four “places” are available, ordered by when they appear in an algorithm run.","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":":Start to print something at the start of the algorithm. At this place all other (the following) places are “reset” by triggering each of them with an iteration number 0\n:BeforeIteration to print something before an iteration starts\n:Iteration to print something after an iteration. 
For example the group of prints from the last code block [:Iteration, :Cost, \" | \", :ϵ, 25,] is added to this entry.\n:Stop to print something when the algorithm stops. In the example, :Stop adds the DebugStoppingCriterion to this place.","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"Specifying something specifically for one of these places is done by specifying a Pair, so for example :BeforeIteration => :Iteration would add the display of the iteration number to be printed before the iteration is performed.","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"This change does not alter the output of the run. Being more precise for the other entries, we could also write","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"p1 = exact_penalty_method(\n    M, f, grad_f, p0; g=g, grad_g=grad_g,\n    debug = [\n        :BeforeIteration => [:Iteration],\n        :Iteration => [:Cost, \" | \", :ϵ, \"\\n\"],\n        :Stop => DebugStoppingCriterion(),\n        25,\n    ],\n);","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"Initial f(x): -0.497512 | ϵ: 0.001\n# 25 f(x): -0.499449 | ϵ: 0.0001778279410038921\n# 50 f(x): -0.499996 | ϵ: 3.1622776601683734e-5\n# 75 f(x): -0.500000 | ϵ: 5.623413251903474e-6\n# 100 f(x): -0.500000 | ϵ: 1.0e-6\nThe value of the variable (ϵ) is smaller than or equal to its threshold (1.0e-6).\nAt iteration 102 the algorithm performed a step with a change (4.2533629774851707e-7) less than 1.0e-6.","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"This also illustrates that, instead of symbols, we can always pass down a DebugAction directly, for example when there is a reason to create or configure the action more 
individually than the default from the symbol. Note that the number (25) means that all entries but :Start and :Stop are only displayed every twenty-fifth iteration.","category":"page"},{"location":"tutorials/HowToDebug/#Subsolver-debug","page":"Print debug output","title":"Subsolver debug","text":"","category":"section"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"Subsolvers have a sub_kwargs keyword, such that you can pass keywords to the sub solver as well. This works well if you do not plan to change the subsolver. If you do, you can wrap your own solver_state= argument in a decorate_state! and pass a debug= keyword to this function call. Keywords in such a keyword argument have to be passed as pairs (:debug => [...]).","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"For most debugs, there further exists a longer form to specify the format to print. We want to use this to specify the format to print ϵ. This is done by putting the corresponding symbol together with the string to use in formatting into a tuple like (:ϵ,\" | ϵ: %.8f\"), where we can already include the divider as well.","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"A main problem now is that this debug is issued on every sub solver call or initialisation, as the following print of just a . 
per sub solver test/call illustrates","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"p3 = exact_penalty_method(\n M, f, grad_f, p0; g=g, grad_g=grad_g,\n debug = [\"\\n\",:Iteration, DebugCost(), (:ϵ,\" | ϵ: %.8f\"), 25, \"\\n\", :Stop],\n sub_kwargs = [:debug => [\".\"]]\n);","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"Initial f(x): -0.497512 | ϵ: 0.00100000\n....................................................................................\n# 25 f(x): -0.499449 | ϵ: 0.00017783\n.......................................................................\n# 50 f(x): -0.499996 | ϵ: 0.00003162\n..................................................\n# 75 f(x): -0.500000 | ϵ: 0.00000562\n..................................................\n# 100 f(x): -0.500000 | ϵ: 0.00000100\n....The value of the variable (ϵ) is smaller than or equal to its threshold (1.0e-6).\nAt iteration 102 the algorithm performed a step with a change (4.2533629774851707e-7) less than 1.0e-6.","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"The different lengths of the dotted lines come from the fact that —at least in the beginning— the subsolver performs a few steps and each subsolver step prints a dot.","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"For this issue, there is a further symbol (similar to :Stop) to indicate that a debug set belongs to a subsolver: :WhenActive. It introduces a DebugWhenActive that is only activated when the outer debug is actually active, or in other words, when DebugEvery itself is active. Furthermore, we want to print the iteration number before printing the subsolver's steps, so we put this into a Pair, but we can leave the remaining ones as single entries. 
Finally we also prefix :Stop with \" | \" and print the iteration number at the time we stop. We get","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"p4 = exact_penalty_method(\n M,\n f,\n grad_f,\n p0;\n g=g,\n grad_g=grad_g,\n debug=[\n :BeforeIteration => [:Iteration, \"\\n\"],\n :Iteration => [DebugCost(), (:ϵ, \" | ϵ: %.8f\"), \"\\n\"],\n :Stop,\n 25,\n ],\n sub_kwargs=[\n :debug => [\n \" | \",\n :Iteration,\n :Cost,\n \"\\n\",\n :WhenActive,\n :Stop => [(:Stop, \" | \"), \" | stopped after iteration \", :Iteration, \"\\n\"],\n ],\n ],\n);","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"Initial \nf(x): -0.497512 | ϵ: 0.00100000\n | Initial f(x): -0.498944\n | # 1 f(x): -0.498969\n | The algorithm reached approximately critical point after 1 iterations; the gradient norm (3.4995246389869776e-5) is less than 0.001.\n | stopped after iteration # 1 \n# 25 \nf(x): -0.499449 | ϵ: 0.00017783\n | Initial f(x): -0.499992\n | # 1 f(x): -0.499992\n | # 2 f(x): -0.499992\n | The algorithm reached approximately critical point after 2 iterations; the gradient norm (0.00027436723916614346) is less than 0.001.\n | stopped after iteration # 2 \n# 50 \nf(x): -0.499996 | ϵ: 0.00003162\n | Initial f(x): -0.500000\n | # 1 f(x): -0.500000\n | The algorithm reached approximately critical point after 1 iterations; the gradient norm (5.000404403277298e-6) is less than 0.001.\n | stopped after iteration # 1 \n# 75 \nf(x): -0.500000 | ϵ: 0.00000562\n | Initial f(x): -0.500000\n | # 1 f(x): -0.500000\n | The algorithm reached approximately critical point after 1 iterations; the gradient norm (4.202215558182483e-6) is less than 0.001.\n | stopped after iteration # 1 \n# 100 \nf(x): -0.500000 | ϵ: 0.00000100\nThe value of the variable (ϵ) is smaller than or equal to its threshold (1.0e-6).\nAt iteration 102 the algorithm performed a step with a 
change (4.2533629774851707e-7) less than 1.0e-6.","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"where we now see that the subsolver only requires one or two steps. Note that since the debug of an iteration happens after a step, we see the sub solver run before the debug for an iteration number.","category":"page"},{"location":"tutorials/HowToDebug/#Advanced-debug-output","page":"Print debug output","title":"Advanced debug output","text":"","category":"section"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"There are two more advanced variants that can be used. The first is a tuple of a symbol and a string, where the string is used as the format print that most DebugActions have. The second is to directly provide a DebugAction.","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"We can for example change the way the :ϵ is printed by adding a format string and use DebugCost(), which is equivalent to using :Cost. 
Especially with the format change, the lines are more consistent in length.","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"p2 = exact_penalty_method(\n M, f, grad_f, p0; g=g, grad_g=grad_g,\n debug = [:Iteration, DebugCost(), (:ϵ,\" | ϵ: %.8f\"), 25, \"\\n\", :Stop]\n);","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"Initial f(x): -0.497512 | ϵ: 0.00100000\n# 25 f(x): -0.499449 | ϵ: 0.00017783\n# 50 f(x): -0.499996 | ϵ: 0.00003162\n# 75 f(x): -0.500000 | ϵ: 0.00000562\n# 100 f(x): -0.500000 | ϵ: 0.00000100\nThe value of the variable (ϵ) is smaller than or equal to its threshold (1.0e-6).\nAt iteration 102 the algorithm performed a step with a change (4.2533629774851707e-7) less than 1.0e-6.","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"You can also write your own DebugAction functor, where the function to implement has the same signature as the step function, that is, an AbstractManoptProblem, an AbstractManoptSolverState, as well as the current iteration number. 
For example the already mentioned DebugDivider(s) is given as","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"mutable struct DebugDivider{TIO<:IO} <: DebugAction\n io::TIO\n divider::String\n DebugDivider(divider=\" | \"; io::IO=stdout) = new{typeof(io)}(io, divider)\nend\nfunction (d::DebugDivider)(::AbstractManoptProblem, ::AbstractManoptSolverState, k::Int)\n (k >= 0) && (!isempty(d.divider)) && (print(d.io, d.divider))\n return nothing\nend","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"or you could of course implement it just for your specific problem or state.","category":"page"},{"location":"tutorials/HowToDebug/#Technical-details","page":"Print debug output","title":"Technical details","text":"","category":"section"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"This tutorial is cached. It was last run on the following package versions.","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"using Pkg\nPkg.status()","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"Status `~/work/Manopt.jl/Manopt.jl/tutorials/Project.toml`\n [6e4b80f9] BenchmarkTools v1.5.0\n⌅ [5ae59095] Colors v0.12.11\n [31c24e10] Distributions v0.25.113\n [26cc04aa] FiniteDifferences v0.12.32\n [7073ff75] IJulia v1.26.0\n [8ac3fa9e] LRUCache v1.6.1\n [af67fdf4] ManifoldDiff v0.3.13\n [1cead3c2] Manifolds v0.10.7\n [3362f125] ManifoldsBase v0.15.22\n [0fc0a36d] Manopt v0.5.3 `~/work/Manopt.jl/Manopt.jl`\n [91a5bcdd] Plots v1.40.9\n [731186ca] RecursiveArrayTools v3.27.4\nInfo Packages marked with ⌅ have new versions available but compatibility constraints restrict them from upgrading. 
To see why use `status --outdated`","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"using Dates\nnow()","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"2024-11-21T20:37:25.498","category":"page"},{"location":"solvers/particle_swarm/#Particle-swarm-optimization","page":"Particle Swarm Optimization","title":"Particle swarm optimization","text":"","category":"section"},{"location":"solvers/particle_swarm/","page":"Particle Swarm Optimization","title":"Particle Swarm Optimization","text":"CurrentModule = Manopt","category":"page"},{"location":"solvers/particle_swarm/","page":"Particle Swarm Optimization","title":"Particle Swarm Optimization","text":" particle_swarm\n particle_swarm!","category":"page"},{"location":"solvers/particle_swarm/#Manopt.particle_swarm","page":"Particle Swarm Optimization","title":"Manopt.particle_swarm","text":"particle_swarm(M, f; kwargs...)\nparticle_swarm(M, f, swarm; kwargs...)\nparticle_swarm(M, mco::AbstractManifoldCostObjective; kwargs...)\nparticle_swarm(M, mco::AbstractManifoldCostObjective, swarm; kwargs...)\nparticle_swarm!(M, f, swarm; kwargs...)\nparticle_swarm!(M, mco::AbstractManifoldCostObjective, swarm; kwargs...)\n\nperform the particle swarm optimization algorithm (PSO) to solve\n\noperatornameargmin_p mathcal M f(p)\n\nPSO starts with an initial swarm [BIA10] of points on the manifold. If no swarm is provided, the swarm_size keyword is used to generate random points. The computation can be performed in-place of swarm.\n\nTo this end, a swarm S = s_1 ldots s_n of particles is moved around the manifold M in the following manner. 
For every particle s_k^(i) the new particle velocities X_k^(i) are computed in every step i of the algorithm by\n\nX_k^(i) = ω mathcal T_s_k^(i)s_k^(i-1) X_k^(i-1) + c r_1 operatornameretr^-1_s_k^(i)(p_k^(i)) + s r_2 operatornameretr^-1_s_k^(i)(p)\n\nwhere\n\ns_k^(i) is the current particle position,\nω denotes the inertia,\nc and s are a cognitive and a social weight, respectively,\nr_j, j=12 are random factors which are computed new for each particle and step\n\\mathcal T_{⋅←⋅} is a vector transport, and\n\\operatorname{retr}^{-1} is an inverse retraction\n\nThen the position of the particle is updated as\n\ns_k^(i+1) = operatornameretr_s_k^(i)(X_k^(i))\n\nThen the single particles best entries p_k^(i) are updated as\n\np_k^(i+1) = begincases\ns_k^(i+1) textif F(s_k^(i+1))F(p_k^(i))\np_k^(i) textelse\nendcases\n\nand the global best position\n\ng^(i+1) = begincases\np_k^(i+1) textif F(p_k^(i+1))F(g_k^(i))\ng_k^(i) textelse\nendcases\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\nswarm = [rand(M) for _ in 1:swarm_size]: an initial swarm of points.\n\nInstead of a cost function f you can also provide an AbstractManifoldCostObjective mco.\n\nKeyword Arguments\n\ncognitive_weight=1.4: a cognitive weight factor\ninertia=0.65: the inertia of the particles\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nsocial_weight=1.4: a social weight factor\nswarm_size=100: swarm size, if it should be generated randomly\nstopping_criterion=StopAfterIteration(500)|StopWhenChangeLess(1e-4): a functor indicating that the stopping criterion is fulfilled\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ 
to use, see the section on vector transports\nvelocity: a set of tangent vectors (of type AbstractVector{T}) representing the velocities of the particles, per default a random tangent vector per initial position\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively. If you provide the objective directly, these decorations can still be specified.\n\nOutput\n\nThe obtained approximate minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/particle_swarm/#Manopt.particle_swarm!","page":"Particle Swarm Optimization","title":"Manopt.particle_swarm!","text":"particle_swarm(M, f; kwargs...)\nparticle_swarm(M, f, swarm; kwargs...)\nparticle_swarm(M, mco::AbstractManifoldCostObjective; kwargs...)\nparticle_swarm(M, mco::AbstractManifoldCostObjective, swarm; kwargs...)\nparticle_swarm!(M, f, swarm; kwargs...)\nparticle_swarm!(M, mco::AbstractManifoldCostObjective, swarm; kwargs...)\n\nperform the particle swarm optimization algorithm (PSO) to solve\n\noperatornameargmin_p mathcal M f(p)\n\nPSO starts with an initial swarm [BIA10] of points on the manifold. If no swarm is provided, the swarm_size keyword is used to generate random points. The computation can be performed in-place of swarm.\n\nTo this end, a swarm S = s_1 ldots s_n of particles is moved around the manifold M in the following manner. 
For every particle s_k^(i) the new particle velocities X_k^(i) are computed in every step i of the algorithm by\n\nX_k^(i) = ω mathcal T_s_k^(i)s_k^(i-1) X_k^(i-1) + c r_1 operatornameretr^-1_s_k^(i)(p_k^(i)) + s r_2 operatornameretr^-1_s_k^(i)(p)\n\nwhere\n\ns_k^(i) is the current particle position,\nω denotes the inertia,\nc and s are a cognitive and a social weight, respectively,\nr_j, j=12 are random factors which are computed new for each particle and step\n\\mathcal T_{⋅←⋅} is a vector transport, and\n\\operatorname{retr}^{-1} is an inverse retraction\n\nThen the position of the particle is updated as\n\ns_k^(i+1) = operatornameretr_s_k^(i)(X_k^(i))\n\nThen the single particles best entries p_k^(i) are updated as\n\np_k^(i+1) = begincases\ns_k^(i+1) textif F(s_k^(i+1))F(p_k^(i))\np_k^(i) textelse\nendcases\n\nand the global best position\n\ng^(i+1) = begincases\np_k^(i+1) textif F(p_k^(i+1))F(g_k^(i))\ng_k^(i) textelse\nendcases\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\nswarm = [rand(M) for _ in 1:swarm_size]: an initial swarm of points.\n\nInstead of a cost function f you can also provide an AbstractManifoldCostObjective mco.\n\nKeyword Arguments\n\ncognitive_weight=1.4: a cognitive weight factor\ninertia=0.65: the inertia of the particles\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nsocial_weight=1.4: a social weight factor\nswarm_size=100: swarm size, if it should be generated randomly\nstopping_criterion=StopAfterIteration(500)|StopWhenChangeLess(1e-4): a functor indicating that the stopping criterion is fulfilled\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ 
to use, see the section on vector transports\nvelocity: a set of tangent vectors (of type AbstractVector{T}) representing the velocities of the particles, per default a random tangent vector per initial position\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively. If you provide the objective directly, these decorations can still be specified.\n\nOutput\n\nThe obtained approximate minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/particle_swarm/#State","page":"Particle Swarm Optimization","title":"State","text":"","category":"section"},{"location":"solvers/particle_swarm/","page":"Particle Swarm Optimization","title":"Particle Swarm Optimization","text":"ParticleSwarmState","category":"page"},{"location":"solvers/particle_swarm/#Manopt.ParticleSwarmState","page":"Particle Swarm Optimization","title":"Manopt.ParticleSwarmState","text":"ParticleSwarmState{P,T} <: AbstractManoptSolverState\n\nDescribes a particle swarm optimization algorithm, with\n\nFields\n\ncognitive_weight: a cognitive weight factor\ninertia: the inertia of the particles\ninverse_retraction_method::AbstractInverseRetractionMethod: an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nretraction_method::AbstractRetractionMethod: a retraction operatornameretr to use, see the section on retractions\nsocial_weight: a social weight factor\nstop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled\nvector_transport_method::AbstractVectorTransportMethod: a vector transport mathcal T_ to use, see the section on vector transports\nvelocity: a set of tangent vectors (of type AbstractVector{T}) representing the velocities of the particles\n\nInternal and temporary fields\n\ncognitive_vector: temporary 
storage for a tangent vector related to cognitive_weight\np::P: a point on the manifold mathcal M storing the best point visited by all particles\npositional_best: storing the best position p_i every single swarm participant visited\nq::P: a point on the manifold mathcal M serving as temporary storage for interim results; avoids allocations\nsocial_vec: temporary storage for a tangent vector related to social_weight\nswarm: a set of points (of type AbstractVector{P}) on a manifold a_i_i=1^N\n\nConstructor\n\nParticleSwarmState(M, initial_swarm, velocity; kwargs...)\n\nconstruct a particle swarm solver state for the manifold M starting with the initial population initial_swarm with velocities. The p used in the following defaults is the type of one point from the swarm.\n\nKeyword arguments\n\ncognitive_weight=1.4\ninertia=0.65\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nsocial_weight=1.4\nstopping_criterion=StopAfterIteration(500)|StopWhenChangeLess(1e-4): a functor indicating that the stopping criterion is fulfilled\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\nSee also\n\nparticle_swarm\n\n\n\n\n\n","category":"type"},{"location":"solvers/particle_swarm/#Stopping-criteria","page":"Particle Swarm Optimization","title":"Stopping criteria","text":"","category":"section"},{"location":"solvers/particle_swarm/","page":"Particle Swarm Optimization","title":"Particle Swarm Optimization","text":"StopWhenSwarmVelocityLess","category":"page"},{"location":"solvers/particle_swarm/#Manopt.StopWhenSwarmVelocityLess","page":"Particle Swarm 
Optimization","title":"Manopt.StopWhenSwarmVelocityLess","text":"StopWhenSwarmVelocityLess <: StoppingCriterion\n\nStopping criterion for particle_swarm, when the velocity of the swarm is less than a threshold.\n\nFields\n\nthreshold: the threshold\nat_iteration: store the iteration the stopping criterion was (last) fulfilled\nreason: store the reason why the stopping criterion was fulfilled, see get_reason\nvelocity_norms: interim vector to store the norms of the velocities before computing its norm\n\nConstructor\n\nStopWhenSwarmVelocityLess(tolerance::Float64)\n\ninitialize the stopping criterion to a certain tolerance.\n\n\n\n\n\n","category":"type"},{"location":"solvers/particle_swarm/#sec-arc-technical-details","page":"Particle Swarm Optimization","title":"Technical details","text":"","category":"section"},{"location":"solvers/particle_swarm/","page":"Particle Swarm Optimization","title":"Particle Swarm Optimization","text":"The particle_swarm solver requires the following functions of a manifold to be available","category":"page"},{"location":"solvers/particle_swarm/","page":"Particle Swarm Optimization","title":"Particle Swarm Optimization","text":"A retract!(M, q, p, X); it is recommended to set the default_retraction_method to a favourite retraction. If this default is set, a retraction_method= does not have to be specified.\nAn inverse_retract!(M, X, p, q); it is recommended to set the default_inverse_retraction_method to a favourite inverse retraction. If this default is set, an inverse_retraction_method= does not have to be specified.\nA vector_transport_to!(M, Y, p, X, q); it is recommended to set the default_vector_transport_method to a favourite vector transport. 
If this default is set, a vector_transport_method= does not have to be specified.\nBy default the stopping criterion uses the norm as well, to stop when the norm of the velocities is small, but if you implemented inner, the norm is provided already.\nTangent vectors storing the social and cognitive vectors are initialized calling zero_vector(M,p).\nA copyto!(M, q, p) and copy(M, p) for points.\nThe distance(M, p, q) when using the default stopping criterion, which uses StopWhenChangeLess.","category":"page"},{"location":"solvers/particle_swarm/#Literature","page":"Particle Swarm Optimization","title":"Literature","text":"","category":"section"},{"location":"solvers/particle_swarm/","page":"Particle Swarm Optimization","title":"Particle Swarm Optimization","text":"P. B. Borckmans, M. Ishteva and P.-A. Absil. A Modified Particle Swarm Optimization Algorithm for the Best Low Multilinear Rank Approximation of Higher-Order Tensors. In: 7th International Conference on Swarm Intelligence (Springer Berlin Heidelberg, 2010); pp. 
13–23.\n\n\n\n","category":"page"},{"location":"solvers/stochastic_gradient_descent/#Stochastic-gradient-descent","page":"Stochastic Gradient Descent","title":"Stochastic gradient descent","text":"","category":"section"},{"location":"solvers/stochastic_gradient_descent/","page":"Stochastic Gradient Descent","title":"Stochastic Gradient Descent","text":"CurrentModule = Manopt","category":"page"},{"location":"solvers/stochastic_gradient_descent/","page":"Stochastic Gradient Descent","title":"Stochastic Gradient Descent","text":"stochastic_gradient_descent\nstochastic_gradient_descent!","category":"page"},{"location":"solvers/stochastic_gradient_descent/#Manopt.stochastic_gradient_descent","page":"Stochastic Gradient Descent","title":"Manopt.stochastic_gradient_descent","text":"stochastic_gradient_descent(M, grad_f, p=rand(M); kwargs...)\nstochastic_gradient_descent(M, msgo; kwargs...)\nstochastic_gradient_descent!(M, grad_f, p; kwargs...)\nstochastic_gradient_descent!(M, msgo, p; kwargs...)\n\nperform a stochastic gradient descent. This can be performed in-place of p.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\ngrad_f: a gradient function, that either returns a vector of the gradients or is a vector of gradient functions\np: a point on the manifold mathcal M\n\nAlternatively to the gradient you can provide a ManifoldStochasticGradientObjective msgo; then the cost= keyword has no effect, since the cost is already part of the objective.\n\nKeyword arguments\n\ncost=missing: you can provide a cost function for example to track the function value\ndirection=StochasticGradient(zero_vector(M, p))\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). 
Since usually the first argument is the manifold, the modified argument is the second.\nevaluation_order=:Random: specify whether to use a randomly permuted sequence (:FixedRandom), a per cycle permuted sequence (:Linear) or the default :Random one.\norder_type=:RandomOrder: a type of ordering of gradient evaluations. Possible values are :RandomOrder, :FixedPermutation, :LinearOrder\nstopping_criterion=StopAfterIteration(1000): a functor indicating that the stopping criterion is fulfilled\nstepsize=default_stepsize(M, StochasticGradientDescentState): a functor inheriting from Stepsize to determine a step size\norder=[1:n]: the initial permutation, where n is the number of gradients in grad_f.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/stochastic_gradient_descent/#Manopt.stochastic_gradient_descent!","page":"Stochastic Gradient Descent","title":"Manopt.stochastic_gradient_descent!","text":"stochastic_gradient_descent(M, grad_f, p=rand(M); kwargs...)\nstochastic_gradient_descent(M, msgo; kwargs...)\nstochastic_gradient_descent!(M, grad_f, p; kwargs...)\nstochastic_gradient_descent!(M, msgo, p; kwargs...)\n\nperform a stochastic gradient descent. 
This can be performed in-place of p.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\ngrad_f: a gradient function, that either returns a vector of the gradients or is a vector of gradient functions\np: a point on the manifold mathcal M\n\nAlternatively to the gradient you can provide a ManifoldStochasticGradientObjective msgo; then the cost= keyword has no effect, since the cost is already part of the objective.\n\nKeyword arguments\n\ncost=missing: you can provide a cost function for example to track the function value\ndirection=StochasticGradient(zero_vector(M, p))\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\nevaluation_order=:Random: specify whether to use a randomly permuted sequence (:FixedRandom), a per cycle permuted sequence (:Linear) or the default :Random one.\norder_type=:RandomOrder: a type of ordering of gradient evaluations. Possible values are :RandomOrder, :FixedPermutation, :LinearOrder\nstopping_criterion=StopAfterIteration(1000): a functor indicating that the stopping criterion is fulfilled\nstepsize=default_stepsize(M, StochasticGradientDescentState): a functor inheriting from Stepsize to determine a step size\norder=[1:n]: the initial permutation, where n is the number of gradients in grad_f.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. 
To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/stochastic_gradient_descent/#State","page":"Stochastic Gradient Descent","title":"State","text":"","category":"section"},{"location":"solvers/stochastic_gradient_descent/","page":"Stochastic Gradient Descent","title":"Stochastic Gradient Descent","text":"StochasticGradientDescentState\nManopt.default_stepsize(::AbstractManifold, ::Type{StochasticGradientDescentState})","category":"page"},{"location":"solvers/stochastic_gradient_descent/#Manopt.StochasticGradientDescentState","page":"Stochastic Gradient Descent","title":"Manopt.StochasticGradientDescentState","text":"StochasticGradientDescentState <: AbstractGradientDescentSolverState\n\nStore the following fields for a default stochastic gradient descent algorithm, see also ManifoldStochasticGradientObjective and stochastic_gradient_descent.\n\nFields\n\np::P: a point on the manifold mathcal M storing the current iterate\ndirection: a direction update to use\nstop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled\nstepsize::Stepsize: a functor inheriting from Stepsize to determine a step size\nevaluation_order: specify whether to use a randomly permuted sequence (:FixedRandom), a per cycle permuted sequence (:Linear) or the default, a :Random sequence.\norder: stores the current permutation\nretraction_method::AbstractRetractionMethod: a retraction operatornameretr to use, see the section on retractions\n\nConstructor\n\nStochasticGradientDescentState(M::AbstractManifold; kwargs...)\n\nCreate a StochasticGradientDescentState with start point p.\n\nKeyword arguments\n\ndirection=StochasticGradientRule(M, zero_vector(M, p))\norder_type=:RandomOrder\norder=Int[]: specify how to store the order of indices for the next 
epoch\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\np=rand(M): a point on the manifold mathcal M to specify the initial value\nstopping_criterion=StopAfterIteration(1000): a functor indicating that the stopping criterion is fulfilled\nstepsize=default_stepsize(M, StochasticGradientDescentState): a functor inheriting from Stepsize to determine a step size\nX=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal M to specify the representation of a tangent vector\n\n\n\n\n\n","category":"type"},{"location":"solvers/stochastic_gradient_descent/#Manopt.default_stepsize-Tuple{AbstractManifold, Type{StochasticGradientDescentState}}","page":"Stochastic Gradient Descent","title":"Manopt.default_stepsize","text":"default_stepsize(M::AbstractManifold, ::Type{StochasticGradientDescentState})\n\nDefine the default step size computed for the StochasticGradientDescentState, which is ConstantStepsize(M).\n\n\n\n\n\n","category":"method"},{"location":"solvers/stochastic_gradient_descent/","page":"Stochastic Gradient Descent","title":"Stochastic Gradient Descent","text":"Additionally, the options share a DirectionUpdateRule, so you can also apply MomentumGradient and AverageGradient here. 
The innermost one should always be the StochasticGradient.","category":"page"},{"location":"solvers/stochastic_gradient_descent/","page":"Stochastic Gradient Descent","title":"Stochastic Gradient Descent","text":"StochasticGradient","category":"page"},{"location":"solvers/stochastic_gradient_descent/#Manopt.StochasticGradient","page":"Stochastic Gradient Descent","title":"Manopt.StochasticGradient","text":"StochasticGradient(; kwargs...)\nStochasticGradient(M::AbstractManifold; kwargs...)\n\nKeyword arguments\n\ninitial_gradient=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal M\np=rand(M): a point on the manifold mathcal M to specify the initial value\n\ninfo: Info\nThis function generates a ManifoldDefaultsFactory for StochasticGradientRule. For default values that depend on the manifold, this factory postpones the construction until the manifold from, for example, a corresponding AbstractManoptSolverState is available.\n\n\n\n\n\n","category":"function"},{"location":"solvers/stochastic_gradient_descent/","page":"Stochastic Gradient Descent","title":"Stochastic Gradient Descent","text":"which internally uses","category":"page"},{"location":"solvers/stochastic_gradient_descent/","page":"Stochastic Gradient Descent","title":"Stochastic Gradient Descent","text":"AbstractGradientGroupDirectionRule\nStochasticGradientRule","category":"page"},{"location":"solvers/stochastic_gradient_descent/#Manopt.AbstractGradientGroupDirectionRule","page":"Stochastic Gradient Descent","title":"Manopt.AbstractGradientGroupDirectionRule","text":"AbstractStochasticGradientDescentSolverState <: AbstractManoptSolverState\n\nA generic type for all options related to gradient descent methods working with parts of the total gradient\n\n\n\n\n\n","category":"type"},{"location":"solvers/stochastic_gradient_descent/#Manopt.StochasticGradientRule","page":"Stochastic Gradient Descent","title":"Manopt.StochasticGradientRule","text":"StochasticGradientRule<: 
AbstractGradientGroupDirectionRule\n\nCreate a functor (problem, state, k) -> (s, X) to evaluate the stochastic gradient; that is, choose a random index from the state and use the internal field for evaluation of the gradient in-place.\n\nThe default gradient processor, which just evaluates the (stochastic) gradient or a subset thereof.\n\nFields\n\nX::T: a tangent vector at the point p on the manifold mathcal M\n\nConstructor\n\nStochasticGradientRule(M::AbstractManifold; p=rand(M), X=zero_vector(M, p))\n\nInitialize the stochastic gradient processor with tangent vector type of X, where both M and p are just help variables.\n\nSee also\n\nstochastic_gradient_descent, StochasticGradient\n\n\n\n\n\n","category":"type"},{"location":"solvers/stochastic_gradient_descent/#sec-sgd-technical-details","page":"Stochastic Gradient Descent","title":"Technical details","text":"","category":"section"},{"location":"solvers/stochastic_gradient_descent/","page":"Stochastic Gradient Descent","title":"Stochastic Gradient Descent","text":"The stochastic_gradient_descent solver requires the following functions of a manifold to be available","category":"page"},{"location":"solvers/stochastic_gradient_descent/","page":"Stochastic Gradient Descent","title":"Stochastic Gradient Descent","text":"A retract!(M, q, p, X); it is recommended to set the default_retraction_method to a favourite retraction. 
If this default is set, a retraction_method= does not have to be specified.","category":"page"},{"location":"solvers/proximal_bundle_method/#Proximal-bundle-method","page":"Proximal bundle method","title":"Proximal bundle method","text":"","category":"section"},{"location":"solvers/proximal_bundle_method/","page":"Proximal bundle method","title":"Proximal bundle method","text":"CurrentModule = Manopt","category":"page"},{"location":"solvers/proximal_bundle_method/","page":"Proximal bundle method","title":"Proximal bundle method","text":"proximal_bundle_method\nproximal_bundle_method!","category":"page"},{"location":"solvers/proximal_bundle_method/#Manopt.proximal_bundle_method","page":"Proximal bundle method","title":"Manopt.proximal_bundle_method","text":"proximal_bundle_method(M, f, ∂f, p=rand(M), kwargs...)\nproximal_bundle_method!(M, f, ∂f, p, kwargs...)\n\nperform a proximal bundle method p^(k+1) = operatornameretr_p^(k)(-d_k), where operatornameretr is a retraction and\n\nd_k = frac1mu_k sum_jin J_k λ_j^k mathrmP_p_kq_jX_q_j\n\nwith X_q_j f(q_j), p_k the last serious iterate, mu_k a proximal parameter, and the λ_j^k as solutions to the quadratic subproblem provided by the sub solver, see for example the proximal_bundle_method_subsolver.\n\nThough the subdifferential might be set-valued, the argument ∂f should always return one element from the subdifferential, though not necessarily deterministically.\n\nFor more details see [HNP23].\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\n∂f: the subgradient ∂f of f, implemented as a function (M, p) -> X or in-place as (M, X, p) -> X, returning one element of the subdifferential (see evaluation)\np: a point on the manifold mathcal M\n\nKeyword arguments\n\nα₀=1.2: initialization value for α, used to update η\nbundle_size=50: the maximal size of the bundle\nδ=1.0: parameter for updating μ: if δ 0 then μ = log(i + 1), else μ += δ μ\nε=1e-2: stepsize-like parameter related to the injectivity radius of the manifold\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nm=0.0125: a real number that controls the decrease of the cost function\nμ=0.5: initial proximal parameter for the subproblem\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstopping_criterion=StopWhenLagrangeMultiplierLess(1e-8)|StopAfterIteration(5000): a functor indicating that the stopping criterion is fulfilled\nsub_problem=proximal_bundle_method_subsolver: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state=AllocatingEvaluation: a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. 
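The descent direction above combines the transported subgradients with the convex coefficients λ_j^k obtained from the subproblem. In the Euclidean case, where the parallel transport mathrm P is the identity, this combination step can be sketched as follows (an illustrative translation with made-up subgradient values, not the Manopt implementation; in the solver, the coefficients come from the quadratic subproblem):

```python
# Sketch of d_k = -(1/mu_k) * sum_j lambda_j^k * X_j in R^n,
# where the parallel transport P_{p_k <- q_j} is the identity.
# lambdas must be convex coefficients (nonnegative, summing to 1).

def proximal_bundle_direction(mu, lambdas, subgradients):
    """Combine (transported) subgradients into the descent direction."""
    n = len(subgradients[0])
    d = [0.0] * n
    for lam, X in zip(lambdas, subgradients):
        for i in range(n):
            d[i] -= lam * X[i] / mu
    return d

# two made-up subgradients with convex coefficients 0.25 and 0.75
d = proximal_bundle_direction(0.5, [0.25, 0.75], [[1.0, 0.0], [0.0, 2.0]])
# d = [-(0.25*1.0)/0.5, -(0.75*2.0)/0.5] = [-0.5, -3.0]
```

The candidate iterate is then the retraction of d applied at the last serious iterate, which on a manifold additionally requires the vector transports shown in the formula.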
To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/proximal_bundle_method/#Manopt.proximal_bundle_method!","page":"Proximal bundle method","title":"Manopt.proximal_bundle_method!","text":"proximal_bundle_method(M, f, ∂f, p=rand(M), kwargs...)\nproximal_bundle_method!(M, f, ∂f, p, kwargs...)\n\nperform a proximal bundle method p^(k+1) = operatornameretr_p^(k)(-d_k), where operatornameretr is a retraction and\n\nd_k = frac1mu_k sum_jin J_k λ_j^k mathrmP_p_kq_jX_q_j\n\nwith X_q_j f(q_j), p_k the last serious iterate, mu_k a proximal parameter, and the λ_j^k as solutions to the quadratic subproblem provided by the sub solver, see for example the proximal_bundle_method_subsolver.\n\nThough the subdifferential might be set-valued, the argument ∂f should always return one element from the subdifferential, though not necessarily deterministically.\n\nFor more details see [HNP23].\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\n∂f: the subgradient ∂f of f, implemented as a function (M, p) -> X or in-place as (M, X, p) -> X, returning one element of the subdifferential (see evaluation)\np: a point on the manifold mathcal M\n\nKeyword arguments\n\nα₀=1.2: initialization value for α, used to update η\nbundle_size=50: the maximal size of the bundle\nδ=1.0: parameter for updating μ: if δ 0 then μ = log(i + 1), else μ += δ μ\nε=1e-2: stepsize-like parameter related to the injectivity radius of the manifold\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). 
Since usually the first argument is the manifold, the modified argument is the second.\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nm=0.0125: a real number that controls the decrease of the cost function\nμ=0.5: initial proximal parameter for the subproblem\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstopping_criterion=StopWhenLagrangeMultiplierLess(1e-8)|StopAfterIteration(5000): a functor indicating that the stopping criterion is fulfilled\nsub_problem=proximal_bundle_method_subsolver: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state=AllocatingEvaluation: a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. 
To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/proximal_bundle_method/#State","page":"Proximal bundle method","title":"State","text":"","category":"section"},{"location":"solvers/proximal_bundle_method/","page":"Proximal bundle method","title":"Proximal bundle method","text":"ProximalBundleMethodState","category":"page"},{"location":"solvers/proximal_bundle_method/#Manopt.ProximalBundleMethodState","page":"Proximal bundle method","title":"Manopt.ProximalBundleMethodState","text":"ProximalBundleMethodState <: AbstractManoptSolverState\n\nstores option values for a proximal_bundle_method solver.\n\nFields\n\nα: curvature-dependent parameter used to update η\nα₀: initialization value for α, used to update η\napprox_errors: approximation of the linearization errors at the last serious step\nbundle: bundle that collects each iterate with the computed subgradient at the iterate\nbundle_size: the maximal size of the bundle\nc: convex combination of the approximation errors\nd: descent direction\nδ: parameter for updating μ: if δ 0 then μ = log(i + 1), else μ += δ μ\nε: stepsize-like parameter related to the injectivity radius of the manifold\nη: curvature-dependent term for updating the approximation errors\ninverse_retraction_method::AbstractInverseRetractionMethod: an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nλ: convex coefficients that solve the subproblem\nm: the parameter to test the decrease of the cost\nμ: (initial) proximal parameter for the subproblem\nν: the stopping parameter given by ν = - μ d^2 - c\np::P: a point on the manifold mathcal Mstoring the current iterate\np_last_serious: last serious iterate\nretraction_method::AbstractRetractionMethod: a retraction operatornameretr to use, see the section on retractions\nstop::StoppingCriterion: a functor indicating that the 
stopping criterion is fulfilled\ntransported_subgradients: subgradients of the bundle that are transported to p_last_serious\nvector_transport_method::AbstractVectorTransportMethod: a vector transport mathcal T_ to use, see the section on vector transports\nX::T: a tangent vector at the point p on the manifold mathcal M storing a subgradient at the current iterate\nsub_problem::Union{AbstractManoptProblem, F}: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state::Union{AbstractManoptSolverState, F}: a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.\n\nConstructor\n\nProximalBundleMethodState(M::AbstractManifold, sub_problem, sub_state; kwargs...)\nProximalBundleMethodState(M::AbstractManifold, sub_problem=proximal_bundle_method_subsolver; evaluation=AllocatingEvaluation(), kwargs...)\n\nGenerate the state for the proximal_bundle_method on the manifold M.\n\nKeyword arguments\n\nα₀=1.2\nbundle_size=50\nδ=1.0\nε=1e-2\nμ=0.5\nm=0.0125\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\np=rand(M): a point on the manifold mathcal M to specify the initial value\nstopping_criterion=StopWhenLagrangeMultiplierLess(1e-8)|StopAfterIteration(5000): a functor indicating that the stopping criterion is fulfilled\nsub_problem=proximal_bundle_method_subsolver: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state=AllocatingEvaluation: a state to specify the sub solver to use. 
For a closed form solution, this indicates the type of function.\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\nX=zero_vector(M, p) specify the type of tangent vector to use.\n\n\n\n\n\n","category":"type"},{"location":"solvers/proximal_bundle_method/#Helpers-and-internal-functions","page":"Proximal bundle method","title":"Helpers and internal functions","text":"","category":"section"},{"location":"solvers/proximal_bundle_method/","page":"Proximal bundle method","title":"Proximal bundle method","text":"proximal_bundle_method_subsolver","category":"page"},{"location":"solvers/proximal_bundle_method/#Manopt.proximal_bundle_method_subsolver","page":"Proximal bundle method","title":"Manopt.proximal_bundle_method_subsolver","text":"λ = proximal_bundle_method_subsolver(M, p_last_serious, μ, approximation_errors, transported_subgradients)\nproximal_bundle_method_subsolver!(M, λ, p_last_serious, μ, approximation_errors, transported_subgradients)\n\nsolver for the subproblem of the proximal bundle method.\n\nThe subproblem for the proximal bundle method is\n\nbeginalign*\n operatorname*argmin_λ ℝ^lvert L_lrvert \n frac12 mu_l BigllVert sum_j L_l λ_j mathrmP_p_kq_j X_q_j BigrrVert^2\n + sum_j L_l λ_j c_j^k\n \n texts t quad \n sum_j L_l λ_j = 1\n quad λ_j 0\n quad textfor all j L_l\nendalign*\n\nwhere L_l = k if q_k is a serious iterate, and L_l = L_l-1 cup k otherwise. See [HNP23].\n\ntip: Tip\nA default subsolver based on RipQP.jl and QuadraticModels is available if these two packages are loaded.\n\n\n\n\n\n","category":"function"},{"location":"solvers/proximal_bundle_method/#Literature","page":"Proximal bundle method","title":"Literature","text":"","category":"section"},{"location":"solvers/proximal_bundle_method/","page":"Proximal bundle method","title":"Proximal bundle method","text":"N. Hoseini Monjezi, S. Nobakhtian and M. R. Pouryayevali. 
A proximal bundle algorithm for nonsmooth optimization on Riemannian manifolds. IMA Journal of Numerical Analysis 43, 293–325 (2023).\n\n\n\n","category":"page"},{"location":"solvers/cyclic_proximal_point/#Cyclic-proximal-point","page":"Cyclic Proximal Point","title":"Cyclic proximal point","text":"","category":"section"},{"location":"solvers/cyclic_proximal_point/","page":"Cyclic Proximal Point","title":"Cyclic Proximal Point","text":"The Cyclic Proximal Point (CPP) algorithm aims to minimize","category":"page"},{"location":"solvers/cyclic_proximal_point/","page":"Cyclic Proximal Point","title":"Cyclic Proximal Point","text":"F(x) = sum_i=1^c f_i(x)","category":"page"},{"location":"solvers/cyclic_proximal_point/","page":"Cyclic Proximal Point","title":"Cyclic Proximal Point","text":"assuming that the proximal maps operatornameprox_λ f_i(x) are given in closed form or can be computed efficiently (at least approximately).","category":"page"},{"location":"solvers/cyclic_proximal_point/","page":"Cyclic Proximal Point","title":"Cyclic Proximal Point","text":"The algorithm then cycles through these proximal maps, where the type of cycle might differ and the proximal parameter λ_k changes after each cycle k.","category":"page"},{"location":"solvers/cyclic_proximal_point/","page":"Cyclic Proximal Point","title":"Cyclic Proximal Point","text":"For a convergence result on Hadamard manifolds see Bačák [Bac14].","category":"page"},{"location":"solvers/cyclic_proximal_point/","page":"Cyclic Proximal Point","title":"Cyclic Proximal Point","text":"cyclic_proximal_point\ncyclic_proximal_point!","category":"page"},{"location":"solvers/cyclic_proximal_point/#Manopt.cyclic_proximal_point","page":"Cyclic Proximal Point","title":"Manopt.cyclic_proximal_point","text":"cyclic_proximal_point(M, f, proxes_f, p; kwargs...)\ncyclic_proximal_point(M, mpo, p; kwargs...)\ncyclic_proximal_point!(M, f, proxes_f; kwargs...)\ncyclic_proximal_point!(M, mpo; kwargs...)\n\nperform a cyclic proximal 
point algorithm. This can be done in-place of p.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ to minimize\nproxes_f: an Array of proximal maps (Functions) (M, λ, p) -> q or (M, q, λ, p) -> q for the summands of f (see evaluation)\n\nwhere f and the proximal maps proxes_f can also be given directly as a ManifoldProximalMapObjective mpo\n\nKeyword arguments\n\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\nevaluation_order=:Linear: whether to use a randomly permuted sequence (:FixedRandom), a per-cycle permuted sequence (:Random), or the default linear one (:Linear).\nλ=iter -> 1/iter: a function returning the (square summable but not summable) sequence of λ_i\nstopping_criterion=StopAfterIteration(5000)|StopWhenChangeLess(1e-12): a functor indicating that the stopping criterion is fulfilled\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/cyclic_proximal_point/#Manopt.cyclic_proximal_point!","page":"Cyclic Proximal Point","title":"Manopt.cyclic_proximal_point!","text":"cyclic_proximal_point(M, f, proxes_f, p; kwargs...)\ncyclic_proximal_point(M, mpo, p; kwargs...)\ncyclic_proximal_point!(M, f, proxes_f; kwargs...)\ncyclic_proximal_point!(M, mpo; kwargs...)\n\nperform a cyclic proximal point algorithm. 
This can be done in-place of p.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ to minimize\nproxes_f: an Array of proximal maps (Functions) (M, λ, p) -> q or (M, q, λ, p) -> q for the summands of f (see evaluation)\n\nwhere f and the proximal maps proxes_f can also be given directly as a ManifoldProximalMapObjective mpo\n\nKeyword arguments\n\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\nevaluation_order=:Linear: whether to use a randomly permuted sequence (:FixedRandom), a per-cycle permuted sequence (:Random), or the default linear one (:Linear).\nλ=iter -> 1/iter: a function returning the (square summable but not summable) sequence of λ_i\nstopping_criterion=StopAfterIteration(5000)|StopWhenChangeLess(1e-12): a functor indicating that the stopping criterion is fulfilled\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. 
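To illustrate the cycling scheme, here is a minimal Euclidean sketch (not the Manopt implementation): minimize F(x) = sum_i |x - a_i| on the real line, whose summands have the closed-form proximal map that moves x towards a_i by at most λ, with λ_k = 1/k per cycle k. The minimizer of this F is the median of the a_i.

```python
# Euclidean sketch of the cyclic proximal point algorithm for
# F(x) = sum_i |x - a_i| on the real line.
# prox_{lambda f_i}(x) moves x towards a_i by at most lambda.

def prox_abs(x, a, lam):
    step = a - x
    if step > lam:
        step = lam
    elif step < -lam:
        step = -lam
    return x + step

def cyclic_proximal_point(anchors, x0, cycles=2000):
    x = x0
    for k in range(1, cycles + 1):
        lam = 1.0 / k          # square summable but not summable
        for a in anchors:      # the default :Linear evaluation order
            x = prox_abs(x, a, lam)
    return x

x = cyclic_proximal_point([0.0, 1.0, 10.0], x0=5.0)
# converges towards the median of the anchors, here 1.0
```

On a manifold, only the proximal maps change; the cycling and the λ_k schedule are the same, which is why the solver needs no manifold functions beyond those used inside the proximal maps.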
To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/cyclic_proximal_point/#sec-cppa-technical-details","page":"Cyclic Proximal Point","title":"Technical details","text":"","category":"section"},{"location":"solvers/cyclic_proximal_point/","page":"Cyclic Proximal Point","title":"Cyclic Proximal Point","text":"The cyclic_proximal_point solver requires no additional functions to be available for your manifold, besides the ones you use in the proximal maps.","category":"page"},{"location":"solvers/cyclic_proximal_point/","page":"Cyclic Proximal Point","title":"Cyclic Proximal Point","text":"By default, one of the stopping criteria is StopWhenChangeLess, which either requires","category":"page"},{"location":"solvers/cyclic_proximal_point/","page":"Cyclic Proximal Point","title":"Cyclic Proximal Point","text":"An inverse_retract!(M, X, p, q); it is recommended to set the default_inverse_retraction_method to a favourite inverse retraction. If this default is set, an inverse_retraction_method= does not have to be specified; alternatively, the distance(M, p, q) for said default inverse retraction can be used.","category":"page"},{"location":"solvers/cyclic_proximal_point/#State","page":"Cyclic Proximal Point","title":"State","text":"","category":"section"},{"location":"solvers/cyclic_proximal_point/","page":"Cyclic Proximal Point","title":"Cyclic Proximal Point","text":"CyclicProximalPointState","category":"page"},{"location":"solvers/cyclic_proximal_point/#Manopt.CyclicProximalPointState","page":"Cyclic Proximal Point","title":"Manopt.CyclicProximalPointState","text":"CyclicProximalPointState <: AbstractManoptSolverState\n\nstores options for the cyclic_proximal_point algorithm. 
Fields\n\np::P: a point on the manifold mathcal M storing the current iterate\nstop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled\nλ: a function for the values of λ_k per iteration (cycle) k\norder_type: whether to use a randomly permuted sequence (:FixedRandomOrder), a per-cycle permuted sequence (:RandomOrder), or the default linear one (:LinearOrder).\n\nConstructor\n\nCyclicProximalPointState(M::AbstractManifold; kwargs...)\n\nGenerate the options\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\n\nKeyword arguments\n\nevaluation_order=:LinearOrder: specify the order_type\nλ=i -> 1.0 / i: a function to compute the λ_k for each iteration k,\np=rand(M): a point on the manifold mathcal M to specify the initial value\nstopping_criterion=StopAfterIteration(2000): a functor indicating that the stopping criterion is fulfilled\n\nSee also\n\ncyclic_proximal_point\n\n\n\n\n\n","category":"type"},{"location":"solvers/cyclic_proximal_point/#Debug-functions","page":"Cyclic Proximal Point","title":"Debug functions","text":"","category":"section"},{"location":"solvers/cyclic_proximal_point/","page":"Cyclic Proximal Point","title":"Cyclic Proximal Point","text":"DebugProximalParameter","category":"page"},{"location":"solvers/cyclic_proximal_point/#Manopt.DebugProximalParameter","page":"Cyclic Proximal Point","title":"Manopt.DebugProximalParameter","text":"DebugProximalParameter <: DebugAction\n\nprint the current iterate's proximal point algorithm parameter, given by the AbstractManoptSolverState field o.λ.\n\n\n\n\n\n","category":"type"},{"location":"solvers/cyclic_proximal_point/#Record-functions","page":"Cyclic Proximal Point","title":"Record functions","text":"","category":"section"},{"location":"solvers/cyclic_proximal_point/","page":"Cyclic Proximal Point","title":"Cyclic Proximal Point","text":"RecordProximalParameter","category":"page"},{"location":"solvers/cyclic_proximal_point/#Manopt.RecordProximalParameter","page":"Cyclic Proximal 
Point","title":"Manopt.RecordProximalParameter","text":"RecordProximalParameter <: RecordAction\n\nrecord the current iterate's proximal point algorithm parameter, given by the AbstractManoptSolverState field o.λ.\n\n\n\n\n\n","category":"type"},{"location":"solvers/cyclic_proximal_point/#Literature","page":"Cyclic Proximal Point","title":"Literature","text":"","category":"section"},{"location":"solvers/cyclic_proximal_point/","page":"Cyclic Proximal Point","title":"Cyclic Proximal Point","text":"M. Bačák. Computing medians and means in Hadamard spaces. SIAM Journal on Optimization 24, 1542–1566 (2014), arXiv:1210.2145.\n\n\n\n","category":"page"},{"location":"plans/objective/#A-manifold-objective","page":"Objective","title":"A manifold objective","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"CurrentModule = Manopt","category":"page"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"The Objective describes the actual cost function and all its properties.","category":"page"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"AbstractManifoldObjective\nAbstractDecoratedManifoldObjective","category":"page"},{"location":"plans/objective/#Manopt.AbstractManifoldObjective","page":"Objective","title":"Manopt.AbstractManifoldObjective","text":"AbstractManifoldObjective{E<:AbstractEvaluationType}\n\nDescribe the collection of the optimization function f mathcal M ℝ (or even a vectorial range) and its corresponding elements, which might for example be a gradient or (one or more) proximal maps.\n\nAll these elements should usually be implemented as functions (M, p) -> ..., or (M, X, p) -> ... 
that is\n\nthe first argument of these functions should be the manifold M they are defined on\nthe argument X is present, if the computation is performed in-place of X (see InplaceEvaluation)\nthe argument p is the place the function (f or one of its elements) is evaluated at.\n\nthe type T indicates the global AbstractEvaluationType.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Manopt.AbstractDecoratedManifoldObjective","page":"Objective","title":"Manopt.AbstractDecoratedManifoldObjective","text":"AbstractDecoratedManifoldObjective{E<:AbstractEvaluationType,O<:AbstractManifoldObjective}\n\nA common supertype for all decorators of AbstractManifoldObjectives to simplify dispatch. The second parameter should refer to the undecorated objective (the most inner one).\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"Which has two main different possibilities for its containing functions concerning the evaluation mode, not necessarily the cost, but for example gradient in an AbstractManifoldGradientObjective.","category":"page"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"AbstractEvaluationType\nAllocatingEvaluation\nInplaceEvaluation\nevaluation_type","category":"page"},{"location":"plans/objective/#Manopt.AbstractEvaluationType","page":"Objective","title":"Manopt.AbstractEvaluationType","text":"AbstractEvaluationType\n\nAn abstract type to specify the kind of evaluation a AbstractManifoldObjective supports.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Manopt.AllocatingEvaluation","page":"Objective","title":"Manopt.AllocatingEvaluation","text":"AllocatingEvaluation <: AbstractEvaluationType\n\nA parameter for a AbstractManoptProblem indicating that the problem uses functions that allocate memory for their result, they work out of 
place.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Manopt.InplaceEvaluation","page":"Objective","title":"Manopt.InplaceEvaluation","text":"InplaceEvaluation <: AbstractEvaluationType\n\nA parameter for an AbstractManoptProblem indicating that the problem uses functions that do not allocate memory but work on their input, in-place.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Manopt.evaluation_type","page":"Objective","title":"Manopt.evaluation_type","text":"evaluation_type(mp::AbstractManoptProblem)\n\nGet the AbstractEvaluationType of the objective in AbstractManoptProblem mp.\n\n\n\n\n\nevaluation_type(::AbstractManifoldObjective{Teval})\n\nGet the AbstractEvaluationType of the objective.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Decorators-for-objectives","page":"Objective","title":"Decorators for objectives","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"An objective can be decorated using the following trait and function to initialize","category":"page"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"dispatch_objective_decorator\nis_objective_decorator\ndecorate_objective!","category":"page"},{"location":"plans/objective/#Manopt.dispatch_objective_decorator","page":"Objective","title":"Manopt.dispatch_objective_decorator","text":"dispatch_objective_decorator(o::AbstractManoptSolverState)\n\nIndicate internally whether an AbstractManifoldObjective o is of decorating type, that is, whether it stores (encapsulates) an object in itself, by default in the field o.objective.\n\nDecorators indicate this by returning Val{true} for further dispatch.\n\nThe default is Val{false}, so by default a state is not decorated.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Manopt.is_objective_decorator","page":"Objective","title":"Manopt.is_objective_decorator","text":"is_objective_decorator(s::AbstractManifoldObjective)\n\nIndicate whether the AbstractManifoldObjective s is of decorator type.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Manopt.decorate_objective!","page":"Objective","title":"Manopt.decorate_objective!","text":"decorate_objective!(M, o::AbstractManifoldObjective)\n\ndecorate the AbstractManifoldObjective o with specific decorators.\n\nOptional arguments\n\noptional arguments provide necessary details on the decorators. A specific one is used to activate certain decorators.\n\ncache=missing: specify a cache. Currently :Simple is supported, as well as :LRU if you load LRUCache.jl. In the latter case, a tuple specifying what to cache and how many values to keep can be provided. For example (:LRU, [:Cost, :Gradient], 10) states that the last 10 used cost function evaluations and gradient evaluations should be stored. See objective_cache_factory for details.\ncount=missing: specify which calls to the objective should be counted; see ManifoldCountObjective for the full list\nobjective_type=:Riemannian: specify that an objective is :Riemannian or :Euclidean. 
The :Euclidean symbol is equivalent to specifying it as :Embedded, since in the end, both refer to converting an objective from the embedding (whether it is Euclidean or not) to the Riemannian one.\n\nSee also\n\nobjective_cache_factory\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#subsection-embedded-objectives","page":"Objective","title":"Embedded objectives","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"EmbeddedManifoldObjective","category":"page"},{"location":"plans/objective/#Manopt.EmbeddedManifoldObjective","page":"Objective","title":"Manopt.EmbeddedManifoldObjective","text":"EmbeddedManifoldObjective{P, T, E, O2, O1<:AbstractManifoldObjective{E}} <:\n AbstractDecoratedManifoldObjective{E,O2}\n\nDeclare an objective to be defined in the embedding. This also declares the gradient to be defined in the embedding and, in particular, to be the Riesz representer with respect to the metric in the embedding. The types can still be used to dispatch also on the undecorated objective type O2.\n\nFields\n\nobjective: the objective that is defined in the embedding\np=nothing: a point in the embedding.\nX=nothing: a tangent vector in the embedding\n\nWhen a point p in the embedding is provided, embed! is used in-place of this point to reduce memory allocations. 
Similarly X is used when embedding tangent vectors\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#subsection-cache-objective","page":"Objective","title":"Cache objective","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"Since single function calls, for example to the cost or the gradient, might be expensive, a simple cache objective exists as a decorator, that caches one cost value or gradient.","category":"page"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"It can be activated/used with the cache= keyword argument available for every solver.","category":"page"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"Manopt.reset_counters!\nManopt.objective_cache_factory","category":"page"},{"location":"plans/objective/#Manopt.reset_counters!","page":"Objective","title":"Manopt.reset_counters!","text":"reset_counters(co::ManifoldCountObjective, value::Integer=0)\n\nReset all values in the count objective to value.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Manopt.objective_cache_factory","page":"Objective","title":"Manopt.objective_cache_factory","text":"objective_cache_factory(M::AbstractManifold, o::AbstractManifoldObjective, cache::Symbol)\n\nGenerate a cached variant of the AbstractManifoldObjective o on the AbstractManifold M based on the symbol cache.\n\nThe following caches are available\n\n:Simple generates a SimpleManifoldCachedObjective\n:LRU generates a ManifoldCachedObjective where you should use the form (:LRU, [:Cost, :Gradient]) to specify what should be cached or (:LRU, [:Cost, :Gradient], 100) to specify the cache size. 
Here this variant defaults to (:LRU, [:Cost, :Gradient], 100), caching up to 100 cost and gradient values.[1]\n\n[1]: This cache requires LRUCache.jl to be loaded as well.\n\n\n\n\n\nobjective_cache_factory(M::AbstractManifold, o::AbstractManifoldObjective, cache::Tuple{Symbol, Array, Array})\nobjective_cache_factory(M::AbstractManifold, o::AbstractManifoldObjective, cache::Tuple{Symbol, Array})\n\nGenerate a cached variant of the AbstractManifoldObjective o on the AbstractManifold M based on the symbol cache[1], where the second element cache[2] contains further arguments for the cache and the optional third element is passed down as keyword arguments.\n\nFor all available caches see the simpler variant with symbols.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#A-simple-cache","page":"Objective","title":"A simple cache","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"A first generic cache is always available, but it only caches one gradient and one cost function evaluation (for the same point).","category":"page"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"SimpleManifoldCachedObjective","category":"page"},{"location":"plans/objective/#Manopt.SimpleManifoldCachedObjective","page":"Objective","title":"Manopt.SimpleManifoldCachedObjective","text":" SimpleManifoldCachedObjective{O<:AbstractManifoldGradientObjective{E,TC,TG}, P, T,C} <: AbstractManifoldGradientObjective{E,TC,TG}\n\nProvide a simple cache for an AbstractManifoldGradientObjective: for a given point p this cache stores the point p and the gradient operatornamegrad f(p) in X as well as the cost value f(p) in c.\n\nBoth X and c are accompanied by booleans to keep track of their validity.\n\nConstructor\n\nSimpleManifoldCachedObjective(M::AbstractManifold, obj::AbstractManifoldGradientObjective; kwargs...)\n\nKeyword arguments\n\np=rand(M): a point on the manifold to initialize the cache 
with\nX=get_gradient(M, obj, p) or zero_vector(M,p): a tangent vector to store the gradient in, see also initialized=\nc=get_cost(M, obj, p) or 0.0: a value to store the cost function in, see also initialized=\ninitialized=true: whether to initialize the cached X and c or not.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#A-generic-cache","page":"Objective","title":"A generic cache","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"For the more advanced cache, you need to implement some type of cache yourself that provides a get! and implements init_caches. This is for example provided if you load LRUCache.jl. Then you obtain","category":"page"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"ManifoldCachedObjective\ninit_caches","category":"page"},{"location":"plans/objective/#Manopt.ManifoldCachedObjective","page":"Objective","title":"Manopt.ManifoldCachedObjective","text":"ManifoldCachedObjective{E,P,O<:AbstractManifoldObjective{<:E},C<:NamedTuple{}} <: AbstractDecoratedManifoldObjective{E,P}\n\nCreate a cache for an objective, based on a NamedTuple that stores some kind of cache.\n\nConstructor\n\nManifoldCachedObjective(M, o::AbstractManifoldObjective, caches::Vector{Symbol}; kwargs...)\n\nCreate a cache for the AbstractManifoldObjective where the Symbols in caches indicate which function evaluations to cache.\n\nSupported symbols\n\nSymbol Caches calls to (incl. ! 
variants) Comment\n:Cost get_cost \n:EqualityConstraint get_equality_constraint(M, p, i) \n:EqualityConstraints get_equality_constraint(M, p, :) \n:GradEqualityConstraint get_grad_equality_constraint tangent vector per (p,i)\n:GradInequalityConstraint get_grad_inequality_constraint tangent vector per (p,i)\n:Gradient get_gradient(M,p) tangent vectors\n:Hessian get_hessian tangent vectors\n:InequalityConstraint get_inequality_constraint(M, p, j) \n:InequalityConstraints get_inequality_constraint(M, p, :) \n:Preconditioner get_preconditioner tangent vectors\n:ProximalMap get_proximal_map point per (p,λ,i)\n:StochasticGradients get_gradients vector of tangent vectors\n:StochasticGradient get_gradient(M, p, i) tangent vector per (p,i)\n:SubGradient get_subgradient tangent vectors\n:SubtrahendGradient get_subtrahend_gradient tangent vectors\n\nKeyword arguments\n\np=rand(M): the type of the keys to be used in the caches. Defaults to the default representation on M.\nvalue=get_cost(M, objective, p): the type of values for numeric values in the cache\nX=zero_vector(M,p): the type of values to be cached for gradient and Hessian calls.\ncache=[:Cost]: a vector of symbols indicating which function calls should be cached.\ncache_size=10: number of (least recently used) calls to cache\ncache_sizes=Dict{Symbol,Int}(): a named tuple or dictionary specifying the sizes individually for each cache.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Manopt.init_caches","page":"Objective","title":"Manopt.init_caches","text":"init_caches(caches, T::Type{LRU}; kwargs...)\n\nGiven a vector of symbols caches, this function sets up the NamedTuple of caches, where T is the type of cache to use.\n\nKeyword arguments\n\np=rand(M): a point on a manifold, to both infer its type for keys and initialize caches\nvalue=0.0: a value for both typing and initialising number-caches, the default is for (Float) values like the cost.\nX=zero_vector(M, p): a tangent vector at p to both type and 
initialize tangent vector caches\ncache_size=10: a default cache size to use\ncache_sizes=Dict{Symbol,Int}(): a dictionary of sizes for the caches to specify different (non-default) sizes\n\n\n\n\n\ninit_caches(M::AbstractManifold, caches, T; kwargs...)\n\nGiven a vector of symbols caches, this function sets up the NamedTuple of caches for points/vectors on M, where T is the type of cache to use.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#subsection-count-objective","page":"Objective","title":"Count objective","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"ManifoldCountObjective","category":"page"},{"location":"plans/objective/#Manopt.ManifoldCountObjective","page":"Objective","title":"Manopt.ManifoldCountObjective","text":"ManifoldCountObjective{E,P,O<:AbstractManifoldObjective,I<:Integer} <: AbstractDecoratedManifoldObjective{E,P}\n\nA wrapper for any AbstractManifoldObjective of type O to count different calls to parts of the objective.\n\nFields\n\ncounts a dictionary of symbols mapping to integers keeping the counted values\nobjective the wrapped objective\n\nSupported symbols\n\nSymbol Counts calls to (incl. ! 
variants) Comment\n:Cost get_cost \n:EqualityConstraint get_equality_constraint requires vector of counters\n:EqualityConstraints get_equality_constraint when evaluating all of them with :\n:GradEqualityConstraint get_grad_equality_constraint requires vector of counters\n:GradEqualityConstraints get_grad_equality_constraint when evaluating all of them with :\n:GradInequalityConstraint get_grad_inequality_constraint requires vector of counters\n:GradInequalityConstraints get_grad_inequality_constraint when evaluating all of them with :\n:Gradient get_gradient(M,p) \n:Hessian get_hessian \n:InequalityConstraint get_inequality_constraint requires vector of counters\n:InequalityConstraints get_inequality_constraint when evaluating all of them with :\n:Preconditioner get_preconditioner \n:ProximalMap get_proximal_map \n:StochasticGradients get_gradients \n:StochasticGradient get_gradient(M, p, i) \n:SubGradient get_subgradient \n:SubtrahendGradient get_subtrahend_gradient \n\nConstructors\n\nManifoldCountObjective(objective::AbstractManifoldObjective, counts::Dict{Symbol, <:Integer})\n\nInitialise the ManifoldCountObjective to wrap objective, initializing the set of counts.\n\nManifoldCountObjective(M::AbstractManifold, objective::AbstractManifoldObjective, count::AbstractVector{Symbol}, init=0)\n\nCount function calls on objective using the symbols in count, initialising all entries to init.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Internal-decorators","page":"Objective","title":"Internal decorators","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"ReturnManifoldObjective","category":"page"},{"location":"plans/objective/#Manopt.ReturnManifoldObjective","page":"Objective","title":"Manopt.ReturnManifoldObjective","text":"ReturnManifoldObjective{E,O2,O1<:AbstractManifoldObjective{E}} <:\n AbstractDecoratedManifoldObjective{E,O2}\n\nA wrapper to indicate that get_solver_result should return the inner 
objective.\n\nThe types are such that one can still dispatch on the undecorated type O2 of the original objective as well.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Specific-Objective-typed-and-their-access-functions","page":"Objective","title":"Specific Objective types and their access functions","text":"","category":"section"},{"location":"plans/objective/#Cost-objective","page":"Objective","title":"Cost objective","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"AbstractManifoldCostObjective\nManifoldCostObjective","category":"page"},{"location":"plans/objective/#Manopt.AbstractManifoldCostObjective","page":"Objective","title":"Manopt.AbstractManifoldCostObjective","text":"AbstractManifoldCostObjective{T<:AbstractEvaluationType} <: AbstractManifoldObjective{T}\n\nRepresenting objectives on manifolds with a cost function implemented.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Manopt.ManifoldCostObjective","page":"Objective","title":"Manopt.ManifoldCostObjective","text":"ManifoldCostObjective{T, TC} <: AbstractManifoldCostObjective{T, TC}\n\nspecify an AbstractManifoldObjective that only has information about the cost function f mathcal M ℝ, implemented as a function (M, p) -> c to compute the cost value c at p on the manifold M.\n\ncost: a function f mathcal M ℝ to minimize\n\nConstructors\n\nManifoldCostObjective(f)\n\nGenerate a problem. 
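As a hedged sketch of how such a cost-only objective is created and evaluated (assuming Manopt.jl and Manifolds.jl are loaded; the cost function is illustrative):

```julia
using Manopt, Manifolds

M = Sphere(2)
f(M, p) = p[3]^2                      # an illustrative cost on the sphere
mco = ManifoldCostObjective(f)
c = get_cost(M, mco, [0.0, 0.0, 1.0]) # evaluate the stored cost at a point
```

Since only the cost is available, this objective is suited for derivative-free solvers such as NelderMead or particle_swarm.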
While this objective does not have any allocating functions, the type T can be set for consistency reasons with other problems.\n\nUsed with\n\nNelderMead, particle_swarm\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Access-functions","page":"Objective","title":"Access functions","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"get_cost","category":"page"},{"location":"plans/objective/#Manopt.get_cost","page":"Objective","title":"Manopt.get_cost","text":"get_cost(amp::AbstractManoptProblem, p)\n\nevaluate the cost function f stored within the AbstractManifoldObjective of an AbstractManoptProblem amp at the point p.\n\n\n\n\n\nget_cost(M::AbstractManifold, obj::AbstractManifoldObjective, p)\n\nevaluate the cost function f defined on M stored within the AbstractManifoldObjective at the point p.\n\n\n\n\n\nget_cost(M::AbstractManifold, mco::AbstractManifoldCostObjective, p)\n\nEvaluate the cost function from within the AbstractManifoldCostObjective on M at p.\n\nBy default this implementation assumes that the cost is stored within mco.cost.\n\n\n\n\n\nget_cost(TpM, trmo::TrustRegionModelObjective, X)\n\nEvaluate the tangent space TrustRegionModelObjective\n\nm(X) = f(p) + operatornamegrad f(p) X _p + frac12 operatornameHess f(p)X X_p\n\n\n\n\n\nget_cost(TpM, trmo::AdaptiveRagularizationWithCubicsModelObjective, X)\n\nEvaluate the tangent space AdaptiveRagularizationWithCubicsModelObjective\n\nm(X) = f(p) + operatornamegrad f(p) X _p + frac12 operatornameHess f(p)X X_p\n + fracσ3 lVert X rVert^3\n\nat X, cf. Eq. 
(33) in [ABBC20].\n\n\n\n\n\nget_cost(TpM::TangentSpace, slso::SymmetricLinearSystemObjective, X)\n\nevaluate the cost\n\nf(X) = frac12 lVert mathcal AX + b rVert_p^2qquad X T_pmathcal M\n\nat X.\n\n\n\n\n\nget_cost(M::AbstractManifold, sgo::ManifoldStochasticGradientObjective, p, i)\n\nEvaluate the ith summand of the cost.\n\nIf you use a single function for the stochastic cost, then only the index i=1 is available to evaluate the whole cost.\n\n\n\n\n\nget_cost(M::AbstractManifold, emo::EmbeddedManifoldObjective, p)\n\nEvaluate the cost function of an objective defined in the embedding by embedding p before calling the cost function stored in the EmbeddedManifoldObjective.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"and internally","category":"page"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"get_cost_function","category":"page"},{"location":"plans/objective/#Manopt.get_cost_function","page":"Objective","title":"Manopt.get_cost_function","text":"get_cost_function(amco::AbstractManifoldCostObjective)\n\nreturn the function to evaluate (just) the cost f(p)=c as a function (M,p) -> c.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Gradient-objectives","page":"Objective","title":"Gradient objectives","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"AbstractManifoldGradientObjective\nManifoldGradientObjective\nManifoldAlternatingGradientObjective\nManifoldStochasticGradientObjective\nNonlinearLeastSquaresObjective","category":"page"},{"location":"plans/objective/#Manopt.AbstractManifoldGradientObjective","page":"Objective","title":"Manopt.AbstractManifoldGradientObjective","text":"AbstractManifoldGradientObjective{E<:AbstractEvaluationType, TC, TG} <: AbstractManifoldCostObjective{E, TC}\n\nAn abstract type for all objectives that provide a (full) gradient, where T is an 
AbstractEvaluationType for the gradient function.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Manopt.ManifoldGradientObjective","page":"Objective","title":"Manopt.ManifoldGradientObjective","text":"ManifoldGradientObjective{T<:AbstractEvaluationType} <: AbstractManifoldGradientObjective{T}\n\nspecify an objective containing a cost and its gradient\n\nFields\n\ncost: a function f mathcal M ℝ\ngradient!!: the gradient operatornamegradf mathcal M mathcal Tmathcal M of the cost function f.\n\nDepending on the AbstractEvaluationType T the gradient can have two forms\n\nas a function (M, p) -> X that allocates memory for X, an AllocatingEvaluation\nas a function (M, X, p) -> X that works in place of X, an InplaceEvaluation\n\nConstructors\n\nManifoldGradientObjective(cost, gradient; evaluation=AllocatingEvaluation())\n\nUsed with\n\ngradient_descent, conjugate_gradient_descent, quasi_Newton\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Manopt.ManifoldAlternatingGradientObjective","page":"Objective","title":"Manopt.ManifoldAlternatingGradientObjective","text":"ManifoldAlternatingGradientObjective{E<:AbstractEvaluationType,TCost,TGradient} <: AbstractManifoldGradientObjective{E}\n\nAn alternating gradient objective consists of\n\na cost function F(x)\na gradient operatornamegradF that is either\ngiven as one function operatornamegradF returning a tangent vector X on M or\nan array of gradient functions operatornamegradF_i, i=1,…,n, each returning a component of the gradient\nwhich might be allocating or mutating variants, but not a mix of both.\n\nnote: Note\nThis objective is usually defined using the ProductManifold from Manifolds.jl, so Manifolds.jl needs to be loaded.\n\nConstructors\n\nManifoldAlternatingGradientObjective(F, gradF::Function;\n evaluation=AllocatingEvaluation()\n)\nManifoldAlternatingGradientObjective(F, gradF::AbstractVector{<:Function};\n evaluation=AllocatingEvaluation()\n)\n\nCreate an alternating gradient problem with an 
optional cost and the gradient either as one function (returning an array) or a vector of functions.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Manopt.ManifoldStochasticGradientObjective","page":"Objective","title":"Manopt.ManifoldStochasticGradientObjective","text":"ManifoldStochasticGradientObjective{T<:AbstractEvaluationType} <: AbstractManifoldGradientObjective{T}\n\nA stochastic gradient objective consists of\n\na(n optional) cost function f(p) = displaystylesum_i=1^n f_i(p)\nan array of gradients, operatornamegradf_i(p) i=1ldotsn which can be given in two forms\nas one single function (mathcal M p) (X_1X_n) (T_pmathcal M)^n\nas a vector of functions bigl( (mathcal M p) X_1 (mathcal M p) X_nbigr).\n\nWhere both variants can also be provided as InplaceEvaluation functions (M, X, p) -> X, where X is the vector of X1,...,Xn and (M, X1, p) -> X1, ..., (M, Xn, p) -> Xn, respectively.\n\nConstructors\n\nManifoldStochasticGradientObjective(\n grad_f::Function;\n cost=Missing(),\n evaluation=AllocatingEvaluation()\n)\nManifoldStochasticGradientObjective(\n grad_f::AbstractVector{<:Function};\n cost=Missing(), evaluation=AllocatingEvaluation()\n)\n\nCreate a stochastic gradient problem with the gradient either as one function (returning an array of tangent vectors) or a vector of functions (each returning one tangent vector).\n\nThe optional cost can also be given as either a single function (returning a number) or a vector of functions, each returning a value.\n\nUsed with\n\nstochastic_gradient_descent\n\nNote that this can also be used with a gradient_descent, since the (complete) gradient is just the sum of the single gradients.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Manopt.NonlinearLeastSquaresObjective","page":"Objective","title":"Manopt.NonlinearLeastSquaresObjective","text":"NonlinearLeastSquaresObjective{T<:AbstractEvaluationType} <: AbstractManifoldObjective{T}\n\nA type for nonlinear least squares problems. 
T is an AbstractEvaluationType for the F and Jacobian functions.\n\nSpecify a nonlinear least squares problem\n\nFields\n\nf a function f mathcal M ℝ^d to minimize\njacobian!! Jacobian of the function f\njacobian_tangent_basis the basis of tangent space used for computing the Jacobian.\nnum_components number of values returned by f (equal to d).\n\nDepending on the AbstractEvaluationType T the function F has to be provided:\n\nas a function (M::AbstractManifold, p) -> v that allocates memory for v itself for an AllocatingEvaluation,\nas a function (M::AbstractManifold, v, p) -> v that works in place of v for an InplaceEvaluation.\n\nAlso the Jacobian jacF is required:\n\nas a function (M::AbstractManifold, p; basis_domain::AbstractBasis) -> v that allocates memory for v itself for an AllocatingEvaluation,\nas a function (M::AbstractManifold, v, p; basis_domain::AbstractBasis) -> v that works in place of v for an InplaceEvaluation.\n\nConstructors\n\nNonlinearLeastSquaresObjective(M, F, jacF, num_components; evaluation=AllocatingEvaluation(), jacobian_tangent_basis=DefaultOrthonormalBasis())\n\nSee also\n\nLevenbergMarquardt, LevenbergMarquardtState\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"There is also a second variant, if just one function is responsible for computing the cost and the gradient","category":"page"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"ManifoldCostGradientObjective","category":"page"},{"location":"plans/objective/#Manopt.ManifoldCostGradientObjective","page":"Objective","title":"Manopt.ManifoldCostGradientObjective","text":"ManifoldCostGradientObjective{T} <: AbstractManifoldObjective{T}\n\nspecify an objective containing one function to perform a combined computation of cost and its gradient\n\nFields\n\ncostgrad!!: a function that computes both the cost f mathcal M ℝ and its gradient operatornamegradf mathcal M mathcal Tmathcal M\n\nDepending 
on the AbstractEvaluationType T the gradient can have two forms\n\nas a function (M, p) -> (c, X) that allocates memory for the gradient X, an AllocatingEvaluation\nas a function (M, X, p) -> (c, X) that works in place of X, an InplaceEvaluation\n\nConstructors\n\nManifoldCostGradientObjective(costgrad; evaluation=AllocatingEvaluation())\n\nUsed with\n\ngradient_descent, conjugate_gradient_descent, quasi_Newton\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Access-functions-2","page":"Objective","title":"Access functions","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"get_gradient\nget_gradients","category":"page"},{"location":"plans/objective/#Manopt.get_gradient","page":"Objective","title":"Manopt.get_gradient","text":"get_gradient(s::AbstractManoptSolverState)\n\nreturn the (last stored) gradient within an AbstractManoptSolverState. By default this also undecorates the state beforehand.\n\n\n\n\n\nget_gradient(amp::AbstractManoptProblem, p)\nget_gradient!(amp::AbstractManoptProblem, X, p)\n\nevaluate the gradient of an AbstractManoptProblem amp at the point p.\n\nThe evaluation is done in place of X for the !-variant.\n\n\n\n\n\nget_gradient(M::AbstractManifold, mgo::AbstractManifoldGradientObjective{T}, p)\nget_gradient!(M::AbstractManifold, X, mgo::AbstractManifoldGradientObjective{T}, p)\n\nevaluate the gradient of an AbstractManifoldGradientObjective{T} mgo at p.\n\nThe evaluation is done in place of X for the !-variant. The T=AllocatingEvaluation problem might still allocate memory within. 
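A small sketch contrasting the allocating call with the in-place variant (assuming Manopt.jl and Manifolds.jl are loaded; cost and gradient are illustrative):

```julia
using Manopt, Manifolds

M = Sphere(2)
f(M, p) = p[3]
grad_f(M, p) = project(M, p, [0.0, 0.0, 1.0])   # Riemannian gradient via projection
mgo = ManifoldGradientObjective(f, grad_f)
p = [1.0, 0.0, 0.0]
X = get_gradient(M, mgo, p)   # allocates the resulting tangent vector
get_gradient!(M, X, mgo, p)   # writes the result in place of X
```

Both calls yield the same tangent vector; the in-place variant avoids the allocation on repeated evaluations.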
When the non-mutating variant is called with a T=InplaceEvaluation, memory for the result is allocated.\n\nNote that the order of parameters follows the philosophy of Manifolds.jl, namely that even for the mutating variant, the manifold is the first parameter and the (in-place) tangent vector X comes second.\n\n\n\n\n\nget_gradient(agst::AbstractGradientSolverState)\n\nreturn the gradient stored within gradient options. The default returns agst.X.\n\n\n\n\n\nget_gradient(M::AbstractManifold, vgf::VectorGradientFunction, p, i)\nget_gradient(M::AbstractManifold, vgf::VectorGradientFunction, p, i, range)\nget_gradient!(M::AbstractManifold, X, vgf::VectorGradientFunction, p, i)\nget_gradient!(M::AbstractManifold, X, vgf::VectorGradientFunction, p, i, range)\n\nEvaluate the gradients of the vector function vgf on the manifold M at p and the values given in range, specifying the representation of the gradients.\n\nSince i is assumed to be a linear index, you can provide\n\na single integer\na UnitRange to specify a range to be returned like 1:3\na BitVector specifying a selection\nan AbstractVector{<:Integer} to specify indices\n: to return the vector of all gradients\n\n\n\n\n\nget_gradient(TpM, trmo::TrustRegionModelObjective, X)\n\nEvaluate the gradient of the TrustRegionModelObjective\n\noperatornamegrad m(X) = operatornamegrad f(p) + operatornameHess f(p)X\n\n\n\n\n\nget_gradient(TpM, trmo::AdaptiveRagularizationWithCubicsModelObjective, X)\n\nEvaluate the gradient of the AdaptiveRagularizationWithCubicsModelObjective\n\noperatornamegrad m(X) = operatornamegrad f(p) + operatornameHess f(p)X\n + σlVert X rVert X\n\nat X, cf. Eq. (37) in [ABBC20].\n\n\n\n\n\nget_gradient(TpM::TangentSpace, slso::SymmetricLinearSystemObjective, X)\nget_gradient!(TpM::TangentSpace, Y, slso::SymmetricLinearSystemObjective, X)\n\nevaluate the gradient of\n\nf(X) = frac12 lVert mathcal AX + b rVert_p^2qquad X T_pmathcal M\n\nWhich is operatornamegrad f(X) = mathcal AX+b. 
This can be computed in-place of Y.\n\n\n\n\n\nget_gradient(M::AbstractManifold, sgo::ManifoldStochasticGradientObjective, p, k)\nget_gradient!(M::AbstractManifold, sgo::ManifoldStochasticGradientObjective, Y, p, k)\n\nEvaluate one of the summands gradients operatornamegradf_k, k1n, at p (in place of Y).\n\nIf you use a single function for the stochastic gradient, that works in-place, then get_gradient is not available, since the length (or number of elements of the gradient required for allocation) cannot be determined.\n\n\n\n\n\nget_gradient(M::AbstractManifold, sgo::ManifoldStochasticGradientObjective, p)\nget_gradient!(M::AbstractManifold, sgo::ManifoldStochasticGradientObjective, X, p)\n\nEvaluate the complete gradient operatornamegrad f = displaystylesum_i=1^n operatornamegrad f_i(p) at p (in place of X).\n\nIf you use a single function for the stochastic gradient, that works in-place, then get_gradient is not available, since the length (or number of elements of the gradient required for allocation) cannot be determined.\n\n\n\n\n\nget_gradient(M::AbstractManifold, emo::EmbeddedManifoldObjective, p)\nget_gradient!(M::AbstractManifold, X, emo::EmbeddedManifoldObjective, p)\n\nEvaluate the gradient function of an objective defined in the embedding, that is, embed p before calling the gradient function stored in the EmbeddedManifoldObjective.\n\nThe returned gradient is then converted to a Riemannian gradient by calling riemannian_gradient.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Manopt.get_gradients","page":"Objective","title":"Manopt.get_gradients","text":"get_gradients(M::AbstractManifold, sgo::ManifoldStochasticGradientObjective, p)\nget_gradients!(M::AbstractManifold, X, sgo::ManifoldStochasticGradientObjective, p)\n\nEvaluate all summands gradients operatornamegradf_i_i=1^n at p (in place of X).\n\nIf you use a single function for the stochastic gradient, that works in-place, then get_gradients is not available, since the length 
(or number of elements of the gradient) cannot be determined.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"and internally","category":"page"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"get_gradient_function","category":"page"},{"location":"plans/objective/#Manopt.get_gradient_function","page":"Objective","title":"Manopt.get_gradient_function","text":"get_gradient_function(amgo::AbstractManifoldGradientObjective, recursive=false)\n\nreturn the function to evaluate (just) the gradient operatornamegrad f(p), where either the gradient function using the decorator or without the decorator is used.\n\nBy default recursive is set to false, since usually, when just passing the gradient function somewhere, one still wants for example the cached one or the one that still counts calls.\n\nDepending on the AbstractEvaluationType E this is a function\n\n(M, p) -> X for the AllocatingEvaluation case\n(M, X, p) -> X for the InplaceEvaluation working in-place of X.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Internal-helpers","page":"Objective","title":"Internal helpers","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"get_gradient_from_Jacobian!","category":"page"},{"location":"plans/objective/#Manopt.get_gradient_from_Jacobian!","page":"Objective","title":"Manopt.get_gradient_from_Jacobian!","text":"get_gradient_from_Jacobian!(\n M::AbstractManifold,\n X,\n nlso::NonlinearLeastSquaresObjective{InplaceEvaluation},\n p,\n Jval=zeros(nlso.num_components, manifold_dimension(M)),\n)\n\nCompute the gradient of the NonlinearLeastSquaresObjective nlso at the point p in place of X, with the temporary Jacobian stored in the optional argument Jval.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Subgradient-objective","page":"Objective","title":"Subgradient 
objective","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"ManifoldSubgradientObjective","category":"page"},{"location":"plans/objective/#Manopt.ManifoldSubgradientObjective","page":"Objective","title":"Manopt.ManifoldSubgradientObjective","text":"ManifoldSubgradientObjective{T<:AbstractEvaluationType,C,S} <:AbstractManifoldCostObjective{T, C}\n\nA structure to store information about an objective for a subgradient based optimization problem\n\nFields\n\ncost: the function f to be minimized\nsubgradient: a function returning a subgradient ∂f of f\n\nConstructor\n\nManifoldSubgradientObjective(f, ∂f)\n\nGenerate the ManifoldSubgradientObjective for a subgradient objective, consisting of a (cost) function f(M, p) and a function ∂f(M, p) that returns a not necessarily deterministic element from the subdifferential at p on a manifold M.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Access-functions-3","page":"Objective","title":"Access functions","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"get_subgradient","category":"page"},{"location":"plans/objective/#Manopt.get_subgradient","page":"Objective","title":"Manopt.get_subgradient","text":"X = get_subgradient(M::AbstractManifold, sgo::AbstractManifoldGradientObjective, p)\nget_subgradient!(M::AbstractManifold, X, sgo::AbstractManifoldGradientObjective, p)\n\nEvaluate the subgradient, which for the case of an objective having a gradient, means evaluating the gradient itself.\n\nWhile in general, the result might not be deterministic, for this case it is.\n\n\n\n\n\nget_subgradient(amp::AbstractManoptProblem, p)\nget_subgradient!(amp::AbstractManoptProblem, X, p)\n\nevaluate the subgradient of an AbstractManoptProblem amp at point p.\n\nThe evaluation is done in place of X for the !-variant. 
The result might not be deterministic; one element of the subdifferential is returned.\n\n\n\n\n\nX = get_subgradient(M::AbstractManifold, sgo::ManifoldSubgradientObjective, p)\nget_subgradient!(M::AbstractManifold, X, sgo::ManifoldSubgradientObjective, p)\n\nEvaluate the (sub)gradient of a ManifoldSubgradientObjective sgo at the point p.\n\nThe evaluation is done in place of X for the !-variant. The result might not be deterministic; one element of the subdifferential is returned.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Proximal-map-objective","page":"Objective","title":"Proximal map objective","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"ManifoldProximalMapObjective","category":"page"},{"location":"plans/objective/#Manopt.ManifoldProximalMapObjective","page":"Objective","title":"Manopt.ManifoldProximalMapObjective","text":"ManifoldProximalMapObjective{E<:AbstractEvaluationType, TC, TP, V <: Vector{<:Integer}} <: AbstractManifoldCostObjective{E, TC}\n\nspecify a problem for solvers based on the evaluation of proximal maps, which represents proximal maps operatornameprox_λf_i for summands f = f_1 + f_2+ + f_N of the cost function f.\n\nFields\n\ncost: a function fmathcal Mℝ to minimize\nproxes: proximal maps operatornameprox_λf_imathcal M mathcal M as functions (M, λ, p) -> q or in-place (M, q, λ, p).\nnumber_of_proxes: number of proximal maps per function; if one of the maps is a combined one, such that the proximal map functions return more than one entry per function, you have to adapt this value. 
If not specified, it is set to one prox per function.\n\nConstructor\n\nManifoldProximalMapObjective(f, proxes_f::Union{Tuple,AbstractVector}, number_of_proxes=ones(length(proxes));\n evaluation=AllocatingEvaluation())\n\nGenerate a proximal problem with a tuple or vector of functions, where by default every function computes a single prox of one component of f.\n\nManifoldProximalMapObjective(f, prox_f; evaluation=AllocatingEvaluation())\n\nGenerate a proximal objective for f and its proximal map operatornameprox_λf\n\nSee also\n\ncyclic_proximal_point, get_cost, get_proximal_map\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Access-functions-4","page":"Objective","title":"Access functions","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"get_proximal_map","category":"page"},{"location":"plans/objective/#Manopt.get_proximal_map","page":"Objective","title":"Manopt.get_proximal_map","text":"q = get_proximal_map(M::AbstractManifold, mpo::ManifoldProximalMapObjective, λ, p)\nget_proximal_map!(M::AbstractManifold, q, mpo::ManifoldProximalMapObjective, λ, p)\nq = get_proximal_map(M::AbstractManifold, mpo::ManifoldProximalMapObjective, λ, p, i)\nget_proximal_map!(M::AbstractManifold, q, mpo::ManifoldProximalMapObjective, λ, p, i)\n\nevaluate the (ith) proximal map of the ManifoldProximalMapObjective mpo at the point p on M with parameter λ>0.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Hessian-objective","page":"Objective","title":"Hessian objective","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"AbstractManifoldHessianObjective\nManifoldHessianObjective","category":"page"},{"location":"plans/objective/#Manopt.AbstractManifoldHessianObjective","page":"Objective","title":"Manopt.AbstractManifoldHessianObjective","text":"AbstractManifoldHessianObjective{T<:AbstractEvaluationType,TC,TG,TH} <: AbstractManifoldGradientObjective{T,TC,TG}\n\nAn 
abstract type for all objectives that provide a (full) Hessian, where T is an AbstractEvaluationType for the gradient and Hessian functions.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Manopt.ManifoldHessianObjective","page":"Objective","title":"Manopt.ManifoldHessianObjective","text":"ManifoldHessianObjective{T<:AbstractEvaluationType,C,G,H,Pre} <: AbstractManifoldHessianObjective{T,C,G,H}\n\nspecify a problem for Hessian based algorithms.\n\nFields\n\ncost: a function fmathcal Mℝ to minimize\ngradient: the gradient operatornamegradfmathcal M mathcal Tmathcal M of the cost function f\nhessian: the Hessian operatornameHessf(x) mathcal T_x mathcal M mathcal T_x mathcal M of the cost function f\npreconditioner: the symmetric, positive definite preconditioner as an approximation of the inverse of the Hessian of f, a map with the same input variables as the hessian to numerically stabilize iterations when the Hessian is ill-conditioned\n\nDepending on the AbstractEvaluationType T, the gradient and Hessian can have two forms\n\nas a function (M, p) -> X and (M, p, X) -> Y, resp., an AllocatingEvaluation\nas a function (M, X, p) -> X and (M, Y, p, X), resp., an InplaceEvaluation\n\nConstructor\n\nManifoldHessianObjective(f, grad_f, Hess_f, preconditioner = (M, p, X) -> X;\n evaluation=AllocatingEvaluation())\n\nSee also\n\ntruncated_conjugate_gradient_descent, trust_regions\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Access-functions-5","page":"Objective","title":"Access functions","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"get_hessian\nget_preconditioner","category":"page"},{"location":"plans/objective/#Manopt.get_hessian","page":"Objective","title":"Manopt.get_hessian","text":"Y = get_hessian(amp::AbstractManoptProblem{T}, p, X)\nget_hessian!(amp::AbstractManoptProblem{T}, Y, p, X)\n\nevaluate the Hessian of an AbstractManoptProblem amp at p applied to a tangent vector X, 
computing operatornameHessf(p)X, which can also happen in-place of Y.\n\n\n\n\n\nget_hessian(M::AbstractManifold, vgf::VectorHessianFunction, p, X, i)\nget_hessian(M::AbstractManifold, vgf::VectorHessianFunction, p, X, i, range)\nget_hessian!(M::AbstractManifold, Y, vgf::VectorHessianFunction, p, X, i)\nget_hessian!(M::AbstractManifold, Y, vgf::VectorHessianFunction, p, X, i, range)\n\nEvaluate the Hessians of the vector function vgf on the manifold M at p in direction X and the values given in range, specifying the representation of the Hessians.\n\nSince i is assumed to be a linear index, you can provide\n\na single integer\na UnitRange to specify a range to be returned like 1:3\na BitVector specifying a selection\nan AbstractVector{<:Integer} to specify indices\n: to return the vector of all Hessians\n\n\n\n\n\nget_hessian(TpM, trmo::TrustRegionModelObjective, X)\n\nEvaluate the Hessian of the TrustRegionModelObjective\n\noperatornameHess m(X)Y = operatornameHess f(p)Y\n\n\n\n\n\nget_hessian(TpM::TangentSpace, slso::SymmetricLinearSystemObjective, X, V)\nget_hessian!(TpM::TangentSpace, W, slso::SymmetricLinearSystemObjective, X, V)\n\nevaluate the Hessian of\n\nf(X) = frac12 lVert mathcal AX + b rVert_p^2qquad X T_pmathcal M\n\nThat is, operatornameHess f(X)V = mathcal AV. 
This can be computed in-place of W.\n\n\n\n\n\nget_hessian(M::AbstractManifold, emo::EmbeddedManifoldObjective, p, X)\nget_hessian!(M::AbstractManifold, Y, emo::EmbeddedManifoldObjective, p, X)\n\nEvaluate the Hessian of an objective defined in the embedding, that is, embed p and X before calling the Hessian function stored in the EmbeddedManifoldObjective.\n\nThe returned Hessian is then converted to a Riemannian Hessian calling riemannian_Hessian.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Manopt.get_preconditioner","page":"Objective","title":"Manopt.get_preconditioner","text":"get_preconditioner(amp::AbstractManoptProblem, p, X)\n\nevaluate the symmetric, positive definite preconditioner (approximation of the inverse of the Hessian of the cost function f) of an AbstractManoptProblem amp's objective at the point p applied to a tangent vector X.\n\n\n\n\n\nget_preconditioner(M::AbstractManifold, mho::ManifoldHessianObjective, p, X)\n\nevaluate the symmetric, positive definite preconditioner (approximation of the inverse of the Hessian of the cost function f) of a ManifoldHessianObjective mho at the point p applied to a tangent vector X.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"and internally","category":"page"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"get_hessian_function","category":"page"},{"location":"plans/objective/#Manopt.get_hessian_function","page":"Objective","title":"Manopt.get_hessian_function","text":"get_hessian_function(amho::AbstractManifoldHessianObjective{E<:AbstractEvaluationType})\n\nreturn the function to evaluate (just) the Hessian operatornameHess f(p). 
Depending on the AbstractEvaluationType E this is a function\n\n(M, p, X) -> Y for the AllocatingEvaluation case\n(M, Y, p, X) -> Y for the InplaceEvaluation, working in-place of Y.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Primal-dual-based-objectives","page":"Objective","title":"Primal-dual based objectives","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"AbstractPrimalDualManifoldObjective\nPrimalDualManifoldObjective\nPrimalDualManifoldSemismoothNewtonObjective","category":"page"},{"location":"plans/objective/#Manopt.AbstractPrimalDualManifoldObjective","page":"Objective","title":"Manopt.AbstractPrimalDualManifoldObjective","text":"AbstractPrimalDualManifoldObjective{E<:AbstractEvaluationType,C,P} <: AbstractManifoldCostObjective{E,C}\n\nA common abstract super type for objectives that consider primal-dual problems.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Manopt.PrimalDualManifoldObjective","page":"Objective","title":"Manopt.PrimalDualManifoldObjective","text":"PrimalDualManifoldObjective{T<:AbstractEvaluationType} <: AbstractPrimalDualManifoldObjective{T}\n\nDescribes an objective for the linearized or exact Chambolle-Pock algorithm, cf. [BHS+21], [CP11].\n\nFields\n\nAll fields with !! 
can either be in-place or allocating functions, which should be set depending on the evaluation= keyword in the constructor and stored in T <: AbstractEvaluationType.\n\ncost: F + G(Λ()) to evaluate interim cost function values\nlinearized_forward_operator!!: linearized operator for the forward operation in the algorithm DΛ\nlinearized_adjoint_operator!!: the adjoint differential (DΛ)^* mathcal N Tmathcal M\nprox_f!!: the proximal map belonging to f\nprox_G_dual!!: the proximal map belonging to g_n^*\nΛ!!: the forward operator (if given) Λ mathcal M mathcal N\n\nUsually, either the linearized operator DΛ or Λ is required.\n\nConstructor\n\nPrimalDualManifoldObjective(cost, prox_f, prox_G_dual, adjoint_linearized_operator;\n linearized_forward_operator::Union{Function,Missing}=missing,\n Λ::Union{Function,Missing}=missing,\n evaluation::AbstractEvaluationType=AllocatingEvaluation()\n)\n\nThe last optional argument can be used to provide the 4 or 5 functions as allocating or mutating (in-place computation) ones. Note that the first argument is always the manifold under consideration; the mutated argument is the second.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Manopt.PrimalDualManifoldSemismoothNewtonObjective","page":"Objective","title":"Manopt.PrimalDualManifoldSemismoothNewtonObjective","text":"PrimalDualManifoldSemismoothNewtonObjective{E<:AbstractEvaluationType, TC, LO, ALO, PF, DPF, PG, DPG, L} <: AbstractPrimalDualManifoldObjective{E, TC, PF}\n\nDescribes a problem for the Primal-dual Riemannian semismooth Newton algorithm. 
[DL21]\n\nFields\n\ncost: F + G(Λ()) to evaluate interim cost function values\nlinearized_operator: the linearization DΛ() of the operator Λ().\nlinearized_adjoint_operator: the adjoint differential (DΛ)^* mathcal N Tmathcal M\nprox_F: the proximal map belonging to F\ndiff_prox_F: the (Clarke Generalized) differential of the proximal maps of F\nprox_G_dual: the proximal map belonging to G^ast_n\ndiff_prox_dual_G: the (Clarke Generalized) differential of the proximal maps of G^ast_n\nΛ: the exact forward operator. This operator is required if Λ(m)=n does not hold.\n\nConstructor\n\nPrimalDualManifoldSemismoothNewtonObjective(cost, prox_F, prox_G_dual, forward_operator, adjoint_linearized_operator, Λ)\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Access-functions-6","page":"Objective","title":"Access functions","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"adjoint_linearized_operator\nforward_operator\nget_differential_dual_prox\nget_differential_primal_prox\nget_dual_prox\nget_primal_prox\nlinearized_forward_operator","category":"page"},{"location":"plans/objective/#Manopt.adjoint_linearized_operator","page":"Objective","title":"Manopt.adjoint_linearized_operator","text":"X = adjoint_linearized_operator(N::AbstractManifold, apdmo::AbstractPrimalDualManifoldObjective, m, n, Y)\nadjoint_linearized_operator!(N::AbstractManifold, X, apdmo::AbstractPrimalDualManifoldObjective, m, n, Y)\n\nEvaluate the adjoint of the linearized forward operator (DΛ(m))^*Y stored within the AbstractPrimalDualManifoldObjective (in place of X). 
Since YT_nmathcal N, both m and n=Λ(m) are necessary arguments, mainly because the forward operator Λ might be missing in the objective.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Manopt.forward_operator","page":"Objective","title":"Manopt.forward_operator","text":"q = forward_operator(M::AbstractManifold, N::AbstractManifold, apdmo::AbstractPrimalDualManifoldObjective, p)\nforward_operator!(M::AbstractManifold, N::AbstractManifold, q, apdmo::AbstractPrimalDualManifoldObjective, p)\n\nEvaluate the forward operator Λ(x) stored within the TwoManifoldProblem (in place of q).\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Manopt.get_differential_dual_prox","page":"Objective","title":"Manopt.get_differential_dual_prox","text":"η = get_differential_dual_prox(N::AbstractManifold, pdsno::PrimalDualManifoldSemismoothNewtonObjective, n, τ, X, ξ)\nget_differential_dual_prox!(N::AbstractManifold, pdsno::PrimalDualManifoldSemismoothNewtonObjective, η, n, τ, X, ξ)\n\nEvaluate the differential proximal map of G_n^* stored within PrimalDualManifoldSemismoothNewtonObjective\n\nDoperatornameprox_τG_n^*(X)ξ\n\nwhich can also be computed in place of η.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Manopt.get_differential_primal_prox","page":"Objective","title":"Manopt.get_differential_primal_prox","text":"y = get_differential_primal_prox(M::AbstractManifold, pdsno::PrimalDualManifoldSemismoothNewtonObjective, σ, x)\nget_differential_primal_prox!(p::TwoManifoldProblem, y, σ, x)\n\nEvaluate the differential proximal map of F stored within AbstractPrimalDualManifoldObjective\n\nDoperatornameprox_σF(x)X\n\nwhich can also be computed in place of y.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Manopt.get_dual_prox","page":"Objective","title":"Manopt.get_dual_prox","text":"Y = get_dual_prox(N::AbstractManifold, apdmo::AbstractPrimalDualManifoldObjective, n, τ, X)\nget_dual_prox!(N::AbstractManifold, 
apdmo::AbstractPrimalDualManifoldObjective, Y, n, τ, X)\n\nEvaluate the proximal map of g_n^* stored within AbstractPrimalDualManifoldObjective\n\n Y = operatornameprox_τG_n^*(X)\n\nwhich can also be computed in place of Y.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Manopt.get_primal_prox","page":"Objective","title":"Manopt.get_primal_prox","text":"q = get_primal_prox(M::AbstractManifold, apdmo::AbstractPrimalDualManifoldObjective, σ, p)\nget_primal_prox!(M::AbstractManifold, apdmo::AbstractPrimalDualManifoldObjective, q, σ, p)\n\nEvaluate the proximal map of F stored within AbstractPrimalDualManifoldObjective\n\noperatornameprox_σF(x)\n\nwhich can also be computed in place of q.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Manopt.linearized_forward_operator","page":"Objective","title":"Manopt.linearized_forward_operator","text":"Y = linearized_forward_operator(M::AbstractManifold, N::AbstractManifold, apdmo::AbstractPrimalDualManifoldObjective, m, X, n)\nlinearized_forward_operator!(M::AbstractManifold, N::AbstractManifold, Y, apdmo::AbstractPrimalDualManifoldObjective, m, X, n)\n\nEvaluate the linearized operator (differential) DΛ(m)X stored within the AbstractPrimalDualManifoldObjective (in place of Y), where n = Λ(m).\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Constrained-objective","page":"Objective","title":"Constrained objective","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"ConstrainedManifoldObjective","category":"page"},{"location":"plans/objective/#Manopt.ConstrainedManifoldObjective","page":"Objective","title":"Manopt.ConstrainedManifoldObjective","text":"ConstrainedManifoldObjective{T<:AbstractEvaluationType, C<:ConstraintType} <: AbstractManifoldObjective{T}\n\nDescribes the constrained objective\n\nbeginaligned\n operatorname*argmin_p mathcalM f(p)\n textsubject to g_i(p)leq0 quad text for all i=1m\n quad h_j(p)=0 quad text for 
all j=1n\nendaligned\n\nFields\n\nobjective: an AbstractManifoldObjective representing the unconstrained objective, that is, containing the cost f, its gradient, and possibly its Hessian.\nequality_constraints: an AbstractManifoldObjective representing the equality constraints\n\nh mathcal M mathbb R^n also possibly containing its gradient and/or Hessian\n\ninequality_constraints: an AbstractManifoldObjective representing the inequality constraints\n\ng mathcal M mathbb R^m also possibly containing its gradient and/or Hessian\n\nConstructors\n\nConstrainedManifoldObjective(M::AbstractManifold, f, grad_f;\n g=nothing,\n grad_g=nothing,\n h=nothing,\n grad_h=nothing,\n hess_f=nothing,\n hess_g=nothing,\n hess_h=nothing,\n equality_constraints=nothing,\n inequality_constraints=nothing,\n evaluation=AllocatingEvaluation(),\n M = nothing,\n p = isnothing(M) ? nothing : rand(M),\n)\n\nGenerate the constrained objective based on all involved single functions f, grad_f, g, grad_g, h, grad_h, and optionally a Hessian for each of these. With equality_constraints and inequality_constraints you have to provide the dimension of the ranges of h and g, respectively. You can also provide a manifold M and a point p to use one evaluation of the constraints to automatically try to determine these sizes.\n\nConstrainedManifoldObjective(M::AbstractManifold, mho::AbstractManifoldObjective;\n equality_constraints = nothing,\n inequality_constraints = nothing\n)\n\nGenerate the constrained objective either with explicit constraints g and h, and their gradients, or in the form where these are already encapsulated in VectorGradientFunctions.\n\nBoth variants require that at least one of the constraints (and its gradient) is provided. 
If any of the three parts provides a Hessian, the corresponding object, that is a ManifoldHessianObjective for f or a VectorHessianFunction for g or h, respectively, is created.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"It might be beneficial to use the adapted problem to specify different ranges for the gradients of the constraints.","category":"page"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"ConstrainedManoptProblem","category":"page"},{"location":"plans/objective/#Manopt.ConstrainedManoptProblem","page":"Objective","title":"Manopt.ConstrainedManoptProblem","text":"ConstrainedManoptProblem{\n TM <: AbstractManifold,\n O <: AbstractManifoldObjective,\n HR<:Union{AbstractPowerRepresentation,Nothing},\n GR<:Union{AbstractPowerRepresentation,Nothing},\n HHR<:Union{AbstractPowerRepresentation,Nothing},\n GHR<:Union{AbstractPowerRepresentation,Nothing},\n} <: AbstractManoptProblem{TM}\n\nA constrained problem might feature different ranges for the (vectors of) gradients of the equality and inequality constraints.\n\nThe ranges are required in a few places to allocate memory and access elements correctly; they work as follows:\n\nAssume the objective is\n\nbeginaligned\n operatorname*argmin_p mathcalM f(p)\n textsubject to g_i(p)leq0 quad text for all i=1m\n quad h_j(p)=0 quad text for all j=1n\nendaligned\n\nthen the gradients can (classically) be considered as vectors of the component gradients, for example bigl(operatornamegrad g_1(p) operatornamegrad g_2(p) operatornamegrad g_m(p) bigr).\n\nIn another interpretation, this can be considered a point on the tangent space at P = (pp) in mathcal M^m, so in the tangent space to the PowerManifold mathcal M^m. 
In the case where this is a NestedPowerRepresentation, this agrees with the interpretation from before, but on power manifolds, more efficient representations exist.\n\nTo then access the elements, the range has to be specified. That is what this problem is for.\n\nConstructor\n\nConstrainedManoptProblem(\n M::AbstractManifold,\n co::ConstrainedManifoldObjective;\n range=NestedPowerRepresentation(),\n gradient_equality_range=range,\n gradient_inequality_range=range,\n hessian_equality_range=range,\n hessian_inequality_range=range\n)\n\nCreates a constrained Manopt problem specifying an AbstractPowerRepresentation for the gradient_equality_range and the gradient_inequality_range, respectively.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"as well as the helper functions","category":"page"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"AbstractConstrainedFunctor\nAbstractConstrainedSlackFunctor\nLagrangianCost\nLagrangianGradient\nLagrangianHessian","category":"page"},{"location":"plans/objective/#Manopt.AbstractConstrainedFunctor","page":"Objective","title":"Manopt.AbstractConstrainedFunctor","text":"AbstractConstrainedFunctor{T}\n\nA common supertype for functors that model constraint functions.\n\nThis supertype provides access for the fields λ and μ, the dual variables of the constraints, of type T.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Manopt.AbstractConstrainedSlackFunctor","page":"Objective","title":"Manopt.AbstractConstrainedSlackFunctor","text":"AbstractConstrainedSlackFunctor{T,R}\n\nA common supertype for functors that model constraint functions with slack.\n\nThis supertype additionally provides access for the fields\n\nμ::T the dual for the inequality constraints\ns::T the slack parameter, and\nβ::R the barrier parameter\n\nwhich is also of type 
T.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Manopt.LagrangianCost","page":"Objective","title":"Manopt.LagrangianCost","text":"LagrangianCost{CO,T} <: AbstractConstrainedFunctor{T}\n\nImplement the Lagrangian of a ConstrainedManifoldObjective co.\n\nmathcal L(p μ λ)\n= f(p) + sum_i=1^m μ_ig_i(p) + sum_j=1^n λ_jh_j(p)\n\nFields\n\nco::CO, μ::T, λ::T as mentioned, where T represents a vector type.\n\nConstructor\n\nLagrangianCost(co, μ, λ)\n\nCreate a functor for the Lagrangian with fixed dual variables.\n\nExample\n\nWhen you directly want to evaluate the Lagrangian mathcal L you can also call\n\nLagrangianCost(co, μ, λ)(M,p)\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Manopt.LagrangianGradient","page":"Objective","title":"Manopt.LagrangianGradient","text":"LagrangianGradient{CO,T}\n\nThe gradient of the Lagrangian of a ConstrainedManifoldObjective co with respect to the variable p. The formula reads\n\noperatornamegrad_p mathcal L(p μ λ)\n= operatornamegrad f(p) + sum_i=1^m μ_i operatornamegrad g_i(p) + sum_j=1^n λ_j operatornamegrad h_j(p)\n\nFields\n\nco::CO, μ::T, λ::T as mentioned, where T represents a vector type.\n\nConstructor\n\nLagrangianGradient(co, μ, λ)\n\nCreate a functor for the Lagrangian with fixed dual variables.\n\nExample\n\nWhen you directly want to evaluate the gradient of the Lagrangian operatornamegrad_p mathcal L you can also call LagrangianGradient(co, μ, λ)(M,p) or LagrangianGradient(co, μ, λ)(M,X,p) for the in-place variant.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Manopt.LagrangianHessian","page":"Objective","title":"Manopt.LagrangianHessian","text":"LagrangianHessian{CO, V, T}\n\nThe Hessian of the Lagrangian of a ConstrainedManifoldObjective co with respect to the variable p. 
The formula reads\n\noperatornameHess_p mathcal L(p μ λ)X\n= operatornameHess f(p)X + sum_i=1^m μ_i operatornameHess g_i(p)X + sum_j=1^n λ_j operatornameHess h_j(p)X\n\nFields\n\nco::CO, μ::T, λ::T as mentioned, where T represents a vector type.\n\nConstructor\n\nLagrangianHessian(co, μ, λ)\n\nCreate a functor for the Lagrangian with fixed dual variables.\n\nExample\n\nWhen you directly want to evaluate the Hessian of the Lagrangian operatornameHess_p mathcal L you can also call LagrangianHessian(co, μ, λ)(M, p, X) or LagrangianHessian(co, μ, λ)(M, Y, p, X) for the in-place variant.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Access-functions-7","page":"Objective","title":"Access functions","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"equality_constraints_length\ninequality_constraints_length\nget_unconstrained_objective\nget_equality_constraint\nget_inequality_constraint\nget_grad_equality_constraint\nget_grad_inequality_constraint\nget_hess_equality_constraint\nget_hess_inequality_constraint\nis_feasible","category":"page"},{"location":"plans/objective/#Manopt.equality_constraints_length","page":"Objective","title":"Manopt.equality_constraints_length","text":"equality_constraints_length(co::ConstrainedManifoldObjective)\n\nReturn the number of equality constraints of a ConstrainedManifoldObjective. This acts transparently through AbstractDecoratedManifoldObjectives.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Manopt.inequality_constraints_length","page":"Objective","title":"Manopt.inequality_constraints_length","text":"inequality_constraints_length(cmo::ConstrainedManifoldObjective)\n\nReturn the number of inequality constraints of a ConstrainedManifoldObjective cmo. 
This acts transparently through AbstractDecoratedManifoldObjectives.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Manopt.get_unconstrained_objective","page":"Objective","title":"Manopt.get_unconstrained_objective","text":"get_unconstrained_objective(co::ConstrainedManifoldObjective)\n\nReturn the internally stored unconstrained AbstractManifoldObjective within the ConstrainedManifoldObjective.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Manopt.get_equality_constraint","page":"Objective","title":"Manopt.get_equality_constraint","text":"get_equality_constraint(amp::AbstractManoptProblem, p, j=:)\nget_equality_constraint(M::AbstractManifold, objective, p, j=:)\n\nEvaluate equality constraints of a ConstrainedManifoldObjective objective at point p and indices j (by default :, which corresponds to all indices).\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Manopt.get_inequality_constraint","page":"Objective","title":"Manopt.get_inequality_constraint","text":"get_inequality_constraint(amp::AbstractManoptProblem, p, j=:)\nget_inequality_constraint(M::AbstractManifold, co::ConstrainedManifoldObjective, p, j=:, range=NestedPowerRepresentation())\n\nEvaluate inequality constraints of a ConstrainedManifoldObjective objective at point p and indices j (by default :, which corresponds to all indices).\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Manopt.get_grad_equality_constraint","page":"Objective","title":"Manopt.get_grad_equality_constraint","text":"get_grad_equality_constraint(amp::AbstractManoptProblem, p, j)\nget_grad_equality_constraint(M::AbstractManifold, co::ConstrainedManifoldObjective, p, j, range=NestedPowerRepresentation())\nget_grad_equality_constraint!(amp::AbstractManoptProblem, X, p, j)\nget_grad_equality_constraint!(M::AbstractManifold, X, co::ConstrainedManifoldObjective, p, j, range=NestedPowerRepresentation())\n\nEvaluate the gradient or gradients of the equality constraint 
(operatornamegrad h(p))_j or operatornamegrad h_j(p).\n\nSee also the ConstrainedManoptProblem to specify the range of the gradient.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Manopt.get_grad_inequality_constraint","page":"Objective","title":"Manopt.get_grad_inequality_constraint","text":"get_grad_inequality_constraint(amp::AbstractManoptProblem, p, j=:)\nget_grad_inequality_constraint(M::AbstractManifold, co::ConstrainedManifoldObjective, p, j=:, range=NestedPowerRepresentation())\nget_grad_inequality_constraint!(amp::AbstractManoptProblem, X, p, j=:)\nget_grad_inequality_constraint!(M::AbstractManifold, X, co::ConstrainedManifoldObjective, p, j=:, range=NestedPowerRepresentation())\n\nEvaluate the gradient or gradients of the inequality constraint (operatornamegrad g(p))_j or operatornamegrad g_j(p).\n\nSee also the ConstrainedManoptProblem to specify the range of the gradient.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Manopt.get_hess_equality_constraint","page":"Objective","title":"Manopt.get_hess_equality_constraint","text":"get_hess_equality_constraint(amp::AbstractManoptProblem, p, j=:)\nget_hess_equality_constraint(M::AbstractManifold, co::ConstrainedManifoldObjective, p, j, range=NestedPowerRepresentation())\nget_hess_equality_constraint!(amp::AbstractManoptProblem, X, p, j=:)\nget_hess_equality_constraint!(M::AbstractManifold, X, co::ConstrainedManifoldObjective, p, j, range=NestedPowerRepresentation())\n\nEvaluate the Hessian or Hessians of the equality constraint (operatornameHess h(p))_j or operatornameHess h_j(p).\n\nSee also the ConstrainedManoptProblem to specify the range of the Hessian.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Manopt.get_hess_inequality_constraint","page":"Objective","title":"Manopt.get_hess_inequality_constraint","text":"get_hess_inequality_constraint(amp::AbstractManoptProblem, p, X, j=:)\nget_hess_inequality_constraint(M::AbstractManifold, 
co::ConstrainedManifoldObjective, p, j=:, range=NestedPowerRepresentation())\nget_hess_inequality_constraint!(amp::AbstractManoptProblem, Y, p, j=:)\nget_hess_inequality_constraint!(M::AbstractManifold, Y, co::ConstrainedManifoldObjective, p, X, j=:, range=NestedPowerRepresentation())\n\nEvaluate the Hessian or Hessians of the inequality constraint (operatornameHess g(p)X)_j or operatornameHess g_j(p)X.\n\nSee also the ConstrainedManoptProblem to specify the range of the Hessian.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Manopt.is_feasible","page":"Objective","title":"Manopt.is_feasible","text":"is_feasible(M::AbstractManifold, cmo::ConstrainedManifoldObjective, p, kwargs...)\n\nEvaluate whether a point p on M is feasible with respect to the ConstrainedManifoldObjective cmo. That is, for the provided inequality constraints g mathcal M ℝ^m and equality constraints h mathcal M to ℝ^n from within cmo, the point p mathcal M is feasible if\n\ng_i(p) 0 text for all i=1mquadtext and quad h_j(p) = 0 text for all j=1n\n\nKeyword arguments\n\ncheck_point::Bool=true: whether to also verify that pmathcal M holds, using is_point\nerror::Symbol=:none: if the point is not feasible, this symbol determines how to report the error.\n:error: throws an error\n:info: displays the error message as an @info\n:none: (default) the function just returns true/false\n:warn: displays the error message as a @warning.\n\nThe keyword error= and all other kwargs... 
are passed on to is_point if the point is verified (see check_point).\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Internal-functions","page":"Objective","title":"Internal functions","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"Manopt.get_feasibility_status","category":"page"},{"location":"plans/objective/#Manopt.get_feasibility_status","page":"Objective","title":"Manopt.get_feasibility_status","text":"get_feasibility_status(\n M::AbstractManifold,\n cmo::ConstrainedManifoldObjective,\n g = get_inequality_constraints(M, cmo, p),\n h = get_equality_constraints(M, cmo, p),\n)\n\nGenerate a message about the feasibility of p with respect to the ConstrainedManifoldObjective. You can also provide the evaluated vectors for the values of g and h as keyword arguments, in case you had them evaluated before.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Vectorial-objectives","page":"Objective","title":"Vectorial objectives","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"Manopt.AbstractVectorFunction\nManopt.AbstractVectorGradientFunction\nManopt.VectorGradientFunction\nManopt.VectorHessianFunction","category":"page"},{"location":"plans/objective/#Manopt.AbstractVectorFunction","page":"Objective","title":"Manopt.AbstractVectorFunction","text":"AbstractVectorFunction{E, FT} <: Function\n\nRepresent an abstract vectorial function fmathcal M ℝ^n with an AbstractEvaluationType E and an AbstractVectorialType to specify the format f is implemented as.\n\nRepresentations of f\n\nThere are three different representations of f, which might be beneficial in one or the other situation:\n\nthe FunctionVectorialType,\nthe ComponentVectorialType,\nthe CoordinateVectorialType with respect to a specific basis of the tangent space.\n\nFor the ComponentVectorialType, imagine that f 
could also be written using its component functions,\n\nf(p) = bigl( f_1(p) f_2(p) ldots f_n(p) bigr)^mathrmT\n\nIn this representation f is given as a vector [f1(M,p), f2(M,p), ..., fn(M,p)] of its component functions. An advantage is that the single components can be evaluated, and from this representation one can even directly read off the number n. A disadvantage might be that one has to implement a lot of individual (component) functions.\n\nFor the FunctionVectorialType, f is implemented as a single function f(M, p) that returns an AbstractArray. An advantage here is that this is a single function. A disadvantage might be that even computing a single component requires evaluating all of f, which can be expensive.\n\nFor the ComponentVectorialType of f, each of the component functions is a (classical) objective.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Manopt.AbstractVectorGradientFunction","page":"Objective","title":"Manopt.AbstractVectorGradientFunction","text":"AbstractVectorGradientFunction{E, FT, JT, F, J, I} <: AbstractManifoldObjective{E}\n\nRepresent an abstract vectorial function fmathcal M ℝ^n that provides a (component-wise) gradient. The AbstractEvaluationType E indicates the evaluation type, and the AbstractVectorialTypes FT and JT the formats in which the function and the gradient are provided, see AbstractVectorFunction for an explanation.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Manopt.VectorGradientFunction","page":"Objective","title":"Manopt.VectorGradientFunction","text":"VectorGradientFunction{E, FT, JT, F, J, I} <: AbstractVectorGradientFunction{E, FT, JT}\n\nRepresent a function fmathcal M ℝ^n including its first derivative, either as a vector of gradients or as a Jacobian.\n\nEach component hence has a gradient operatornamegrad f_i(p) T_pmathcal M. 
Putting these gradients into a vector the same way as the functions yields a ComponentVectorialType\n\noperatornamegrad f(p) = Bigl( operatornamegrad f_1(p) operatornamegrad f_2(p) operatornamegrad f_n(p) Bigr)^mathrmT\n (T_pmathcal M)^n\n\nAn advantage here is that, again, the single components can be evaluated individually.\n\nFields\n\nvalue!!: the cost function f, which can take different formats\ncost_type: indicating / storing data for the type of f\njacobian!!: the Jacobian of f\njacobian_type: indicating / storing data for the type of J_f\nparameters: the number n, that is, the size of the vector f returns.\n\nConstructor\n\nVectorGradientFunction(f, Jf, range_dimension;\n evaluation::AbstractEvaluationType=AllocatingEvaluation(),\n function_type::AbstractVectorialType=FunctionVectorialType(),\n jacobian_type::AbstractVectorialType=FunctionVectorialType(),\n)\n\nCreate a VectorGradientFunction of f and its Jacobian (vector of gradients) Jf, where f maps into the Euclidean space of dimension range_dimension. Their types are specified by the function_type and jacobian_type, respectively. 
The Jacobian can further be given as an allocating variant or an in-place variant, specified by the evaluation= keyword.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Manopt.VectorHessianFunction","page":"Objective","title":"Manopt.VectorHessianFunction","text":"VectorHessianFunction{E, FT, JT, HT, F, J, H, I} <: AbstractVectorGradientFunction{E, FT, JT}\n\nRepresent a function fmathcal M ℝ^n including its first derivative, either as a vector of gradients or as a Jacobian, and the Hessian, as a vector of Hessians of the component functions.\n\nBoth the Jacobian and the Hessian can map into either a sequence of tangent spaces or a single tangent space of the power manifold of length n.\n\nFields\n\nvalue!!: the cost function f, which can take different formats\ncost_type: indicating / storing data for the type of f\njacobian!!: the Jacobian of f\njacobian_type: indicating / storing data for the type of J_f\nhessians!!: the Hessians of f (in a component-wise sense)\nhessian_type: indicating / storing data for the type of H_f\nparameters: the number n, that is the size of the vector f returns.\n\nConstructor\n\nVectorHessianFunction(f, Jf, Hess_f, range_dimension;\n evaluation::AbstractEvaluationType=AllocatingEvaluation(),\n function_type::AbstractVectorialType=FunctionVectorialType(),\n jacobian_type::AbstractVectorialType=FunctionVectorialType(),\n hessian_type::AbstractVectorialType=FunctionVectorialType(),\n)\n\nCreate a VectorHessianFunction of f, its Jacobian (vector of gradients) Jf, and its (vector of) Hessians, where f maps into the Euclidean space of dimension range_dimension. Their types are specified by function_type, jacobian_type, and hessian_type, respectively. 
The Jacobian and Hessian can further be given as an allocating variant or an in-place variant, specified by the evaluation= keyword.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"Manopt.AbstractVectorialType\nManopt.CoordinateVectorialType\nManopt.ComponentVectorialType\nManopt.FunctionVectorialType","category":"page"},{"location":"plans/objective/#Manopt.AbstractVectorialType","page":"Objective","title":"Manopt.AbstractVectorialType","text":"AbstractVectorialType\n\nAn abstract type for different representations of a vectorial function f mathcal M mathbb R^m and its (component-wise) gradient/Jacobian\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Manopt.CoordinateVectorialType","page":"Objective","title":"Manopt.CoordinateVectorialType","text":"CoordinateVectorialType{B<:AbstractBasis} <: AbstractVectorialType\n\nA type to indicate that the gradient of the constraints is implemented as a Jacobian matrix with respect to a certain basis, that is if the constraints are given as g mathcal M ℝ^m with respect to a basis mathcal B of T_pmathcal M, at p mathcal M. This can be written as J_g(p) = (c_1^mathrmTc_m^mathrmT)^mathrmT in ℝ^md, that is, every row c_i of this matrix is a set of coefficients such that get_coefficients(M, p, c, B) is the tangent vector operatornamegrad g_i(p).\n\nFields\n\nbasis an AbstractBasis to indicate the default representation.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Manopt.ComponentVectorialType","page":"Objective","title":"Manopt.ComponentVectorialType","text":"ComponentVectorialType <: AbstractVectorialType\n\nA type to indicate that constraints are implemented as component functions, for example g_i(p) ℝ^m or operatornamegrad g_i(p) T_pmathcal M, 
i=1m.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Manopt.FunctionVectorialType","page":"Objective","title":"Manopt.FunctionVectorialType","text":"FunctionVectorialType <: AbstractVectorialType\n\nA type to indicate that constraints are implemented as one whole function, for example g(p) ℝ^m or operatornamegrad g(p) (T_pmathcal M)^m.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Access-functions-8","page":"Objective","title":"Access functions","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"Manopt.get_value\nManopt.get_value_function\nBase.length(::VectorGradientFunction)","category":"page"},{"location":"plans/objective/#Manopt.get_value","page":"Objective","title":"Manopt.get_value","text":"get_value(M::AbstractManifold, vgf::AbstractVectorFunction, p[, i=:])\n\nEvaluate the vector function VectorGradientFunction vgf at p. The range can be used to specify a potential range, but is currently only present for consistency.\n\nThe index i can be a linear index; you can provide\n\na single integer\na UnitRange to specify a range to be returned, like 1:3\na BitVector specifying a selection\nan AbstractVector{<:Integer} to specify indices\n: to return the vector of all values, which is also the default\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Manopt.get_value_function","page":"Objective","title":"Manopt.get_value_function","text":"get_value_function(vgf::VectorGradientFunction, recursive=false)\n\nreturn the internally stored function computing get_value.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Base.length-Tuple{VectorGradientFunction}","page":"Objective","title":"Base.length","text":"length(vgf::AbstractVectorFunction)\n\nReturn the length of the vector the function f mathcal M ℝ^n maps into, that is the number 
n.\n\n\n\n\n\n","category":"method"},{"location":"plans/objective/#Internal-functions-2","page":"Objective","title":"Internal functions","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"Manopt._to_iterable_indices","category":"page"},{"location":"plans/objective/#Manopt._to_iterable_indices","page":"Objective","title":"Manopt._to_iterable_indices","text":"_to_iterable_indices(A::AbstractVector, i)\n\nConvert index i (integer, colon, vector of indices, etc.) for array A into an iterable structure of indices.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Subproblem-objective","page":"Objective","title":"Subproblem objective","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"This objective can be used when the objective of a sub problem solver still needs access to the (outer/main) objective.","category":"page"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"AbstractManifoldSubObjective","category":"page"},{"location":"plans/objective/#Manopt.AbstractManifoldSubObjective","page":"Objective","title":"Manopt.AbstractManifoldSubObjective","text":"AbstractManifoldSubObjective{O<:AbstractManifoldObjective} <: AbstractManifoldObjective\n\nAn abstract type for objectives of sub problems within a solver that still store the original objective internally to generate generic objectives for sub solvers.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Access-functions-9","page":"Objective","title":"Access 
functions","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"Manopt.get_objective_cost\nManopt.get_objective_gradient\nManopt.get_objective_hessian\nManopt.get_objective_preconditioner","category":"page"},{"location":"plans/objective/#Manopt.get_objective_cost","page":"Objective","title":"Manopt.get_objective_cost","text":"get_objective_cost(M, amso::AbstractManifoldSubObjective, p)\n\nEvaluate the cost of the (original) objective stored within the sub objective.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Manopt.get_objective_gradient","page":"Objective","title":"Manopt.get_objective_gradient","text":"X = get_objective_gradient(M, amso::AbstractManifoldSubObjective, p)\nget_objective_gradient!(M, X, amso::AbstractManifoldSubObjective, p)\n\nEvaluate the gradient of the (original) objective stored within the sub objective amso.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Manopt.get_objective_hessian","page":"Objective","title":"Manopt.get_objective_hessian","text":"Y = get_objective_Hessian(M, amso::AbstractManifoldSubObjective, p, X)\nget_objective_Hessian!(M, Y, amso::AbstractManifoldSubObjective, p, X)\n\nEvaluate the Hessian of the (original) objective stored within the sub objective amso.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Manopt.get_objective_preconditioner","page":"Objective","title":"Manopt.get_objective_preconditioner","text":"Y = get_objective_preconditioner(M, amso::AbstractManifoldSubObjective, p, X)\nget_objective_preconditioner!(M, Y, amso::AbstractManifoldSubObjective, p, X)\n\nEvaluate the preconditioner of the (original) objective stored within the sub objective amso.\n\n\n\n\n\n","category":"function"},{"location":"plans/stopping_criteria/#sec-stopping-criteria","page":"Stopping Criteria","title":"Stopping criteria","text":"","category":"section"},{"location":"plans/stopping_criteria/","page":"Stopping 
Criteria","title":"Stopping Criteria","text":"Stopping criteria are implemented as functors and inherit from the base type","category":"page"},{"location":"plans/stopping_criteria/","page":"Stopping Criteria","title":"Stopping Criteria","text":"StoppingCriterion","category":"page"},{"location":"plans/stopping_criteria/#Manopt.StoppingCriterion","page":"Stopping Criteria","title":"Manopt.StoppingCriterion","text":"StoppingCriterion\n\nAn abstract type for the functors representing stopping criteria, so they are callable structures. The naming scheme follows functions, see for example StopAfterIteration.\n\nEvery StoppingCriterion has to provide a constructor and its function has to have the interface (p,o,i), where an AbstractManoptProblem, an AbstractManoptSolverState, and the current number of iterations are the arguments, and which returns a boolean indicating whether to stop or not.\n\nBy default each StoppingCriterion should provide a field reason to provide details when a criterion is met (and that is empty otherwise).\n\n\n\n\n\n","category":"type"},{"location":"plans/stopping_criteria/","page":"Stopping Criteria","title":"Stopping Criteria","text":"They can also be grouped, which is summarized in the type of a set of criteria","category":"page"},{"location":"plans/stopping_criteria/","page":"Stopping Criteria","title":"Stopping Criteria","text":"StoppingCriterionSet","category":"page"},{"location":"plans/stopping_criteria/#Manopt.StoppingCriterionSet","page":"Stopping Criteria","title":"Manopt.StoppingCriterionSet","text":"StoppingCriterionGroup <: StoppingCriterion\n\nAn abstract type for a stopping criterion that itself consists of a set of stopping criteria. In total it acts as a stopping criterion itself. 
Examples are StopWhenAny and StopWhenAll that can be used to combine stopping criteria.\n\n\n\n\n\n","category":"type"},{"location":"plans/stopping_criteria/","page":"Stopping Criteria","title":"Stopping Criteria","text":"The stopping criteria s might have certain internal values/fields it uses to verify against. This is done when calling them as a function s(amp::AbstractManoptProblem, ams::AbstractManoptSolverState), where the AbstractManoptProblem and the AbstractManoptSolverState together represent the current state of the solver. The functor returns either false when the stopping criterion is not fulfilled or true otherwise. One field all criteria should have is s.at_iteration, which indicates at which iteration the stopping criterion (last) indicated to stop. 0 refers to an indication before starting the algorithm, while any negative number means the stopping criterion is not (yet) fulfilled. To access a string giving the reason for stopping, see get_reason.","category":"page"},{"location":"plans/stopping_criteria/#Generic-stopping-criteria","page":"Stopping Criteria","title":"Generic stopping criteria","text":"","category":"section"},{"location":"plans/stopping_criteria/","page":"Stopping Criteria","title":"Stopping Criteria","text":"The following generic stopping criteria are available. 
Some require that, for example, the corresponding AbstractManoptSolverState have a field gradient when the criterion should access that.","category":"page"},{"location":"plans/stopping_criteria/","page":"Stopping Criteria","title":"Stopping Criteria","text":"Further stopping criteria might be available for individual solvers.","category":"page"},{"location":"plans/stopping_criteria/","page":"Stopping Criteria","title":"Stopping Criteria","text":"Modules = [Manopt]\nPages = [\"plans/stopping_criterion.jl\"]\nOrder = [:type]\nFilter = t -> t != StoppingCriterion && t != StoppingCriterionSet","category":"page"},{"location":"plans/stopping_criteria/#Manopt.StopAfter","page":"Stopping Criteria","title":"Manopt.StopAfter","text":"StopAfter <: StoppingCriterion\n\nstore a threshold when to stop looking at the complete runtime. It uses time_ns() to measure the time and you provide a Period as a time limit, for example Minute(15).\n\nFields\n\nthreshold stores the Period after which to stop\nstart stores the starting time when the algorithm is started, that is a call with i=0.\ntime stores the elapsed time\nat_iteration indicates at which iteration (including i=0) the stopping criterion was fulfilled and is -1 while it is not fulfilled.\n\nConstructor\n\nStopAfter(t)\n\ninitialize the stopping criterion to a Period t to stop after.\n\n\n\n\n\n","category":"type"},{"location":"plans/stopping_criteria/#Manopt.StopAfterIteration","page":"Stopping Criteria","title":"Manopt.StopAfterIteration","text":"StopAfterIteration <: StoppingCriterion\n\nA functor for a stopping criterion to stop after a maximal number of iterations.\n\nFields\n\nmax_iterations stores the maximal iteration number where to stop at\nat_iteration indicates at which iteration (including i=0) the stopping criterion was fulfilled and is -1 while it is not fulfilled.\n\nConstructor\n\nStopAfterIteration(maxIter)\n\ninitialize the functor to indicate to stop after maxIter 
iterations.\n\n\n\n\n\n","category":"type"},{"location":"plans/stopping_criteria/#Manopt.StopWhenAll","page":"Stopping Criteria","title":"Manopt.StopWhenAll","text":"StopWhenAll <: StoppingCriterionSet\n\nstore an array of StoppingCriterion elements and indicates to stop when all indicate to stop. The reason is given by the concatenation of all reasons.\n\nConstructor\n\nStopWhenAll(c::NTuple{N,StoppingCriterion} where N)\nStopWhenAll(c::StoppingCriterion,...)\n\n\n\n\n\n","category":"type"},{"location":"plans/stopping_criteria/#Manopt.StopWhenAny","page":"Stopping Criteria","title":"Manopt.StopWhenAny","text":"StopWhenAny <: StoppingCriterionSet\n\nstore an array of StoppingCriterion elements and indicates to stop when any single one indicates to stop. The reason is given by the concatenation of all reasons (assuming that all non-indicating return \"\").\n\nConstructor\n\nStopWhenAny(c::NTuple{N,StoppingCriterion} where N)\nStopWhenAny(c::StoppingCriterion...)\n\n\n\n\n\n","category":"type"},{"location":"plans/stopping_criteria/#Manopt.StopWhenChangeLess","page":"Stopping Criteria","title":"Manopt.StopWhenChangeLess","text":"StopWhenChangeLess <: StoppingCriterion\n\nstores a threshold when to stop looking at the norm of the change of the optimization variable from within an AbstractManoptSolverState s. That is, by accessing get_iterate(s) and comparing successive iterates. For the storage a StoreStateAction is used.\n\nFields\n\nat_iteration::Int: an integer indicating at which iteration the stopping criterion last indicated to stop, which might also be before the solver started (0). 
Any negative value indicates that this was not yet the case;\nlast_change::Real: the last change recorded in this stopping criterion\ninverse_retraction_method::AbstractInverseRetractionMethod: an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses. It can be passed to approximate the distance by this inverse retraction and a norm on the tangent space, which can be used if neither the distance nor the logarithmic map are available on M.\nstorage::StoreStateAction: a storage to access the previous iterate\nthreshold: the threshold for the change to check (run under to stop)\nouter_norm: if M is a manifold with components, this can be used to specify the norm that is used to compute the overall distance based on the element-wise distance. You can deactivate this by setting this value to missing.\n\nExample\n\nOn an AbstractPowerManifold like mathcal M = mathcal N^n any point p = (p_1p_n) mathcal M is a vector of length n of points p_i mathcal N. Then, denoting the outer_norm by r, the distance of two points pq mathcal M is given by\n\n\\mathrm{d}(p,q) = \\Bigl( \\sum_{k=1}^n \\mathrm{d}(p_k,q_k)^r \\Bigr)^{\\frac{1}{r}},\n\nwhere the sum turns into a maximum for the case r=∞. 
The outer_norm has no effect on manifolds that do not consist of components.\n\nConstructor\n\nStopWhenChangeLess(\n M::AbstractManifold,\n threshold::Float64;\n storage::StoreStateAction=StoreStateAction([:Iterate]),\n inverse_retraction_method::IRT=default_inverse_retraction_method(M)\n outer_norm::Union{Missing,Real}=missing\n)\n\ninitialize the stopping criterion to a threshold ε using the StoreStateAction a, which is initialized to just store :Iterate by default. You can also provide an inverse_retraction_method for the distance or a manifold to use its default inverse retraction.\n\n\n\n\n\n","category":"type"},{"location":"plans/stopping_criteria/#Manopt.StopWhenCostLess","page":"Stopping Criteria","title":"Manopt.StopWhenCostLess","text":"StopWhenCostLess <: StoppingCriterion\n\nstore a threshold when to stop looking at the cost function of the optimization problem from within an AbstractManoptProblem, i.e. get_cost(p,get_iterate(o)).\n\nConstructor\n\nStopWhenCostLess(ε)\n\ninitialize the stopping criterion to a threshold ε.\n\n\n\n\n\n","category":"type"},{"location":"plans/stopping_criteria/#Manopt.StopWhenCostNaN","page":"Stopping Criteria","title":"Manopt.StopWhenCostNaN","text":"StopWhenCostNaN <: StoppingCriterion\n\nstop when the cost function of the optimization problem from within an AbstractManoptProblem, i.e. get_cost(p,get_iterate(o)), is NaN.\n\nConstructor\n\nStopWhenCostNaN()\n\ninitialize the stopping criterion to stop when the cost is NaN.\n\n\n\n\n\n","category":"type"},{"location":"plans/stopping_criteria/#Manopt.StopWhenEntryChangeLess","page":"Stopping Criteria","title":"Manopt.StopWhenEntryChangeLess","text":"StopWhenEntryChangeLess\n\nEvaluate whether a certain field's change is less than a certain threshold\n\nFields\n\nfield: a symbol addressing the corresponding field in a certain subtype of AbstractManoptSolverState to track\ndistance: a function (problem, state, v1, v2) -> R that computes 
the distance between two possible values of the field\nstorage: a StoreStateAction to store the previous value of the field\nthreshold: the threshold to indicate to stop when the distance is below this value\n\nInternal fields\n\nat_iteration: store the iteration at which the stop indication happened\n\nstores a threshold when to stop looking at the norm of the change of the optimization variable from within an AbstractManoptSolverState, i.e. get_iterate(o). For the storage a StoreStateAction is used.\n\nConstructor\n\nStopWhenEntryChangeLess(\n field::Symbol\n distance,\n threshold;\n storage::StoreStateAction=StoreStateAction([field]),\n)\n\n\n\n\n\n","category":"type"},{"location":"plans/stopping_criteria/#Manopt.StopWhenGradientChangeLess","page":"Stopping Criteria","title":"Manopt.StopWhenGradientChangeLess","text":"StopWhenGradientChangeLess <: StoppingCriterion\n\nA stopping criterion based on the change of the gradient.\n\nFields\n\nat_iteration::Int: an integer indicating at which iteration the stopping criterion last indicated to stop, which might also be before the solver started (0). Any negative value indicates that this was not yet the case;\nlast_change::Real: the last change recorded in this stopping criterion\nvector_transport_method::AbstractVectorTransportMethod: a vector transport mathcal T_ to use, see the section on vector transports\nstorage::StoreStateAction: a storage to access the previous iterate\nthreshold: the threshold for the change to check (run under to stop)\nouter_norm: if M is a manifold with components, this can be used to specify the norm that is used to compute the overall distance based on the element-wise distance. You can deactivate this by setting this value to missing.\n\nExample\n\nOn an AbstractPowerManifold like mathcal M = mathcal N^n any point p = (p_1p_n) mathcal M is a vector of length n of points p_i mathcal N. 
Then, denoting the outer_norm by r, the norm of the difference of tangent vectors, like the last and current gradients X and Y, is given by\n\n\\lVert X-Y \\rVert_{p} = \\Bigl( \\sum_{k=1}^n \\lVert X_k-Y_k \\rVert_{p_k}^r \\Bigr)^{\\frac{1}{r}},\n\nwhere the sum turns into a maximum for the case r=∞. The outer_norm has no effect on manifolds that do not consist of components.\n\nConstructor\n\nStopWhenGradientChangeLess(\n M::AbstractManifold,\n ε::Float64;\n storage::StoreStateAction=StoreStateAction([:Iterate]),\n vector_transport_method::IRT=default_vector_transport_method(M),\n outer_norm::N=missing\n)\n\nCreate a stopping criterion with threshold ε for the change of the gradient, that is, this criterion indicates to stop when the norm of the change of get_gradient is less than ε, where vector_transport_method denotes the vector transport mathcal T used.\n\n\n\n\n\n","category":"type"},{"location":"plans/stopping_criteria/#Manopt.StopWhenGradientNormLess","page":"Stopping Criteria","title":"Manopt.StopWhenGradientNormLess","text":"StopWhenGradientNormLess <: StoppingCriterion\n\nA stopping criterion based on the current gradient norm.\n\nFields\n\nnorm: a function (M::AbstractManifold, p, X) -> ℝ that computes a norm of the gradient X in the tangent space at p on M. For manifolds with components provide (M::AbstractManifold, p, X, r) -> ℝ.\nthreshold: the threshold to indicate to stop when the distance is below this value\nouter_norm: if M is a manifold with components, this can be used to specify the norm that is used to compute the overall distance based on the element-wise distance.\n\nInternal fields\n\nlast_change store the last change\nat_iteration store the iteration at which the stop indication happened\n\nExample\n\nOn an AbstractPowerManifold like mathcal M = mathcal N^n any point p = (p_1p_n) mathcal M is a vector of length n of points p_i mathcal N. 
Then, denoting the outer_norm by r, the norm of a tangent vector, like the current gradient X, is given by\n\n\\lVert X \\rVert_{p} = \\Bigl( \\sum_{k=1}^n \\lVert X_k \\rVert_{p_k}^r \\Bigr)^{\\frac{1}{r}},\n\nwhere the sum turns into a maximum for the case r=∞. The outer_norm has no effect on manifolds that do not consist of components.\n\nIf you pass in your individual norm, this can be deactivated on such manifolds by passing missing to outer_norm.\n\nConstructor\n\nStopWhenGradientNormLess(ε; norm=ManifoldsBase.norm, outer_norm=missing)\n\nCreate a stopping criterion with threshold ε for the gradient, that is, this criterion indicates to stop when get_gradient returns a gradient vector of norm less than ε, where the norm to use can be specified in the norm= keyword.\n\n\n\n\n\n","category":"type"},{"location":"plans/stopping_criteria/#Manopt.StopWhenIterateNaN","page":"Stopping Criteria","title":"Manopt.StopWhenIterateNaN","text":"StopWhenIterateNaN <: StoppingCriterion\n\nstop when the iterate of the optimization problem from within an AbstractManoptProblem, i.e. get_iterate(o), contains a NaN.\n\nConstructor\n\nStopWhenIterateNaN()\n\ninitialize the stopping criterion to stop when the iterate contains a NaN.\n\n\n\n\n\n","category":"type"},{"location":"plans/stopping_criteria/#Manopt.StopWhenSmallerOrEqual","page":"Stopping Criteria","title":"Manopt.StopWhenSmallerOrEqual","text":"StopWhenSmallerOrEqual <: StoppingCriterion\n\nA functor for a stopping criterion where the algorithm is stopped when a variable is smaller than or equal to its minimum value.\n\nFields\n\nvalue stores the variable which has to fall under a threshold for the algorithm to stop\nminValue stores the threshold where, if the value is smaller or equal to this threshold, the algorithm stops\n\nConstructor\n\nStopWhenSmallerOrEqual(value, minValue)\n\ninitialize the functor to indicate to stop after value is smaller than or equal to 
minValue.\n\n\n\n\n\n","category":"type"},{"location":"plans/stopping_criteria/#Manopt.StopWhenStepsizeLess","page":"Stopping Criteria","title":"Manopt.StopWhenStepsizeLess","text":"StopWhenStepsizeLess <: StoppingCriterion\n\nstores a threshold when to stop looking at the last step size determined or found during the last iteration from within an AbstractManoptSolverState.\n\nConstructor\n\nStopWhenStepsizeLess(ε)\n\ninitialize the stopping criterion to a threshold ε.\n\n\n\n\n\n","category":"type"},{"location":"plans/stopping_criteria/#Manopt.StopWhenSubgradientNormLess","page":"Stopping Criteria","title":"Manopt.StopWhenSubgradientNormLess","text":"StopWhenSubgradientNormLess <: StoppingCriterion\n\nA stopping criterion based on the current subgradient norm.\n\nConstructor\n\nStopWhenSubgradientNormLess(ε::Float64)\n\nCreate a stopping criterion with threshold ε for the subgradient, that is, this criterion indicates to stop when get_subgradient returns a subgradient vector of norm less than ε.\n\n\n\n\n\n","category":"type"},{"location":"plans/stopping_criteria/#Functions-for-stopping-criteria","page":"Stopping Criteria","title":"Functions for stopping criteria","text":"","category":"section"},{"location":"plans/stopping_criteria/","page":"Stopping Criteria","title":"Stopping Criteria","text":"There are a few functions to update, combine, and modify stopping criteria, especially to update internal values even for stopping criteria already being used within an AbstractManoptSolverState structure.","category":"page"},{"location":"plans/stopping_criteria/","page":"Stopping Criteria","title":"Stopping Criteria","text":"Modules = [Manopt]\nPages = [\"plans/stopping_criterion.jl\"]\nOrder = [:function]","category":"page"},{"location":"plans/stopping_criteria/#Base.:&-Union{Tuple{T}, Tuple{S}, Tuple{S, T}} where {S<:StoppingCriterion, T<:StoppingCriterion}","page":"Stopping Criteria","title":"Base.:&","text":"&(s1,s2)\ns1 & s2\n\nCombine two StoppingCriterion within a 
StopWhenAll. If either s1 (or s2) is already a StopWhenAll, then s2 (or s1) is appended to the list of StoppingCriterion within s1 (or s2).\n\nExample\n\na = StopAfterIteration(200) & StopWhenChangeLess(M, 1e-6)\nb = a & StopWhenGradientNormLess(1e-6)\n\nThis is the same as\n\na = StopWhenAll(StopAfterIteration(200), StopWhenChangeLess(M, 1e-6))\nb = StopWhenAll(StopAfterIteration(200), StopWhenChangeLess(M, 1e-6), StopWhenGradientNormLess(1e-6))\n\n\n\n\n\n","category":"method"},{"location":"plans/stopping_criteria/#Base.:|-Union{Tuple{T}, Tuple{S}, Tuple{S, T}} where {S<:StoppingCriterion, T<:StoppingCriterion}","page":"Stopping Criteria","title":"Base.:|","text":"|(s1,s2)\ns1 | s2\n\nCombine two StoppingCriterion within a StopWhenAny. If either s1 (or s2) is already a StopWhenAny, then s2 (or s1) is appended to the list of StoppingCriterion within s1 (or s2).\n\nExample\n\na = StopAfterIteration(200) | StopWhenChangeLess(M, 1e-6)\nb = a | StopWhenGradientNormLess(1e-6)\n\nThis is the same as\n\na = StopWhenAny(StopAfterIteration(200), StopWhenChangeLess(M, 1e-6))\nb = StopWhenAny(StopAfterIteration(200), StopWhenChangeLess(M, 1e-6), StopWhenGradientNormLess(1e-6))\n\n\n\n\n\n","category":"method"},{"location":"plans/stopping_criteria/#Manopt.get_active_stopping_criteria-Tuple{sCS} where sCS<:StoppingCriterionSet","page":"Stopping Criteria","title":"Manopt.get_active_stopping_criteria","text":"get_active_stopping_criteria(c)\n\nreturns all active stopping criteria, if any, that are within a StoppingCriterion c and indicated a stop, that is, their reason is nonempty. To be precise, for a simple stopping criterion this returns either an empty array if no stop is indicated or the stopping criterion as the only element of an array. 
For a StoppingCriterionSet all internal (even nested) criteria that indicate to stop are returned.\n\n\n\n\n\n","category":"method"},{"location":"plans/stopping_criteria/#Manopt.get_reason-Tuple{AbstractManoptSolverState}","page":"Stopping Criteria","title":"Manopt.get_reason","text":"get_reason(s::AbstractManoptSolverState)\n\nreturn the current reason stored within the StoppingCriterion from within the AbstractManoptSolverState. This reason is empty (\"\") if the criterion has never been met.\n\n\n\n\n\n","category":"method"},{"location":"plans/stopping_criteria/#Manopt.get_stopping_criteria-Tuple{S} where S<:StoppingCriterionSet","page":"Stopping Criteria","title":"Manopt.get_stopping_criteria","text":"get_stopping_criteria(c)\n\nreturn the array of internally stored StoppingCriterions for a StoppingCriterionSet c.\n\n\n\n\n\n","category":"method"},{"location":"plans/stopping_criteria/#Manopt.indicates_convergence-Tuple{StoppingCriterion}","page":"Stopping Criteria","title":"Manopt.indicates_convergence","text":"indicates_convergence(c::StoppingCriterion)\n\nReturn whether (true) or not (false) a StoppingCriterion always means that, when it indicates to stop, the solver has converged to a minimizer or critical point.\n\nNote that this is independent of the actual state of the stopping criterion, that is whether it currently indicates to stop; it is a purely type-based, static decision.\n\nExamples\n\nWith s1=StopAfterIteration(20) and s2=StopWhenGradientNormLess(1e-7) the indicator yields\n\nindicates_convergence(s1) is false\nindicates_convergence(s2) is true\nindicates_convergence(s1 | s2) is false, since this might also stop after 20 iterations\nindicates_convergence(s1 & s2) is true, since s2 is fulfilled if this stops.\n\n\n\n\n\n","category":"method"},{"location":"plans/stopping_criteria/#Manopt.set_parameter!-Tuple{StopAfter, Val{:MaxTime}, Dates.Period}","page":"Stopping Criteria","title":"Manopt.set_parameter!","text":"set_parameter!(c::StopAfter, :MaxTime, 
v::Period)\n\nUpdate the time period after which an algorithm shall stop.\n\n\n\n\n\n","category":"method"},{"location":"plans/stopping_criteria/#Manopt.set_parameter!-Tuple{StopAfterIteration, Val{:MaxIteration}, Int64}","page":"Stopping Criteria","title":"Manopt.set_parameter!","text":"set_parameter!(c::StopAfterIteration, :MaxIteration, v::Int)\n\nUpdate the number of iterations after which the algorithm should stop.\n\n\n\n\n\n","category":"method"},{"location":"plans/stopping_criteria/#Manopt.set_parameter!-Tuple{StopWhenChangeLess, Val{:MinIterateChange}, Any}","page":"Stopping Criteria","title":"Manopt.set_parameter!","text":"set_parameter!(c::StopWhenChangeLess, :MinIterateChange, v::Int)\n\nUpdate the minimal change below which an algorithm shall stop.\n\n\n\n\n\n","category":"method"},{"location":"plans/stopping_criteria/#Manopt.set_parameter!-Tuple{StopWhenCostLess, Val{:MinCost}, Any}","page":"Stopping Criteria","title":"Manopt.set_parameter!","text":"set_parameter!(c::StopWhenCostLess, :MinCost, v)\n\nUpdate the minimal cost below which the algorithm shall stop.\n\n\n\n\n\n","category":"method"},{"location":"plans/stopping_criteria/#Manopt.set_parameter!-Tuple{StopWhenEntryChangeLess, Val{:Threshold}, Any}","page":"Stopping Criteria","title":"Manopt.set_parameter!","text":"set_parameter!(c::StopWhenEntryChangeLess, :Threshold, v)\n\nUpdate the threshold below which the algorithm shall stop.\n\n\n\n\n\n","category":"method"},{"location":"plans/stopping_criteria/#Manopt.set_parameter!-Tuple{StopWhenGradientChangeLess, Val{:MinGradientChange}, Any}","page":"Stopping Criteria","title":"Manopt.set_parameter!","text":"set_parameter!(c::StopWhenGradientChangeLess, :MinGradientChange, v)\n\nUpdate the minimal change below which an algorithm shall stop.\n\n\n\n\n\n","category":"method"},{"location":"plans/stopping_criteria/#Manopt.set_parameter!-Tuple{StopWhenGradientNormLess, Val{:MinGradNorm}, Float64}","page":"Stopping 
Criteria","title":"Manopt.set_parameter!","text":"set_parameter!(c::StopWhenGradientNormLess, :MinGradNorm, v::Float64)\n\nUpdate the minimal gradient norm below which an algorithm shall stop\n\n\n\n\n\n","category":"method"},{"location":"plans/stopping_criteria/#Manopt.set_parameter!-Tuple{StopWhenStepsizeLess, Val{:MinStepsize}, Any}","page":"Stopping Criteria","title":"Manopt.set_parameter!","text":"set_parameter!(c::StopWhenStepsizeLess, :MinStepsize, v)\n\nUpdate the minimal step size below which the algorithm shall stop\n\n\n\n\n\n","category":"method"},{"location":"plans/stopping_criteria/#Manopt.set_parameter!-Tuple{StopWhenSubgradientNormLess, Val{:MinSubgradNorm}, Float64}","page":"Stopping Criteria","title":"Manopt.set_parameter!","text":"set_parameter!(c::StopWhenSubgradientNormLess, :MinSubgradNorm, v::Float64)\n\nUpdate the minimal subgradient norm below which an algorithm shall stop\n\n\n\n\n\n","category":"method"},{"location":"tutorials/HowToRecord/#How-to-record-data-during-the-iterations","page":"Record values","title":"How to record data during the iterations","text":"","category":"section"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"Ronny Bergmann","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"The recording and debugging features make it possible to record nearly any data during the iterations. 
This tutorial illustrates how to:","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"record one value during the iterations;\nrecord multiple values during the iterations and access them afterwards;\nrecord within a subsolver;\ndefine your own RecordAction to perform individual recordings.","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"Several predefined recordings exist, for example RecordCost or RecordGradient, if the problem the solver uses provides a gradient. For fields of the state, the recording can also be done using RecordEntry. For other recordings, for example more advanced computations before storing a value, your own RecordAction can be defined.","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"We illustrate these using the gradient descent from the Get started: optimize! tutorial.","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"Here the focus is put on ways to investigate the behaviour during iterations by using recording techniques.","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"Let’s first load the necessary packages.","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"using Manopt, Manifolds, Random, ManifoldDiff, LinearAlgebra\nusing ManifoldDiff: grad_distance\nRandom.seed!(42);","category":"page"},{"location":"tutorials/HowToRecord/#The-objective","page":"Record values","title":"The objective","text":"","category":"section"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"We generate data and define our cost and gradient:","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record 
values","text":"Random.seed!(42)\nm = 30\nM = Sphere(m)\nn = 800\nσ = π / 8\nx = zeros(Float64, m + 1)\nx[2] = 1.0\ndata = [exp(M, x, σ * rand(M; vector_at=x)) for i in 1:n]\nf(M, p) = sum(1 / (2 * n) * distance.(Ref(M), Ref(p), data) .^ 2)\ngrad_f(M, p) = sum(1 / n * grad_distance.(Ref(M), data, Ref(p)))","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"grad_f (generic function with 1 method)","category":"page"},{"location":"tutorials/HowToRecord/#First-examples","page":"Record values","title":"First examples","text":"","category":"section"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"For the high level interfaces of the solvers, like gradient_descent, we have to set return_state to true to obtain the whole solver state and not only the resulting minimizer.","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"Then we can easily use the record= option to add recorded values. This keyword accepts RecordActions as well as several symbols as shortcuts, for example :Cost to record the cost, or, if your options have a field f, :f to record that entry. 
An overview of the symbols that can be used is given here.","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"We first just record the cost after every iteration","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"R = gradient_descent(M, f, grad_f, data[1]; record=:Cost, return_state=true)","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"# Solver state for `Manopt.jl`s Gradient Descent\nAfter 58 iterations\n\n## Parameters\n* retraction method: ExponentialRetraction()\n\n## Stepsize\nArmijoLinesearch(;\n initial_stepsize=1.0\n retraction_method=ExponentialRetraction()\n contraction_factor=0.95\n sufficient_decrease=0.1\n)\n\n## Stopping criterion\n\nStop When _one_ of the following are fulfilled:\n Max Iteration 200: not reached\n |grad f| < 1.0e-8: reached\nOverall: reached\nThis indicates convergence: Yes\n\n## Record\n(Iteration = RecordCost(),)","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"From the returned state, we see that the GradientDescentState is encapsulated (decorated) within a RecordSolverState.","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"For such a state, one can attach different recorders to some operations, currently to :Start, :Stop, and :Iteration, where :Iteration is the default when using the record= keyword with a RecordAction or a Symbol as we just did. 
We can access all values recorded during the iterations by calling get_record(R, :Iteration) or, since this is the default, even shorter","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"get_record(R)","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"58-element Vector{Float64}:\n 0.6870172325261714\n 0.6239221496686211\n 0.5900244338953802\n 0.569312079535616\n 0.551804825865545\n 0.5429045359832491\n 0.5383847696671529\n 0.5360322830268692\n 0.5348144739486789\n 0.5341773307679919\n 0.5338452512001082\n 0.5336712822308554\n 0.533580331120935\n ⋮\n 0.5334801024530476\n 0.5334801024530282\n 0.5334801024530178\n 0.5334801024530125\n 0.5334801024530096\n 0.5334801024530081\n 0.5334801024530073\n 0.5334801024530066\n 0.5334801024530061\n 0.5334801024530059\n 0.5334801024530059\n 0.5334801024530059","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"To record more than one value, you can pass an array of a mix of symbols and RecordActions, which formally introduces a RecordGroup. 
Such a group records a tuple of values in every iteration:","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"R2 = gradient_descent(M, f, grad_f, data[1]; record=[:Iteration, :Cost], return_state=true)","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"# Solver state for `Manopt.jl`s Gradient Descent\nAfter 58 iterations\n\n## Parameters\n* retraction method: ExponentialRetraction()\n\n## Stepsize\nArmijoLinesearch(;\n initial_stepsize=1.0\n retraction_method=ExponentialRetraction()\n contraction_factor=0.95\n sufficient_decrease=0.1\n)\n\n## Stopping criterion\n\nStop When _one_ of the following are fulfilled:\n Max Iteration 200: not reached\n |grad f| < 1.0e-8: reached\nOverall: reached\nThis indicates convergence: Yes\n\n## Record\n(Iteration = RecordGroup([RecordIteration(), RecordCost()]),)","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"Here, the symbol :Cost is mapped to the RecordCost action, and similarly :Iteration records the current iteration number i. To access these you can first extract the group of records (that is, where the :Iteration values are recorded; note the plural) and then access the :Cost","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"get_record_action(R2, :Iteration)","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"RecordGroup([RecordIteration(), RecordCost()])","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"Since iteration is the default, we can also omit it here again. 
To access single recorded values, one can use","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"get_record_action(R2)[:Cost]","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"58-element Vector{Float64}:\n 0.6870172325261714\n 0.6239221496686211\n 0.5900244338953802\n 0.569312079535616\n 0.551804825865545\n 0.5429045359832491\n 0.5383847696671529\n 0.5360322830268692\n 0.5348144739486789\n 0.5341773307679919\n 0.5338452512001082\n 0.5336712822308554\n 0.533580331120935\n ⋮\n 0.5334801024530476\n 0.5334801024530282\n 0.5334801024530178\n 0.5334801024530125\n 0.5334801024530096\n 0.5334801024530081\n 0.5334801024530073\n 0.5334801024530066\n 0.5334801024530061\n 0.5334801024530059\n 0.5334801024530059\n 0.5334801024530059","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"This can also be done by using the high-level interface get_record","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"get_record(R2, :Iteration, :Cost)","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"58-element Vector{Float64}:\n 0.6870172325261714\n 0.6239221496686211\n 0.5900244338953802\n 0.569312079535616\n 0.551804825865545\n 0.5429045359832491\n 0.5383847696671529\n 0.5360322830268692\n 0.5348144739486789\n 0.5341773307679919\n 0.5338452512001082\n 0.5336712822308554\n 0.533580331120935\n ⋮\n 0.5334801024530476\n 0.5334801024530282\n 0.5334801024530178\n 0.5334801024530125\n 0.5334801024530096\n 0.5334801024530081\n 0.5334801024530073\n 0.5334801024530066\n 0.5334801024530061\n 0.5334801024530059\n 0.5334801024530059\n 0.5334801024530059","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"Note that the first symbol again refers 
to the point where we record (not to the thing we record). We can also pass a tuple as second argument to have our own order within the tuples returned. Switching the order of the recorded cost and iteration can be done using","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"get_record(R2, :Iteration, (:Iteration, :Cost))","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"58-element Vector{Tuple{Int64, Float64}}:\n (1, 0.6870172325261714)\n (2, 0.6239221496686211)\n (3, 0.5900244338953802)\n (4, 0.569312079535616)\n (5, 0.551804825865545)\n (6, 0.5429045359832491)\n (7, 0.5383847696671529)\n (8, 0.5360322830268692)\n (9, 0.5348144739486789)\n (10, 0.5341773307679919)\n (11, 0.5338452512001082)\n (12, 0.5336712822308554)\n (13, 0.533580331120935)\n ⋮\n (47, 0.5334801024530476)\n (48, 0.5334801024530282)\n (49, 0.5334801024530178)\n (50, 0.5334801024530125)\n (51, 0.5334801024530096)\n (52, 0.5334801024530081)\n (53, 0.5334801024530073)\n (54, 0.5334801024530066)\n (55, 0.5334801024530061)\n (56, 0.5334801024530059)\n (57, 0.5334801024530059)\n (58, 0.5334801024530059)","category":"page"},{"location":"tutorials/HowToRecord/#A-more-complex-example","page":"Record values","title":"A more complex example","text":"","category":"section"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"To illustrate a more complex example, let’s record:","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"the iteration number, cost and gradient field, but only every sixth iteration;\nthe iteration at which we stop.","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"We first generate the problem and the state, to also illustrate how the low level works when not using the high-level interface 
gradient_descent.","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"p = DefaultManoptProblem(M, ManifoldGradientObjective(f, grad_f))\ns = GradientDescentState(\n M;\n p=copy(data[1]),\n stopping_criterion=StopAfterIteration(200) | StopWhenGradientNormLess(10.0^-9),\n)","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"# Solver state for `Manopt.jl`s Gradient Descent\n\n## Parameters\n* retraction method: ExponentialRetraction()\n\n## Stepsize\nArmijoLinesearch(;\n initial_stepsize=1.0\n retraction_method=ExponentialRetraction()\n contraction_factor=0.95\n sufficient_decrease=0.1\n)\n\n## Stopping criterion\n\nStop When _one_ of the following are fulfilled:\n Max Iteration 200: not reached\n |grad f| < 1.0e-9: not reached\nOverall: not reached\nThis indicates convergence: No","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"We now first build a RecordGroup to group the three entries we want to record per iteration. We then put this into a RecordEvery to only record this every sixth iteration","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"rI = RecordEvery(\n RecordGroup([\n RecordIteration() => :Iteration,\n RecordCost() => :Cost,\n RecordEntry(similar(data[1]), :X) => :Gradient,\n ]),\n 6,\n)","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"RecordEvery(RecordGroup([RecordIteration(), RecordCost(), RecordEntry(:X)]), 6, true)","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"where the notation as a pair with the symbol can be read as “Is accessible by”. The record= keyword with the symbol :Iteration is actually the same as we specified here for the first group entry. 
For recording the final iteration number","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"sI = RecordIteration()","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"RecordIteration()","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"We now combine both into the RecordSolverState decorator. It acts completely the same as any AbstractManoptSolverState but additionally records something in every iteration. This is stored in a dictionary of RecordActions, where :Iteration maps to the action (here the group that records only every sixth iteration) and :Stop maps to sI, which is executed when the solver stops.","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"Note that the keyword record= in the high level interface gradient_descent would only fill the :Iteration entry of said dictionary, but we could also pass pairs of the form Symbol => RecordAction into that keyword to obtain the same as in","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"r = RecordSolverState(s, Dict(:Iteration => rI, :Stop => sI))","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"# Solver state for `Manopt.jl`s Gradient Descent\n\n## Parameters\n* retraction method: ExponentialRetraction()\n\n## Stepsize\nArmijoLinesearch(;\n initial_stepsize=1.0\n retraction_method=ExponentialRetraction()\n contraction_factor=0.95\n sufficient_decrease=0.1\n)\n\n## Stopping criterion\n\nStop When _one_ of the following are fulfilled:\n Max Iteration 200: not reached\n |grad f| < 1.0e-9: not reached\nOverall: not reached\nThis indicates convergence: No\n\n## Record\n(Iteration = RecordEvery(RecordGroup([RecordIteration(), RecordCost(), RecordEntry(:X)]), 6, 
true), Stop = RecordIteration())","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"We now call the solver","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"res = solve!(p, r)","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"# Solver state for `Manopt.jl`s Gradient Descent\nAfter 63 iterations\n\n## Parameters\n* retraction method: ExponentialRetraction()\n\n## Stepsize\nArmijoLinesearch(;\n initial_stepsize=1.0\n retraction_method=ExponentialRetraction()\n contraction_factor=0.95\n sufficient_decrease=0.1\n)\n\n## Stopping criterion\n\nStop When _one_ of the following are fulfilled:\n Max Iteration 200: not reached\n |grad f| < 1.0e-9: reached\nOverall: reached\nThis indicates convergence: Yes\n\n## Record\n(Iteration = RecordEvery(RecordGroup([RecordIteration(), RecordCost(), RecordEntry(:X)]), 6, true), Stop = RecordIteration())","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"And we can look at the recorded value at :Stop to see how many iterations were performed","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"get_record(res, :Stop)","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"1-element Vector{Int64}:\n 63","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"and the other values during the iterations are","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"get_record(res, :Iteration, (:Iteration, :Cost))","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"10-element Vector{Tuple{Int64, Float64}}:\n (6, 
0.5429045359832491)\n (12, 0.5336712822308554)\n (18, 0.5334840986243338)\n (24, 0.5334801877032023)\n (30, 0.5334801043129838)\n (36, 0.5334801024945817)\n (42, 0.5334801024539585)\n (48, 0.5334801024530282)\n (54, 0.5334801024530066)\n (60, 0.5334801024530057)","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"where the last argument tuple uses the names from the pairs we specified when generating the record group. So similarly we can use :Gradient as specified before to access the recorded gradient.","category":"page"},{"location":"tutorials/HowToRecord/#Recording-from-a-Subsolver","page":"Record values","title":"Recording from a Subsolver","text":"","category":"section"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"One can also record from a subsolver. For that we need a problem that actually requires a subsolver. We take the constraint example from the How to print debug tutorial. You may want to read that tutorial for more details on the problem","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"d = 4\nM2 = Sphere(d - 1)\nv0 = project(M2, [ones(2)..., zeros(d - 2)...])\nZ = v0 * v0'\n#Cost and gradient\nf2(M, p) = -tr(transpose(p) * Z * p) / 2\ngrad_f2(M, p) = project(M, p, -transpose.(Z) * p / 2 - Z * p / 2)\n# Constraints\ng(M, p) = -p # now p ≥ 0\nmI = -Matrix{Float64}(I, d, d)\n# Vector of gradients of the constraint components\ngrad_g(M, p) = [project(M, p, mI[:, i]) for i in 1:d]\np0 = project(M2, [ones(2)..., zeros(d - 3)..., 0.1])","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"We directly start with recording the subsolver's iterations. We can specify what to record in the subsolver using the sub_kwargs keyword argument with a Symbol => value pair. 
Here we specify to record the iteration and the cost in every subsolver step.","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"Furthermore, we have to “collect” this recording after every subsolver run. This is done with the :Subsolver keyword in the main record= keyword.","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"s1 = exact_penalty_method(\n    M2,\n    f2,\n    grad_f2,\n    p0;\n    g = g,\n    grad_g = grad_g,\n    record = [:Iteration, :Cost, :Subsolver],\n    sub_kwargs = [:record => [:Iteration, :Cost]],\n    return_state=true,\n);","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"Then the first entry of the record contains the iteration number, the (main solver's) cost, and, as third entry, the recording of the subsolver.","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"get_record(s1)[1]","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"(1, -0.4733019623455375, [(1, -0.4288382393589549), (2, -0.43669534259556914), (3, -0.4374036673499917), (4, -0.43744087180862923)])","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"When adding a number so as not to record on every iteration, the :Subsolver keyword of course still only “copies over” the subsolver recordings when active. But one can avoid allocations on the other runs. 
This is done by adding :WhenActive to the subsolver's record","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"s2 = exact_penalty_method(\n    M2,\n    f2,\n    grad_f2,\n    p0;\n    g = g,\n    grad_g = grad_g,\n    record = [:Iteration, :Cost, :Subsolver, 25],\n    sub_kwargs = [:record => [:Iteration, :Cost, :WhenActive]],\n    return_state=true,\n);","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"Then","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"get_record(s2)","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"4-element Vector{Tuple{Int64, Float64, Vector{Tuple{Int64, Float64}}}}:\n (25, -0.4994494108530985, [(1, -0.4991469152295235)])\n (50, -0.49999564261147317, [(1, -0.49999366842932896)])\n (75, -0.49999997420136083, [(1, -0.4999999614701454)])\n (100, -0.4999999998337046, [(1, -0.49999999981081666)])","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"Finally, instead of recording iterations, we can also specify to record the stopping criterion and final cost by adding that to :Stop of the subsolver's record. 
Then we can specify, as usual in a tuple, that the :Subsolver should record :Stop (by default it takes over :Iteration)","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"s3 = exact_penalty_method(\n M2,\n f2,\n grad_f2,\n p0;\n g = g,\n grad_g = grad_g,\n record = [:Iteration, :Cost, (:Subsolver, :Stop), 25],\n sub_kwargs = [:record => [:Stop => [:Stop, :Cost]]],\n return_state=true,\n);","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"Then the following displays also the reasons why each of the recorded subsolvers stopped and the corresponding cost","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"get_record(s3)","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"4-element Vector{Tuple{Int64, Float64, Vector{Tuple{String, Float64}}}}:\n (25, -0.4994494108530985, [(\"The algorithm reached approximately critical point after 1 iterations; the gradient norm (0.00031307624887101047) is less than 0.001.\\n\", -0.4991469152295235)])\n (50, -0.49999564261147317, [(\"The algorithm reached approximately critical point after 1 iterations; the gradient norm (0.0009767910400237622) is less than 0.001.\\n\", -0.49999366842932896)])\n (75, -0.49999997420136083, [(\"The algorithm reached approximately critical point after 1 iterations; the gradient norm (0.0002239629119661262) is less than 0.001.\\n\", -0.4999999614701454)])\n (100, -0.4999999998337046, [(\"The algorithm reached approximately critical point after 1 iterations; the gradient norm (3.8129640908105967e-6) is less than 0.001.\\n\", -0.49999999981081666)])","category":"page"},{"location":"tutorials/HowToRecord/#Writing-an-own-[RecordAction](https://manoptjl.org/stable/plans/record/#Manopt.RecordAction)s","page":"Record values","title":"Writing an own 
RecordActions","text":"","category":"section"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"Let’s investigate an example where we want to count the number of cost function evaluations, again just to illustrate, since for the gradient there is just one evaluation per iteration. We first define a cost that counts its own calls.","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"mutable struct MyCost{T}\n    data::T\n    count::Int\nend\nMyCost(data::T) where {T} = MyCost{T}(data, 0)\nfunction (c::MyCost)(M, x)\n    c.count += 1\n    return sum(1 / (2 * length(c.data)) * distance.(Ref(M), Ref(x), c.data) .^ 2)\nend","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"and we define our own, new RecordAction, which is a functor, that is, a struct that can also be called as a function. The function we have to implement is similar in signature to a single solver step, since it might get called every iteration:","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"mutable struct RecordCount <: RecordAction\n    recorded_values::Vector{Int}\n    RecordCount() = new(Vector{Int}())\nend\nfunction (r::RecordCount)(p::AbstractManoptProblem, ::AbstractManoptSolverState, i)\n    if i > 0\n        push!(r.recorded_values, Manopt.get_cost_function(get_objective(p)).count)\n    elseif i < 0 # reset if negative\n        r.recorded_values = Vector{Int}()\n    end\nend","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"Now we can initialize the new cost and call the gradient descent. 
Note that this also illustrates the last use case, since you can pass symbol-action pairs into the record= array.","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"f3 = MyCost(data)","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"Now for the plain gradient descent, we have to modify the stepsize (to a constant one) and remove the default debug check whether the cost increases (setting debug to []). We also only look at the first 20 iterations to keep this example small in recorded values. We call","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"R3 = gradient_descent(\n    M,\n    f3,\n    grad_f,\n    data[1];\n    record=[:Iteration => [\n        :Iteration,\n        RecordCount() => :Count,\n        :Cost],\n    ],\n    stepsize = ConstantLength(1.0),\n    stopping_criterion=StopAfterIteration(20),\n    debug=[],\n    return_state=true,\n)","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"# Solver state for `Manopt.jl`s Gradient Descent\nAfter 20 iterations\n\n## Parameters\n* retraction method: ExponentialRetraction()\n\n## Stepsize\nConstantLength(1.0; type=:relative)\n\n## Stopping criterion\n\nMax Iteration 20: reached\nThis indicates convergence: No\n\n## Record\n(Iteration = RecordGroup([RecordIteration(), RecordCount([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]), RecordCost()]),)","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"For :Cost we already learned how to access the recorded values; the => :Count pair introduces an action whose values are accessible via the :Count symbol. 
We can again access the whole sets of records","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"get_record(R3)","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"20-element Vector{Tuple{Int64, Int64, Float64}}:\n (1, 1, 0.5823814423113639)\n (2, 2, 0.540804980234004)\n (3, 3, 0.5345550944722898)\n (4, 4, 0.5336375289938887)\n (5, 5, 0.5335031591890169)\n (6, 6, 0.5334834802310252)\n (7, 7, 0.5334805973984544)\n (8, 8, 0.5334801749902928)\n (9, 9, 0.5334801130855078)\n (10, 10, 0.5334801040117543)\n (11, 11, 0.5334801026815558)\n (12, 12, 0.5334801024865219)\n (13, 13, 0.5334801024579218)\n (14, 14, 0.5334801024537273)\n (15, 15, 0.5334801024531121)\n (16, 16, 0.5334801024530218)\n (17, 17, 0.5334801024530087)\n (18, 18, 0.5334801024530067)\n (19, 19, 0.5334801024530065)\n (20, 20, 0.5334801024530064)","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"this is equivalent to calling R3[:Iteration]. 
Note that since we introduced :Count we can also access a single recorded value using","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"R3[:Iteration, :Count]","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"20-element Vector{Int64}:\n  1\n  2\n  3\n  4\n  5\n  6\n  7\n  8\n  9\n 10\n 11\n 12\n 13\n 14\n 15\n 16\n 17\n 18\n 19\n 20","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"and we see that the cost function is called once per iteration.","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"If we use this counting cost and run the default gradient descent with Armijo line search, we can infer how many Armijo line search backtracks are performed:","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"f4 = MyCost(data)","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"MyCost{Vector{Vector{Float64}}}([[-0.054658825167894595, -0.5592077846510423, -0.04738273828111257, -0.04682080720921302, 0.12279468849667038, 0.07171438895366239, -0.12930045409417057, -0.22102081626380404, -0.31805333254577767, 0.0065859500152017645  …  -0.21999168261518043, 0.19570142227077295, 0.340909965798364, -0.0310802190082894, -0.04674431076254687, -0.006088297671169996, 0.01576037011323387, -0.14523596850249543, 0.14526158060820338, 0.1972125856685378], [-0.08192376929745249, -0.5097715132187676, -0.008339904915541005, 0.07289741328038676, 0.11422036270613797, -0.11546739299835748, 0.2296996932628472, 0.1490467170835958, -0.11124820565850364, -0.11790721606521781  …  -0.16421249630470344, -0.2450575844467715, -0.07570080850379841, -0.07426218324072491, -0.026520181327346338, 0.11555341205250205, -0.0292955762365121, -0.09012096853677576, 
-0.23470556634911574, -0.026214242996704013], [-0.22951484264859257, -0.6083825348640186, 0.14273766477054015, -0.11947823367023377, 0.05984293499234536, 0.058820835498203126, 0.07577331705863266, 0.1632847202946857, 0.20244385489915745, 0.04389826920203656 … 0.3222365119325929, 0.009728730325524067, -0.12094785371632395, -0.36322323926212824, -0.0689253407939657, 0.23356953371702974, 0.23489531397909744, 0.078303336494718, -0.14272984135578806, 0.07844539956202407], [-0.0012588500237817606, -0.29958740415089763, 0.036738459489123514, 0.20567651907595125, -0.1131046432541904, -0.06032435985370224, 0.3366633723165895, -0.1694687746143405, -0.001987171245125281, 0.04933779858684409 … -0.2399584473006256, 0.19889267065775063, 0.22468755918787048, 0.1780090580180643, 0.023703860700539356, -0.10212737517121755, 0.03807004103115319, -0.20569120952458983, -0.03257704254233959, 0.06925473452536687], [-0.035534309946938375, -0.06645560787329002, 0.14823972268208874, -0.23913346587232426, 0.038347027875883496, 0.10453333143286662, 0.050933995140290705, -0.12319549375687473, 0.12956684644537844, -0.23540367869989412 … -0.41471772859912864, -0.1418984610380257, 0.0038321446836859334, 0.23655566917750157, -0.17500681300994742, -0.039189751036839374, -0.08687860620942896, -0.11509948162959047, 0.11378233994840942, 0.38739450723013735], [-0.3122539912469438, -0.3101935557860296, 0.1733113629107006, 0.08968593616209351, -0.1836344261367962, -0.06480023695256802, 0.18165070013886545, 0.19618275767992124, -0.07956460275570058, 0.0325997354656551 … 0.2845492418767769, 0.17406455870721682, -0.053101230371568706, -0.1382082812981627, 0.005830071475508364, 0.16739264037923055, 0.034365814374995335, 0.09107702398753297, -0.1877250428700409, 0.05116494897806923], [-0.04159442361185588, -0.7768029783272633, 0.06303616666722486, 0.08070518925253539, -0.07396265237309446, -0.06008109299719321, 0.07977141629715745, 0.019511027129056415, 0.08629917589924847, -0.11156298867318722 … 
0.0792587504128044, -0.016444383900170008, -0.181746064577005, -0.01888129512990984, -0.13523922089388968, 0.11358102175659832, 0.07929049608459493, 0.1689565359083833, 0.07673657951723721, -0.1128480905648813], [-0.21221814304651335, -0.5031823821503253, 0.010326342133992458, -0.12438192100961257, 0.04004758695231872, 0.2280527500843805, -0.2096243232022162, -0.16564828762420294, -0.28325749481138984, 0.17033534605245823 … -0.13599096505924074, 0.28437770540525625, 0.08424426798544583, -0.1266207606984139, 0.04917635557603396, -0.00012608938533809706, -0.04283220254770056, -0.08771365647566572, 0.14750169103093985, 0.11601120086036351], [0.10683290707435536, -0.17680836277740156, 0.23767458301899405, 0.12011180867097299, -0.029404774462600154, 0.11522028383799933, -0.3318174480974519, -0.17859266746938374, 0.04352373642537759, 0.2530382802667988 … 0.08879861736692073, -0.004412506987801729, 0.19786810509925895, -0.1397104682727044, 0.09482328498485094, 0.05108149065160893, -0.14578343506951633, 0.3167479772660438, 0.10422673169182732, 0.21573150015891313], [-0.024895624707466164, -0.7473912016432697, -0.1392537238944721, -0.14948896791465557, -0.09765393283580377, 0.04413059403279867, -0.13865379004720355, -0.071032040283992, 0.15604054722246585, -0.10744260463413555 … -0.14748067081342833, -0.14743635071251024, 0.0643591937981352, 0.16138827697852615, -0.12656652133603935, -0.06463635704869083, 0.14329582429103488, -0.01113113793821713, 0.29295387893749997, 0.06774523575259782] … [0.011874845316569967, -0.6910596618389588, 0.21275741439477827, -0.014042545524367437, -0.07883613103495014, -0.0021900966696246776, -0.033836430464220496, 0.2925813113264835, -0.04718187201980008, 0.03949680289730036 … 0.0867736586603294, 0.0404682510051544, -0.24779813848587257, -0.28631514602877145, -0.07211767532456789, -0.15072898498180473, 0.017855923621826746, -0.09795357710255254, -0.14755229203084924, 0.1305005778855436], [0.013457629515450426, -0.3750353654626534, 
0.12349883726772073, 0.3521803555005319, 0.2475921439420274, 0.006088649842999206, 0.31203183112392907, -0.036869203979483754, -0.07475746464056504, -0.029297797064479717 … 0.16867368684091563, -0.09450564983271922, -0.0587273302122711, -0.1326667940553803, -0.25530237980444614, 0.37556905374043376, 0.04922612067677609, 0.2605362549983866, -0.21871556587505667, -0.22915883767386164], [0.03295085436260177, -0.971861604433394, 0.034748713521512035, -0.0494065013245799, -0.01767479281403355, 0.0465459739459587, 0.007470494722096038, 0.003227960072276129, 0.0058328596338402365, -0.037591237446692356 … 0.03205152122876297, 0.11331109854742015, 0.03044900529526686, 0.017971704993311105, -0.009329252062960229, -0.02939354719650879, 0.022088835776251863, -0.02546111553658854, -0.0026257225461427582, 0.005702111697172774], [0.06968243992532257, -0.7119502191435176, -0.18136614593117445, -0.1695926215673451, 0.01725015359973796, -0.00694164951158388, -0.34621134287344574, 0.024709256792651912, -0.1632255805999673, -0.2158226433583082 … -0.14153772108081458, -0.11256850346909901, 0.045109821764180706, -0.1162754336222613, -0.13221711766357983, 0.005365354776191061, 0.012750671705879105, -0.018208207549835407, 0.12458753932455452, -0.31843587960340897], [-0.19830349374441875, -0.6086693423968884, 0.08552341811170468, 0.35781519334042255, 0.15790663648524367, 0.02712571268324985, 0.09855601327331667, -0.05840653973421127, -0.09546429767790429, -0.13414717696055448 … -0.0430935804718714, 0.2678584478951765, 0.08780994289014614, 0.01613469379498457, 0.0516187906322884, -0.07383067566731401, -0.1481272738354552, -0.010532317187265649, 0.06555344745952187, -0.1506167863762911], [-0.04347524125197773, -0.6327981074196994, -0.221116680035191, 0.0282207467940456, -0.0855024881522933, 0.12821801740178346, 0.1779499563280024, -0.10247384887512365, 0.0396432464100116, -0.0582580338112627 … 0.1253893207083573, 0.09628202269764763, 0.3165295473947355, -0.14915034201394833, 
-0.1376727867817772, -0.004153096613530293, 0.09277957650773738, 0.05917264554031624, -0.12230262590034507, -0.19655728521529914], [-0.10173946348675116, -0.6475660153977272, 0.1260284619729566, -0.11933160462857616, -0.04774310633937567, 0.09093928358804217, 0.041662676324043114, -0.1264739543938265, 0.09605293126911392, -0.16790474428001648 … -0.04056684573478108, 0.09351665120940456, 0.15259195558799882, 0.0009949298312580497, 0.09461980828206303, 0.3067004514287283, 0.16129258773733715, -0.18893664085007542, -0.1806865244492513, 0.029319680436405825], [-0.251780954320053, -0.39147463259941456, -0.24359579328578626, 0.30179309757665723, 0.21658893985206484, 0.12304585275893232, 0.28281133086451704, 0.029187615341955325, 0.03616243507191924, 0.029375588909979152 … -0.08071746662465404, -0.2176101928258658, 0.20944684921170825, 0.043033273425352715, -0.040505542460853576, 0.17935596149079197, -0.08454569418519972, 0.0545941597033932, 0.12471741052450099, -0.24314124407858329], [0.28156471341150974, -0.6708572780452595, -0.1410302363738465, -0.08322589397277698, -0.022772599832907418, -0.04447265789199677, -0.016448068022011157, -0.07490911512503738, 0.2778432295769144, -0.10191899088372378 … -0.057272155080983836, 0.12817478092201395, 0.04623814480781884, -0.12184190164369117, 0.1987855635987229, -0.14533603246124993, -0.16334072868597016, -0.052369977381939437, 0.014904286931394959, -0.2440882678882144], [0.12108727495744157, -0.714787344982596, 0.01632521838262752, 0.04437570556908449, -0.041199280304144284, 0.052984488452616, 0.03796520200156107, 0.2791785910964288, 0.11530429924056099, 0.12178223160398421 … -0.07621847481721669, 0.18353870423743013, -0.19066653731436745, -0.09423224997242206, 0.14596847781388494, -0.09747986927777111, 0.16041150122587072, -0.02296513951256738, 0.06786878373578588, 0.15296635978447756]], 0)","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"To not get too many entries 
let’s just look at the first 20 iterations again","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"R4 = gradient_descent(\n M,\n f4,\n grad_f,\n data[1];\n record=[RecordCount(),],\n return_state=true,\n)","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"# Solver state for `Manopt.jl`s Gradient Descent\nAfter 58 iterations\n\n## Parameters\n* retraction method: ExponentialRetraction()\n\n## Stepsize\nArmijoLinesearch(;\n initial_stepsize=1.0\n retraction_method=ExponentialRetraction()\n contraction_factor=0.95\n sufficient_decrease=0.1\n)\n\n## Stopping criterion\n\nStop When _one_ of the following are fulfilled:\n Max Iteration 200: not reached\n |grad f| < 1.0e-8: reached\nOverall: reached\nThis indicates convergence: Yes\n\n## Record\n(Iteration = RecordCount([25, 29, 33, 37, 40, 44, 48, 52, 56, 60, 64, 68, 72, 76, 80, 84, 88, 92, 96, 100, 104, 108, 112, 116, 120, 124, 128, 132, 136, 140, 144, 148, 152, 156, 160, 164, 168, 172, 176, 180, 184, 188, 192, 196, 200, 204, 208, 212, 216, 220, 224, 229, 232, 237, 241, 245, 247, 249]),)","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"get_record(R4)","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"58-element Vector{Int64}:\n 25\n 29\n 33\n 37\n 40\n 44\n 48\n 52\n 56\n 60\n 64\n 68\n 72\n ⋮\n 208\n 212\n 216\n 220\n 224\n 229\n 232\n 237\n 241\n 245\n 247\n 249","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"We can see that the number of cost function calls varies, depending on how many line search backtrack steps were required to obtain a good stepsize.","category":"page"},{"location":"tutorials/HowToRecord/#Technical-details","page":"Record values","title":"Technical 
details","text":"","category":"section"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"This tutorial is cached. It was last run on the following package versions.","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"using Pkg\nPkg.status()","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"Status `~/work/Manopt.jl/Manopt.jl/tutorials/Project.toml`\n [6e4b80f9] BenchmarkTools v1.5.0\n⌅ [5ae59095] Colors v0.12.11\n [31c24e10] Distributions v0.25.113\n [26cc04aa] FiniteDifferences v0.12.32\n [7073ff75] IJulia v1.26.0\n [8ac3fa9e] LRUCache v1.6.1\n [af67fdf4] ManifoldDiff v0.3.13\n [1cead3c2] Manifolds v0.10.7\n [3362f125] ManifoldsBase v0.15.22\n [0fc0a36d] Manopt v0.5.3 `~/work/Manopt.jl/Manopt.jl`\n [91a5bcdd] Plots v1.40.9\n [731186ca] RecursiveArrayTools v3.27.4\nInfo Packages marked with ⌅ have new versions available but compatibility constraints restrict them from upgrading. 
To see why use `status --outdated`","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"using Dates\nnow()","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"2024-11-21T20:37:59.901","category":"page"},{"location":"solvers/ChambollePock/#The-Riemannian-Chambolle-Pock-algorithm","page":"Chambolle-Pock","title":"The Riemannian Chambolle-Pock algorithm","text":"","category":"section"},{"location":"solvers/ChambollePock/","page":"Chambolle-Pock","title":"Chambolle-Pock","text":"The Riemannian Chambolle—Pock algorithm is a generalization of the Chambolle—Pock algorithm by Chambolle and Pock [CP11]. It is also known as the primal-dual hybrid gradient (PDHG) or the primal-dual proximal splitting (PDPS) algorithm.","category":"page"},{"location":"solvers/ChambollePock/","page":"Chambolle-Pock","title":"Chambolle-Pock","text":"In order to minimize a cost function consisting of","category":"page"},{"location":"solvers/ChambollePock/","page":"Chambolle-Pock","title":"Chambolle-Pock","text":"F(p) + G(Λ(p))","category":"page"},{"location":"solvers/ChambollePock/","page":"Chambolle-Pock","title":"Chambolle-Pock","text":"over pmathcal M","category":"page"},{"location":"solvers/ChambollePock/","page":"Chambolle-Pock","title":"Chambolle-Pock","text":"where Fmathcal M overlineℝ, Gmathcal N overlineℝ, and Λmathcal M mathcal N. If the manifolds mathcal M or mathcal N are not Hadamard, the problem has to be considered only locally, that is, on geodesically convex sets mathcal C subset mathcal M and mathcal D subsetmathcal N such that Λ(mathcal C) subset mathcal D.","category":"page"},{"location":"solvers/ChambollePock/","page":"Chambolle-Pock","title":"Chambolle-Pock","text":"The algorithm is available in four variants: exact versus linearized (see variant) as well as with primal versus dual relaxation (see relax). 
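As a rough usage sketch (with hypothetical proximal maps prox_F and prox_G_dual, a hypothetical adjoint adjoint_DΛ, and a hypothetical linearization DΛ, following the documented signature, so a sketch rather than a definitive call), a linearized, primal relaxed run could read q = ChambollePock(M, N, cost, p0, X0, m, n, prox_F, prox_G_dual, adjoint_DΛ; linearized_forward_operator=DΛ, relax=:primal, variant=:linearized). 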
For more details, see Bergmann, Herzog, Silva Louzeiro, Tenbrinck and Vidal-Núñez [BHS+21]. The following description covers the case of the exact, primal relaxed Riemannian Chambolle—Pock algorithm.","category":"page"},{"location":"solvers/ChambollePock/","page":"Chambolle-Pock","title":"Chambolle-Pock","text":"Given base points mmathcal C, n=Λ(m)mathcal D, initial primal and dual values p^(0) mathcal C, ξ_n^(0) T_n^*mathcal N, and primal and dual step sizes sigma_0, tau_0, relaxation theta_0, as well as acceleration gamma.","category":"page"},{"location":"solvers/ChambollePock/","page":"Chambolle-Pock","title":"Chambolle-Pock","text":"As an initialization, perform bar p^(0) gets p^(0).","category":"page"},{"location":"solvers/ChambollePock/","page":"Chambolle-Pock","title":"Chambolle-Pock","text":"The algorithm performs the following steps for k=1,2,… until a StoppingCriterion is fulfilled:","category":"page"},{"location":"solvers/ChambollePock/","page":"Chambolle-Pock","title":"Chambolle-Pock","text":"ξ^(k+1)_n = operatornameprox_tau_k G_n^*Bigl(ξ_n^(k) + tau_k bigl(log_n Λ (bar p^(k))bigr)^flatBigr)\np^(k+1) = operatornameprox_sigma_k Fbiggl(exp_p^(k)Bigl( operatornamePT_p^(k)gets mbigl(-sigma_k DΛ(m)^*ξ_n^(k+1)bigr)^sharpBigr)biggr)\nUpdate\ntheta_k = (1+2gammasigma_k)^-frac12\nsigma_k+1 = sigma_ktheta_k\ntau_k+1 = fractau_ktheta_k\nbar p^(k+1) = exp_p^(k+1)bigl(-theta_k log_p^(k+1) p^(k)bigr)","category":"page"},{"location":"solvers/ChambollePock/","page":"Chambolle-Pock","title":"Chambolle-Pock","text":"Furthermore, you can exchange the exponential map, the logarithmic map, and the parallel transport by a retraction, an inverse retraction, and a vector transport.","category":"page"},{"location":"solvers/ChambollePock/","page":"Chambolle-Pock","title":"Chambolle-Pock","text":"Finally, you can also update the base points m and n during the iterations. This introduces a few additional vector transports. The same holds for the case Λ(m^(k))neq n^(k) at some point. 
All these cases are covered in the algorithm.","category":"page"},{"location":"solvers/ChambollePock/","page":"Chambolle-Pock","title":"Chambolle-Pock","text":"CurrentModule = Manopt","category":"page"},{"location":"solvers/ChambollePock/","page":"Chambolle-Pock","title":"Chambolle-Pock","text":"ChambollePock\nChambollePock!","category":"page"},{"location":"solvers/ChambollePock/#Manopt.ChambollePock","page":"Chambolle-Pock","title":"Manopt.ChambollePock","text":"ChambollePock(M, N, f, p, X, m, n, prox_G, prox_G_dual, adjoint_linear_operator; kwargs...)\nChambollePock!(M, N, f, p, X, m, n, prox_G, prox_G_dual, adjoint_linear_operator; kwargs...)\n\nPerform the Riemannian Chambolle—Pock algorithm.\n\nGiven a cost function mathcal Emathcal M ℝ of the form\n\nmathcal E(p) = F(p) + G( Λ(p) )\n\nwhere Fmathcal M ℝ, Gmathcal N ℝ, and Λmathcal M mathcal N.\n\nThis can be done in-place of p.\n\nInput parameters\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nN::AbstractManifold: a Riemannian manifold mathcal N\np: a point on the manifold mathcal M\nX: a tangent vector at the point p on the manifold mathcal M\nm: a point on the manifold mathcal M\nn: a point on the manifold mathcal N\nadjoint_linearized_operator: the adjoint DΛ^* of the linearized operator DΛ T_mmathcal M T_Λ(m)mathcal N\nprox_F, prox_G_dual: the proximal maps of F and G^ast_n\n\nNote that, depending on the AbstractEvaluationType evaluation, the last three parameters as well as the forward operator Λ and the linearized_forward_operator can be given as allocating functions (Manifold, parameters) -> result or as mutating functions (Manifold, result, parameters) -> result to spare allocations.\n\nBy default, this performs the exact Riemannian Chambolle-Pock algorithm; see the optional parameter DΛ for the linearized variant.\n\nFor more details on the algorithm, see [BHS+21].\n\nKeyword Arguments\n\nacceleration=0.05: acceleration parameter\ndual_stepsize=1/sqrt(8): proximal parameter of the dual prox\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\ninverse_retraction_method_dual=default_inverse_retraction_method(N, typeof(n)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nΛ=missing: the (forward) operator Λ() (required for the :exact variant)\nlinearized_forward_operator=missing: its linearization DΛ() (required for the :linearized variant)\nprimal_stepsize=1/sqrt(8): proximal parameter of the primal prox\nrelaxation=1.: the relaxation parameter γ\nrelax=:primal: whether to relax the primal or the dual variable\nvariant=:exact if Λ is missing, otherwise :linearized: variant to use. Note that this changes the arguments the forward_operator is called with.\nstopping_criterion=StopAfterIteration(100): a functor indicating that the stopping criterion is fulfilled\nupdate_primal_base=missing: function to update m (identity by default/missing)\nupdate_dual_base=missing: function to update n (identity by default/missing)\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\nvector_transport_method_dual=default_vector_transport_method(N, typeof(n)): a vector transport mathcal T_ to use, see the section on vector transports\n\nOutput\n\nThe obtained approximate minimizer p^*. 
To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ChambollePock/#Manopt.ChambollePock!","page":"Chambolle-Pock","title":"Manopt.ChambollePock!","text":"ChambollePock(M, N, f, p, X, m, n, prox_G, prox_G_dual, adjoint_linear_operator; kwargs...)\nChambollePock!(M, N, f, p, X, m, n, prox_G, prox_G_dual, adjoint_linear_operator; kwargs...)\n\nPerform the Riemannian Chambolle—Pock algorithm.\n\nGiven a cost function mathcal Emathcal M ℝ of the form\n\nmathcal E(p) = F(p) + G( Λ(p) )\n\nwhere Fmathcal M ℝ, Gmathcal N ℝ, and Λmathcal M mathcal N.\n\nThis can be done in-place of p.\n\nInput parameters\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nN::AbstractManifold: a Riemannian manifold mathcal N\np: a point on the manifold mathcal M\nX: a tangent vector at the point p on the manifold mathcal M\nm: a point on the manifold mathcal M\nn: a point on the manifold mathcal N\nadjoint_linearized_operator: the adjoint DΛ^* of the linearized operator DΛ T_mmathcal M T_Λ(m)mathcal N\nprox_F, prox_G_dual: the proximal maps of F and G^ast_n\n\nNote that, depending on the AbstractEvaluationType evaluation, the last three parameters as well as the forward operator Λ and the linearized_forward_operator can be given as allocating functions (Manifold, parameters) -> result or as mutating functions (Manifold, result, parameters) -> result to spare allocations.\n\nBy default, this performs the exact Riemannian Chambolle-Pock algorithm; see the optional parameter DΛ for the linearized variant.\n\nFor more details on the algorithm, see [BHS+21].\n\nKeyword Arguments\n\nacceleration=0.05: acceleration parameter\ndual_stepsize=1/sqrt(8): proximal parameter of the dual prox\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\ninverse_retraction_method_dual=default_inverse_retraction_method(N, typeof(n)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nΛ=missing: the (forward) operator Λ() (required for the :exact variant)\nlinearized_forward_operator=missing: its linearization DΛ() (required for the :linearized variant)\nprimal_stepsize=1/sqrt(8): proximal parameter of the primal prox\nrelaxation=1.: the relaxation parameter γ\nrelax=:primal: whether to relax the primal or the dual variable\nvariant=:exact if Λ is missing, otherwise :linearized: variant to use. Note that this changes the arguments the forward_operator is called with.\nstopping_criterion=StopAfterIteration(100): a functor indicating that the stopping criterion is fulfilled\nupdate_primal_base=missing: function to update m (identity by default/missing)\nupdate_dual_base=missing: function to update n (identity by default/missing)\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\nvector_transport_method_dual=default_vector_transport_method(N, typeof(n)): a vector transport mathcal T_ to use, see the section on vector transports\n\nOutput\n\nThe obtained approximate minimizer p^*. 
To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ChambollePock/#State","page":"Chambolle-Pock","title":"State","text":"","category":"section"},{"location":"solvers/ChambollePock/","page":"Chambolle-Pock","title":"Chambolle-Pock","text":"ChambollePockState","category":"page"},{"location":"solvers/ChambollePock/#Manopt.ChambollePockState","page":"Chambolle-Pock","title":"Manopt.ChambollePockState","text":"ChambollePockState <: AbstractPrimalDualSolverState\n\nStores all options and variables of a linearized or exact Chambolle-Pock algorithm.\n\nFields\n\nacceleration::R: acceleration factor\ndual_stepsize::R: proximal parameter of the dual prox\ninverse_retraction_method::AbstractInverseRetractionMethod: an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\ninverse_retraction_method_dual::AbstractInverseRetractionMethod: an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nm::P: base point on mathcal M\nn::Q: base point on mathcal N\np::P: an initial point p^(0) mathcal M\npbar::P: the relaxed iterate used in the next dual update step (when using :primal relaxation)\nprimal_stepsize::R: proximal parameter of the primal prox\nX::T: an initial tangent vector X^(0) T_p^(0)mathcal M\nXbar::T: the relaxed iterate used in the next primal update step (when using :dual relaxation)\nrelaxation::R: relaxation in the primal relaxation step (to compute pbar)\nrelax::Symbol: which variable to relax (:primal or :dual)\nretraction_method::AbstractRetractionMethod: a retraction operatornameretr to use, see the section on retractions\nstop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled\nvariant: whether to perform an :exact or :linearized Chambolle-Pock\nupdate_primal_base: function (pr, st, k) -> m to update the primal 
base\nupdate_dual_base: function (pr, st, k) -> n to update the dual base\nvector_transport_method::AbstractVectorTransportMethod: a vector transport mathcal T_ to use, see the section on vector transports\nvector_transport_method_dual::AbstractVectorTransportMethod: a vector transport mathcal T_ to use, see the section on vector transports\n\nHere, P is a point type on mathcal M, T its tangent vector type, Q a point type on mathcal N, and R<:Real is a real number type.\n\nFor the last two update functions, an AbstractManoptProblem p, an AbstractManoptSolverState o, and the current iterate k are the arguments. If you activate these to be different from the default identity, you have to provide p.Λ for the algorithm to work (which might be missing in the linearized case).\n\nConstructor\n\nChambollePockState(M::AbstractManifold, N::AbstractManifold;\n kwargs...\n) where {P, Q, T, R <: Real}\n\nKeyword arguments\n\nn=rand(N)\np=rand(M)\nm=rand(M)\nX=zero_vector(M, p)\nacceleration=0.0\ndual_stepsize=1/sqrt(8)\nprimal_stepsize=1/sqrt(8)\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\ninverse_retraction_method_dual=default_inverse_retraction_method(N, typeof(n)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nrelaxation=1.0\nrelax=:primal: relax the primal variable by default\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstopping_criterion=StopAfterIteration(300): a functor indicating that the stopping criterion is fulfilled\nvariant=:exact: run the exact Chambolle-Pock by default\nupdate_primal_base=missing\nupdate_dual_base=missing\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on 
vector transports\nvector_transport_method_dual=default_vector_transport_method(N, typeof(n)): a vector transport mathcal T_ to use, see the section on vector transports\n\nif Manifolds.jl is loaded, N is also a keyword argument and set to TangentBundle(M) by default.\n\n\n\n\n\n","category":"type"},{"location":"solvers/ChambollePock/#Useful-terms","page":"Chambolle-Pock","title":"Useful terms","text":"","category":"section"},{"location":"solvers/ChambollePock/","page":"Chambolle-Pock","title":"Chambolle-Pock","text":"primal_residual\ndual_residual","category":"page"},{"location":"solvers/ChambollePock/#Manopt.primal_residual","page":"Chambolle-Pock","title":"Manopt.primal_residual","text":"primal_residual(p, o, x_old, X_old, n_old)\n\nCompute the primal residual at current iterate k given the necessary values x_k-1 X_k-1, and n_k-1 from the previous iterate.\n\nBigllVert\nfrac1σoperatornameretr^-1_x_kx_k-1 -\nV_x_kgets m_kbigl(DΛ^*(m_k)biglV_n_kgets n_k-1X_k-1 - X_k bigr\nBigrrVert\n\nwhere V_gets is the vector transport used in the ChambollePockState\n\n\n\n\n\n","category":"function"},{"location":"solvers/ChambollePock/#Manopt.dual_residual","page":"Chambolle-Pock","title":"Manopt.dual_residual","text":"dual_residual(p, o, x_old, X_old, n_old)\n\nCompute the dual residual at current iterate k given the necessary values x_k-1 X_k-1, and n_k-1 from the previous iterate. 
The formula is slightly different depending on the o.variant used:\n\nFor the :linearized variant it reads\n\nBigllVert\nfrac1τbigl(\nV_n_kgets n_k-1(X_k-1)\n- X_k\nbigr)\n-\nDΛ(m_k)bigl\nV_m_kgets x_koperatornameretr^-1_x_kx_k-1\nbigr\nBigrrVert\n\nand for the :exact variant\n\nBigllVert\nfrac1τ V_n_kgets n_k-1(X_k-1)\n-\noperatornameretr^-1_n_kbigl(\nΛ(operatornameretr_m_k(V_m_kgets x_koperatornameretr^-1_x_kx_k-1))\nbigr)\nBigrrVert\n\nwhere in both cases V_gets is the vector transport used in the ChambollePockState.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ChambollePock/#Debug","page":"Chambolle-Pock","title":"Debug","text":"","category":"section"},{"location":"solvers/ChambollePock/","page":"Chambolle-Pock","title":"Chambolle-Pock","text":"DebugDualBaseIterate\nDebugDualBaseChange\nDebugPrimalBaseIterate\nDebugPrimalBaseChange\nDebugDualChange\nDebugDualIterate\nDebugDualResidual\nDebugPrimalChange\nDebugPrimalIterate\nDebugPrimalResidual\nDebugPrimalDualResidual","category":"page"},{"location":"solvers/ChambollePock/#Manopt.DebugDualBaseIterate","page":"Chambolle-Pock","title":"Manopt.DebugDualBaseIterate","text":"DebugDualBaseIterate(io::IO=stdout)\n\nPrint the dual base variable by using DebugEntry, see their constructors for detail. This method is furthermore set to display o.n.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ChambollePock/#Manopt.DebugDualBaseChange","page":"Chambolle-Pock","title":"Manopt.DebugDualBaseChange","text":"DebugDualBaseChange(; storage=StoreStateAction([:n]), io::IO=stdout)\n\nPrint the change of the dual base variable by using DebugEntryChange, see their constructors for detail, on o.n.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ChambollePock/#Manopt.DebugPrimalBaseIterate","page":"Chambolle-Pock","title":"Manopt.DebugPrimalBaseIterate","text":"DebugPrimalBaseIterate()\n\nPrint the primal base variable by using DebugEntry, see their constructors for detail. 
This method is furthermore set to display o.m.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ChambollePock/#Manopt.DebugPrimalBaseChange","page":"Chambolle-Pock","title":"Manopt.DebugPrimalBaseChange","text":"DebugPrimalBaseChange(a::StoreStateAction=StoreStateAction([:m]), io::IO=stdout)\n\nPrint the change of the primal base variable by using DebugEntryChange, see their constructors for detail, on o.m.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ChambollePock/#Manopt.DebugDualChange","page":"Chambolle-Pock","title":"Manopt.DebugDualChange","text":"DebugDualChange(opts...)\n\nPrint the change of the dual variable, similar to DebugChange, see their constructors for detail, but with a different calculation of the change, since the dual variable lives in (possibly different) tangent spaces.\n\n\n\n\n\n","category":"type"},{"location":"solvers/ChambollePock/#Manopt.DebugDualIterate","page":"Chambolle-Pock","title":"Manopt.DebugDualIterate","text":"DebugDualIterate(e)\n\nPrint the dual variable by using DebugEntry, see their constructors for detail. This method is furthermore set to display o.X.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ChambollePock/#Manopt.DebugDualResidual","page":"Chambolle-Pock","title":"Manopt.DebugDualResidual","text":"DebugDualResidual <: DebugAction\n\nA Debug action to print the dual residual. 
The constructor accepts a printing function and some (shared) storage, which should at least record :Iterate, :X and :n.\n\nConstructor\n\nDebugDualResidual(; kwargs...)\n\nKeyword arguments\n\nio=stdout: stream to perform the debug to\nformat=\"$prefix%s\": format to print the dual residual, using the prefix by default\nprefix=\"Dual Residual: \": short form to just set the prefix\nstorage (a new StoreStateAction) to store values for the debug.\n\n\n\n\n\n","category":"type"},{"location":"solvers/ChambollePock/#Manopt.DebugPrimalChange","page":"Chambolle-Pock","title":"Manopt.DebugPrimalChange","text":"DebugPrimalChange(opts...)\n\nPrint the change of the primal variable by using DebugChange, see their constructors for detail.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ChambollePock/#Manopt.DebugPrimalIterate","page":"Chambolle-Pock","title":"Manopt.DebugPrimalIterate","text":"DebugPrimalIterate(opts...; kwargs...)\n\nPrint the primal variable by using DebugIterate, see their constructors for detail.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ChambollePock/#Manopt.DebugPrimalResidual","page":"Chambolle-Pock","title":"Manopt.DebugPrimalResidual","text":"DebugPrimalResidual <: DebugAction\n\nA Debug action to print the primal residual. The constructor accepts a printing function and some (shared) storage, which should at least record :Iterate, :X and :n.\n\nConstructor\n\nDebugPrimalResidual(; kwargs...)\n\nKeyword arguments\n\nio=stdout: stream to perform the debug to\nformat=\"$prefix%s\": format to print the primal residual, using the prefix by default\nprefix=\"Primal Residual: \": short form to just set the prefix\nstorage (a new StoreStateAction) to store values for the debug.\n\n\n\n\n\n","category":"type"},{"location":"solvers/ChambollePock/#Manopt.DebugPrimalDualResidual","page":"Chambolle-Pock","title":"Manopt.DebugPrimalDualResidual","text":"DebugPrimalDualResidual <: DebugAction\n\nA Debug action to print the primal dual residual. 
The constructor accepts a printing function and some (shared) storage, which should at least record :Iterate, :X and :n.\n\nConstructor\n\nDebugPrimalDualResidual()\n\nKeyword arguments\n\nio=stdout: stream to perform the debug to\nformat=\"$prefix%s\": format to print the primal dual residual\nprefix=\"PD Residual: \": short form to just set the prefix\nstorage (a new StoreStateAction) to store values for the debug.\n\n\n\n\n\n","category":"type"},{"location":"solvers/ChambollePock/#Record","page":"Chambolle-Pock","title":"Record","text":"","category":"section"},{"location":"solvers/ChambollePock/","page":"Chambolle-Pock","title":"Chambolle-Pock","text":"RecordDualBaseIterate\nRecordDualBaseChange\nRecordDualChange\nRecordDualIterate\nRecordPrimalBaseIterate\nRecordPrimalBaseChange\nRecordPrimalChange\nRecordPrimalIterate","category":"page"},{"location":"solvers/ChambollePock/#Manopt.RecordDualBaseIterate","page":"Chambolle-Pock","title":"Manopt.RecordDualBaseIterate","text":"RecordDualBaseIterate(n)\n\nCreate a RecordAction that records the dual base point, a RecordEntry of o.n.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ChambollePock/#Manopt.RecordDualBaseChange","page":"Chambolle-Pock","title":"Manopt.RecordDualBaseChange","text":"RecordDualBaseChange(e)\n\nCreate a RecordAction that records the dual base point change, a RecordEntryChange of o.n with distance to the last value to store a value.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ChambollePock/#Manopt.RecordDualChange","page":"Chambolle-Pock","title":"Manopt.RecordDualChange","text":"RecordDualChange()\n\nCreate the action either with a given (shared) storage, which can be set to the values Tuple, if that is provided.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ChambollePock/#Manopt.RecordDualIterate","page":"Chambolle-Pock","title":"Manopt.RecordDualIterate","text":"RecordDualIterate(X)\n\nCreate a RecordAction that records the 
dual iterate, a RecordEntry of o.X.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ChambollePock/#Manopt.RecordPrimalBaseIterate","page":"Chambolle-Pock","title":"Manopt.RecordPrimalBaseIterate","text":"RecordPrimalBaseIterate(x)\n\nCreate a RecordAction that records the primal base point, a RecordEntry of o.m.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ChambollePock/#Manopt.RecordPrimalBaseChange","page":"Chambolle-Pock","title":"Manopt.RecordPrimalBaseChange","text":"RecordPrimalBaseChange()\n\nCreate a RecordAction that records the primal base point change, a RecordEntryChange of o.m with distance to the last value to store a value.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ChambollePock/#Manopt.RecordPrimalChange","page":"Chambolle-Pock","title":"Manopt.RecordPrimalChange","text":"RecordPrimalChange(a)\n\nCreate a RecordAction that records the primal value change, RecordChange, to record the change of o.x.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ChambollePock/#Manopt.RecordPrimalIterate","page":"Chambolle-Pock","title":"Manopt.RecordPrimalIterate","text":"RecordPrimalIterate(x)\n\nCreate a RecordAction that records the primal iterate, a RecordIterate of o.x.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ChambollePock/#Internals","page":"Chambolle-Pock","title":"Internals","text":"","category":"section"},{"location":"solvers/ChambollePock/","page":"Chambolle-Pock","title":"Chambolle-Pock","text":"Manopt.update_prox_parameters!","category":"page"},{"location":"solvers/ChambollePock/#Manopt.update_prox_parameters!","page":"Chambolle-Pock","title":"Manopt.update_prox_parameters!","text":"update_prox_parameters!(o)\n\nupdate the prox parameters as described in Algorithm 2 of [CP11],\n\nθ_n = frac1sqrt1+2γτ_n\nτ_n+1 = θ_nτ_n\nσ_n+1 = fracσ_nθ_n\n\n\n\n\n\n","category":"function"},{"location":"solvers/ChambollePock/#sec-cp-technical-details","page":"Chambolle-Pock","title":"Technical 
details","text":"","category":"section"},{"location":"solvers/ChambollePock/","page":"Chambolle-Pock","title":"Chambolle-Pock","text":"The ChambollePock solver requires the following functions of a manifold to be available for both the manifold mathcal M and mathcal N","category":"page"},{"location":"solvers/ChambollePock/","page":"Chambolle-Pock","title":"Chambolle-Pock","text":"A retract!(M, q, p, X); it is recommended to set the default_retraction_method to a favourite retraction. If this default is set, a retraction_method= or retraction_method_dual= (for mathcal N) does not have to be specified.\nAn inverse_retract!(M, X, p, q); it is recommended to set the default_inverse_retraction_method to a favourite inverse retraction. If this default is set, an inverse_retraction_method= or inverse_retraction_method_dual= (for mathcal N) does not have to be specified.\nA vector_transport_to!(M, Y, p, X, q); it is recommended to set the default_vector_transport_method to a favourite vector transport. If this default is set, a vector_transport_method= or vector_transport_method_dual= (for mathcal N) does not have to be specified.\nA copyto!(M, q, p) and copy(M,p) for points.","category":"page"},{"location":"solvers/ChambollePock/#Literature","page":"Chambolle-Pock","title":"Literature","text":"","category":"section"},{"location":"solvers/ChambollePock/","page":"Chambolle-Pock","title":"Chambolle-Pock","text":"R. Bergmann, R. Herzog, M. Silva Louzeiro, D. Tenbrinck and J. Vidal-Núñez. Fenchel duality theory and a primal-dual algorithm on Riemannian manifolds. Foundations of Computational Mathematics 21, 1465–1504 (2021), arXiv:1908.02022.\n\n\n\nA. Chambolle and T. Pock. A first-order primal-dual algorithm for convex problems with applications to imaging. 
Journal of Mathematical Imaging and Vision 40, 120–145 (2011).\n\n\n\n","category":"page"},{"location":"solvers/conjugate_residual/#Conjugate-residual-solver-in-a-Tangent-space","page":"Conjugate Residual","title":"Conjugate residual solver in a Tangent space","text":"","category":"section"},{"location":"solvers/conjugate_residual/","page":"Conjugate Residual","title":"Conjugate Residual","text":"CurrentModule = Manopt","category":"page"},{"location":"solvers/conjugate_residual/","page":"Conjugate Residual","title":"Conjugate Residual","text":"conjugate_residual\nconjugate_residual!","category":"page"},{"location":"solvers/conjugate_residual/#Manopt.conjugate_residual","page":"Conjugate Residual","title":"Manopt.conjugate_residual","text":"conjugate_residual(TpM::TangentSpace, A, b, X=zero_vector(TpM))\nconjugate_residual(TpM::TangentSpace, slso::SymmetricLinearSystemObjective, X=zero_vector(TpM))\nconjugate_residual!(TpM::TangentSpace, A, b, X)\nconjugate_residual!(TpM::TangentSpace, slso::SymmetricLinearSystemObjective, X)\n\nCompute the solution of mathcal A(p)X + b(p) = 0_p, where\n\nmathcal A is a linear, symmetric operator on T_pmathcal M\nb is a vector field on the manifold\nX T_pmathcal M is a tangent vector\n0_p is the zero vector T_pmathcal M.\n\nThis implementation follows Algorithm 3 in [LY24] and is initialised with X^(0) as the zero vector and\n\nthe initial residual r^(0) = -b(p) - mathcal A(p)X^(0)\nthe initial conjugate direction d^(0) = r^(0)\ninitialize Y^(0) = mathcal A(p)X^(0)\n\nIt then performs the following steps at iteration k=0 until the stopping_criterion is fulfilled.\n\ncompute a step size α_k = displaystylefrac r^(k) mathcal A(p)r^(k) _p mathcal A(p)d^(k) mathcal A(p)d^(k) _p\ndo a step X^(k+1) = X^(k) + α_kd^(k)\nupdate the residual r^(k+1) = r^(k) + α_k Y^(k)\ncompute Z = mathcal A(p)r^(k+1)\nUpdate the conjugate coefficient β_k = displaystylefrac r^(k+1) mathcal A(p)r^(k+1) _p r^(k) mathcal A(p)r^(k) _p\nUpdate the conjugate direction 
d^(k+1) = r^(k+1) + β_kd^(k)\nUpdate Y^(k+1) = -Z + β_k Y^(k)\n\nNote that the right hand side of Step 7 is the same as evaluating mathcal Ad^(k+1), but avoids the actual evaluation.\n\nInput\n\nTpM the TangentSpace as the domain\nA a symmetric linear operator on the tangent space (M, p, X) -> Y\nb a vector field on the tangent space (M, p) -> X\nX the initial tangent vector\n\nKeyword arguments\n\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\nstopping_criterion=StopAfterIteration(manifold_dimension(M))|StopWhenRelativeResidualLess(c,1e-8), where c is lVert b rVert_: a functor indicating that the stopping criterion is fulfilled\n\nOutput\n\nThe obtained approximate minimizer p^*. 
To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/conjugate_residual/#Manopt.conjugate_residual!","page":"Conjugate Residual","title":"Manopt.conjugate_residual!","text":"conjugate_residual(TpM::TangentSpace, A, b, X=zero_vector(TpM))\nconjugate_residual(TpM::TangentSpace, slso::SymmetricLinearSystemObjective, X=zero_vector(TpM))\nconjugate_residual!(TpM::TangentSpace, A, b, X)\nconjugate_residual!(TpM::TangentSpace, slso::SymmetricLinearSystemObjective, X)\n\nCompute the solution of mathcal A(p)X + b(p) = 0_p, where\n\nmathcal A is a linear, symmetric operator on T_pmathcal M\nb is a vector field on the manifold\nX T_pmathcal M is a tangent vector\n0_p is the zero vector T_pmathcal M.\n\nThis implementation follows Algorithm 3 in [LY24] and is initialised with X^(0) as the zero vector and\n\nthe initial residual r^(0) = -b(p) - mathcal A(p)X^(0)\nthe initial conjugate direction d^(0) = r^(0)\ninitialize Y^(0) = mathcal A(p)X^(0)\n\nIt then performs the following steps at iteration k=0 until the stopping_criterion is fulfilled.\n\ncompute a step size α_k = displaystylefrac r^(k) mathcal A(p)r^(k) _p mathcal A(p)d^(k) mathcal A(p)d^(k) _p\ndo a step X^(k+1) = X^(k) + α_kd^(k)\nupdate the residual r^(k+1) = r^(k) + α_k Y^(k)\ncompute Z = mathcal A(p)r^(k+1)\nUpdate the conjugate coefficient β_k = displaystylefrac r^(k+1) mathcal A(p)r^(k+1) _p r^(k) mathcal A(p)r^(k) _p\nUpdate the conjugate direction d^(k+1) = r^(k+1) + β_kd^(k)\nUpdate Y^(k+1) = -Z + β_k Y^(k)\n\nNote that the right hand side of Step 7 is the same as evaluating mathcal Ad^(k+1), but avoids the actual evaluation.\n\nInput\n\nTpM the TangentSpace as the domain\nA a symmetric linear operator on the tangent space (M, p, X) -> Y\nb a vector field on the tangent space (M, p) -> X\nX the initial tangent vector\n\nKeyword arguments\n\nevaluation=AllocatingEvaluation(): specify whether the 
functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\nstopping_criterion=StopAfterIteration(manifold_dimension(M))|StopWhenRelativeResidualLess(c,1e-8), where c is lVert b rVert_: a functor indicating that the stopping criterion is fulfilled\n\nOutput\n\nThe obtained approximate minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/conjugate_residual/#State","page":"Conjugate Residual","title":"State","text":"","category":"section"},{"location":"solvers/conjugate_residual/","page":"Conjugate Residual","title":"Conjugate Residual","text":"ConjugateResidualState","category":"page"},{"location":"solvers/conjugate_residual/#Manopt.ConjugateResidualState","page":"Conjugate Residual","title":"Manopt.ConjugateResidualState","text":"ConjugateResidualState{T,R,TStop<:StoppingCriterion} <: AbstractManoptSolverState\n\nA state for the conjugate_residual solver.\n\nFields\n\nX::T: the iterate\nr::T: the residual r = -b(p) - mathcal A(p)X\nd::T: the conjugate direction\nAr::T, Ad::T: storages for mathcal A(p)r, mathcal A(p)d\nrAr::R: internal field for storing r mathcal A(p)r \nα::R: a step length\nβ::R: the conjugate coefficient\nstop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled\n\nConstructor\n\nConjugateResidualState(TpM::TangentSpace,slso::SymmetricLinearSystemObjective; kwargs...)\n\nInitialise the state with default values.\n\nKeyword arguments\n\nr=-get_gradient(TpM, slso, X)\nd=copy(TpM, r)\nAr=get_hessian(TpM, slso, X, r)\nAd=copy(TpM, 
Ar)\nα::R=0.0\nβ::R=0.0\nstopping_criterion=StopAfterIteration(manifold_dimension(M))|StopWhenGradientNormLess(1e-8): a functor indicating that the stopping criterion is fulfilled\nX=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal M\n\nSee also\n\nconjugate_residual\n\n\n\n\n\n","category":"type"},{"location":"solvers/conjugate_residual/#Objective","page":"Conjugate Residual","title":"Objective","text":"","category":"section"},{"location":"solvers/conjugate_residual/","page":"Conjugate Residual","title":"Conjugate Residual","text":"SymmetricLinearSystemObjective","category":"page"},{"location":"solvers/conjugate_residual/#Manopt.SymmetricLinearSystemObjective","page":"Conjugate Residual","title":"Manopt.SymmetricLinearSystemObjective","text":"SymmetricLinearSystemObjective{E<:AbstractEvaluationType,TA,T} <: AbstractManifoldObjective{E}\n\nModel the objective\n\nf(X) = frac12 lVert mathcal AX + b rVert_p^2qquad X T_pmathcal M\n\ndefined on the tangent space T_pmathcal M at p on the manifold mathcal M.\n\nIn other words, this is an objective to solve mathcal A(p)X = -b(p) for some linear symmetric operator and a vector function. Note the minus on the right hand side, which makes this objective especially tailored for (iteratively) solving Newton-like equations.\n\nFields\n\nA!!: a symmetric, linear operator on the tangent space\nb!!: a gradient function\n\nwhere A!! can work as an allocating operator (M, p, X) -> Y or an in-place one (M, Y, p, X) -> Y, and similarly b!! can either be a function (M, p) -> X or (M, X, p) -> X. 
The first variants allocate for the result, the second variants work in-place.\n\nConstructor\n\nSymmetricLinearSystemObjective(A, b; evaluation=AllocatingEvaluation())\n\nGenerate the objective specifying whether the two parts work allocating or in-place.\n\n\n\n\n\n","category":"type"},{"location":"solvers/conjugate_residual/#Additional-stopping-criterion","page":"Conjugate Residual","title":"Additional stopping criterion","text":"","category":"section"},{"location":"solvers/conjugate_residual/","page":"Conjugate Residual","title":"Conjugate Residual","text":"StopWhenRelativeResidualLess","category":"page"},{"location":"solvers/conjugate_residual/#Manopt.StopWhenRelativeResidualLess","page":"Conjugate Residual","title":"Manopt.StopWhenRelativeResidualLess","text":"StopWhenRelativeResidualLess <: StoppingCriterion\n\nStop when the relative residual in the conjugate_residual is below a certain threshold, i.e.\n\ndisplaystylefraclVert r^(k) rVert_c ε\n\nwhere c = lVert b rVert_ of the initial vector from the vector field in mathcal A(p)X + b(p) = 0_p, from the conjugate_residual\n\nFields\n\nat_iteration::Int: an integer indicating at which iteration the stopping criterion last indicated to stop, which might also be before the solver started (0). 
Any negative value indicates that this was not yet the case;\nc: the initial norm\nε: the threshold\nnorm_rk: the last computed norm of the residual\n\nConstructor\n\nStopWhenRelativeResidualLess(c, ε; norm_r = 2*c*ε)\n\nInitialise the stopping criterion.\n\nnote: Note\nThe initial norm of the vector field c = lVert b rVert_ that is stored internally is updated on initialisation, that is, if this stopping criterion is called with k<=0.\n\n\n\n\n\n","category":"type"},{"location":"solvers/conjugate_residual/#Internal-functions","page":"Conjugate Residual","title":"Internal functions","text":"","category":"section"},{"location":"solvers/conjugate_residual/","page":"Conjugate Residual","title":"Conjugate Residual","text":"Manopt.get_b","category":"page"},{"location":"solvers/conjugate_residual/#Manopt.get_b","page":"Conjugate Residual","title":"Manopt.get_b","text":"get_b(TpM::TangentSpace, slso::SymmetricLinearSystemObjective)\n\nevaluate the stored value for computing the right hand side b in mathcal A(p)X = -b(p).\n\n\n\n\n\n","category":"function"},{"location":"solvers/conjugate_residual/#Literature","page":"Conjugate Residual","title":"Literature","text":"","category":"section"},{"location":"solvers/conjugate_residual/","page":"Conjugate Residual","title":"Conjugate Residual","text":"Z. Lai and A. Yoshise. Riemannian Interior Point Methods for Constrained Optimization on Manifolds. 
Journal of Optimization Theory and Applications 201, 433–469 (2024), arXiv:2203.09762.\n\n\n\n","category":"page"},{"location":"tutorials/EmbeddingObjectives/#How-to-define-the-cost-in-the-embedding","page":"Define objectives in the embedding","title":"How to define the cost in the embedding","text":"","category":"section"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"Ronny Bergmann","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"Specifying a cost function f mathcal M ℝ on a manifold is usually the model one starts with. Specifying its gradient operatornamegrad f mathcal M Tmathcal M, or more precisely operatornamegradf(p) T_pmathcal M, and eventually a Hessian operatornameHess f T_pmathcal M T_pmathcal M are then necessary to perform optimization. Since these might be challenging to compute, especially for users whose main area is not manifolds and differential geometry, easier-to-use methods are welcome.","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"This tutorial discusses how to specify f in the embedding as tilde f, maybe only locally around the manifold, and use the Euclidean gradient tilde f and Hessian ^2 tilde f within Manopt.jl.","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"For the theoretical background see convert a Euclidean to a Riemannian gradient, or Section 4.7 of [Bou23] for the gradient part or Section 5.11 as well as [Ngu23] for the background on converting Hessians.","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives 
in the embedding","text":"Here we use the Examples 9.40 and 9.49 of [Bou23] and compare the different ways one can call the solver, depending on which gradient and/or Hessian one provides.","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"using Manifolds, Manopt, ManifoldDiff\nusing LinearAlgebra, Random, Colors, Plots\nRandom.seed!(123)","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"We consider the cost function on the Grassmann manifold given by","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"n = 5\nk = 2\nM = Grassmann(5,2)\nA = Symmetric(rand(n,n));","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"f(M, p) = 1 / 2 * tr(p' * A * p)","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"Note that this implementation is already a valid continuation of f into the (lifted) embedding of the Grassmann manifold. 
In the implementation we can use f for both the Euclidean tilde f and the Grassmann case f.","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"Its Euclidean gradient nabla f and Hessian nabla^2f are easy to compute as","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"∇f(M, p) = A * p\n∇²f(M,p,X) = A*X","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"On the other hand, from the aforementioned Example 9.49 we can also state the Riemannian gradient and Hessian for comparison as","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"grad_f(M, p) = A * p - p * (p' * A * p)\nHess_f(M, p, X) = A * X - p * p' * A * X - X * p' * A * p","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"We can verify that these are correct, at least numerically, by calling check_gradient","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"check_gradient(M, f, grad_f; plot=true)","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"(Image: )","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"and the check_Hessian, which requires a bit more tolerance in its linearity 
verification","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"check_Hessian(M, f, grad_f, Hess_f; plot=true, error=:error, atol=1e-15)","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"(Image: )","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"While they look reasonable here and were already derived, for the general case this derivation might be more complicated.","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"Luckily there exist two functions in ManifoldDiff.jl that are implemented for several manifolds from Manifolds.jl, namely riemannian_gradient(M, p, eG) that converts a Euclidean gradient eG=nabla tilde f(p) into the Riemannian one operatornamegrad f(p) and riemannian_Hessian(M, p, eG, eH, X) which converts the Euclidean Hessian eH=nabla^2 tilde f(p)X into operatornameHess f(p)X, where we also require the Euclidean gradient eG=nabla tilde f(p).","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"So we can define","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"grad2_f(M, p) = riemannian_gradient(M, p, ∇f(get_embedding(M), embed(M, p)))","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"where, formally, we call embed(M,p) before passing p to the Euclidean gradient, though here 
(for the Grassmann manifold with Stiefel representation) the embedding function is the identity.","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"Similarly for the Hessian, where in our example the embeddings of both the points and tangent vectors are the identity.","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"function Hess2_f(M, p, X)\n return riemannian_Hessian(\n M,\n p,\n ∇f(get_embedding(M), embed(M, p)),\n ∇²f(get_embedding(M), embed(M, p), embed(M, p, X)),\n X\n )\nend","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"And we can again verify these numerically,","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"check_gradient(M, f, grad2_f; plot=true)","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"(Image: )","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"and","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"check_Hessian(M, f, grad2_f, Hess2_f; plot=true, error=:error, atol=1e-14)","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"(Image: )","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define 
objectives in the embedding","text":"which yields the same result, but we see that the Euclidean conversion might be a bit less stable.","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"Now, if we want to use these in optimization, we can call a solver with these two functions, for example","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"p0 = [1.0 0.0; 0.0 1.0; 0.0 0.0; 0.0 0.0; 0.0 0.0]\nr1 = adaptive_regularization_with_cubics(\n M,\n f,\n grad_f,\n Hess_f,\n p0;\n debug=[:Iteration, :Cost, \"\\n\"],\n return_objective=true,\n return_state=true,\n)\nq1 = get_solver_result(r1)\nr1","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"Initial f(x): 0.666814\n# 1 f(x): 0.329582\n# 2 f(x): -0.251913\n# 3 f(x): -0.451908\n# 4 f(x): -0.604753\n# 5 f(x): -0.608791\n# 6 f(x): -0.608797\n# 7 f(x): -0.608797\n\n# Solver state for `Manopt.jl`s Adaptive Regularization with Cubics (ARC)\nAfter 7 iterations\n\n## Parameters\n* η1 | η2 : 0.1 | 0.9\n* γ1 | γ2 : 0.1 | 2.0\n* σ (σmin) : 0.0004082482904638632 (1.0e-10)\n* ρ (ρ_regularization) : 1.0002163851951777 (1000.0)\n* retraction method : ExponentialRetraction()\n* sub solver state :\n | # Solver state for `Manopt.jl`s Lanczos Iteration\n | After 6 iterations\n | \n | ## Parameters\n | * σ : 0.0040824829046386315\n | * # of Lanczos vectors used : 6\n | \n | ## Stopping criteria\n | (a) For the Lanczos Iteration\n | Stop When _one_ of the following are fulfilled:\n | Max Iteration 6: reached\n | First order progress with θ=0.5: not reached\n | Overall: reached\n | (b) For the Newton sub solver\n | Max Iteration 200: not reached\n | This indicates convergence: No\n\n## Stopping criterion\n\nStop When _one_ of 
the following are fulfilled:\n Max Iteration 40: not reached\n |grad f| < 1.0e-9: reached\n All Lanczos vectors (5) used: not reached\nOverall: reached\nThis indicates convergence: Yes\n\n## Debug\n :Iteration = [ (:Iteration, \"# %-6d\"), (:Cost, \"f(x): %f\"), \"\\n\" ]","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"but if you choose to go for the conversions, thinking of the embedding and defining two new functions might be tedious. There is a shortcut for these, which performs the change internally, when necessary, by specifying objective_type=:Euclidean.","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"r2 = adaptive_regularization_with_cubics(\n M,\n f,\n ∇f,\n ∇²f,\n p0;\n # The one-line difference to specify our grad/Hess are Euclidean:\n objective_type=:Euclidean,\n debug=[:Iteration, :Cost, \"\\n\"],\n return_objective=true,\n return_state=true,\n)\nq2 = get_solver_result(r2)\nr2","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"Initial f(x): 0.666814\n# 1 f(x): 0.329582\n# 2 f(x): -0.251913\n# 3 f(x): -0.451908\n# 4 f(x): -0.604753\n# 5 f(x): -0.608791\n# 6 f(x): -0.608797\n# 7 f(x): -0.608797\n\n# Solver state for `Manopt.jl`s Adaptive Regularization with Cubics (ARC)\nAfter 7 iterations\n\n## Parameters\n* η1 | η2 : 0.1 | 0.9\n* γ1 | γ2 : 0.1 | 2.0\n* σ (σmin) : 0.0004082482904638632 (1.0e-10)\n* ρ (ρ_regularization) : 1.000409105075989 (1000.0)\n* retraction method : ExponentialRetraction()\n* sub solver state :\n | # Solver state for `Manopt.jl`s Lanczos Iteration\n | After 6 iterations\n | \n | ## Parameters\n | * σ : 0.0040824829046386315\n | * # of Lanczos vectors used : 6\n | \n | ## Stopping criteria\n | (a) 
For the Lanczos Iteration\n | Stop When _one_ of the following are fulfilled:\n | Max Iteration 6: reached\n | First order progress with θ=0.5: not reached\n | Overall: reached\n | (b) For the Newton sub solver\n | Max Iteration 200: not reached\n | This indicates convergence: No\n\n## Stopping criterion\n\nStop When _one_ of the following are fulfilled:\n Max Iteration 40: not reached\n |grad f| < 1.0e-9: reached\n All Lanczos vectors (5) used: not reached\nOverall: reached\nThis indicates convergence: Yes\n\n## Debug\n :Iteration = [ (:Iteration, \"# %-6d\"), (:Cost, \"f(x): %f\"), \"\\n\" ]","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"which returns the same result, see","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"distance(M, q1, q2)","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"5.599906634890012e-16","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"This conversion also works for the gradients of constraints, and is passed down to subsolvers by default when these are created using the Euclidean objective f, nabla f and nabla^2 f.","category":"page"},{"location":"tutorials/EmbeddingObjectives/#Summary","page":"Define objectives in the embedding","title":"Summary","text":"","category":"section"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"If you have the Euclidean gradient (or Hessian) available for a solver call, all you need to provide is objective_type=:Euclidean to convert the objective to a Riemannian 
one.","category":"page"},{"location":"tutorials/EmbeddingObjectives/#Literature","page":"Define objectives in the embedding","title":"Literature","text":"","category":"section"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"N. Boumal. An Introduction to Optimization on Smooth Manifolds. First Edition (Cambridge University Press, 2023).\n\n\n\nD. Nguyen. Operator-Valued Formulas for Riemannian Gradient and Hessian and Families of Tractable Metrics in Riemannian Optimization. Journal of Optimization Theory and Applications 198, 135–164 (2023), arXiv:2009.10159.\n\n\n\n","category":"page"},{"location":"tutorials/EmbeddingObjectives/#Technical-details","page":"Define objectives in the embedding","title":"Technical details","text":"","category":"section"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"This tutorial is cached. 
It was last run on the following package versions.","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"using Pkg\nPkg.status()","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"Status `~/work/Manopt.jl/Manopt.jl/tutorials/Project.toml`\n [6e4b80f9] BenchmarkTools v1.5.0\n⌅ [5ae59095] Colors v0.12.11\n [31c24e10] Distributions v0.25.113\n [26cc04aa] FiniteDifferences v0.12.32\n [7073ff75] IJulia v1.26.0\n [8ac3fa9e] LRUCache v1.6.1\n [af67fdf4] ManifoldDiff v0.3.13\n [1cead3c2] Manifolds v0.10.7\n [3362f125] ManifoldsBase v0.15.22\n [0fc0a36d] Manopt v0.5.3 `~/work/Manopt.jl/Manopt.jl`\n [91a5bcdd] Plots v1.40.9\n [731186ca] RecursiveArrayTools v3.27.4\nInfo Packages marked with ⌅ have new versions available but compatibility constraints restrict them from upgrading. To see why use `status --outdated`","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"using Dates\nnow()","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"2024-11-21T20:37:01.720","category":"page"},{"location":"solvers/alternating_gradient_descent/#solver-alternating-gradient-descent","page":"Alternating Gradient Descent","title":"Alternating gradient descent","text":"","category":"section"},{"location":"solvers/alternating_gradient_descent/","page":"Alternating Gradient Descent","title":"Alternating Gradient Descent","text":"CurrentModule = Manopt","category":"page"},{"location":"solvers/alternating_gradient_descent/","page":"Alternating Gradient Descent","title":"Alternating Gradient 
Descent","text":"alternating_gradient_descent\nalternating_gradient_descent!","category":"page"},{"location":"solvers/alternating_gradient_descent/#Manopt.alternating_gradient_descent","page":"Alternating Gradient Descent","title":"Manopt.alternating_gradient_descent","text":"alternating_gradient_descent(M::ProductManifold, f, grad_f, p=rand(M))\nalternating_gradient_descent(M::ProductManifold, ago::ManifoldAlternatingGradientObjective, p)\nalternating_gradient_descent!(M::ProductManifold, f, grad_f, p)\nalternating_gradient_descent!(M::ProductManifold, ago::ManifoldAlternatingGradientObjective, p)\n\nperform an alternating gradient descent. This can be done in-place of the start point p.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\ngrad_f: a gradient, which can take one of two forms:\nit is a single function returning an ArrayPartition from RecursiveArrayTools.jl, or\nit is a vector of functions, each returning one component of the whole gradient\np: a point on the manifold mathcal M\n\nKeyword arguments\n\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). 
Since usually the first argument is the manifold, the modified argument is the second.\nevaluation_order=:Linear: whether to use a randomly permuted sequence (:FixedRandom), a per-cycle permuted sequence (:Random), or the default :Linear one.\ninner_iterations=5: how many gradient steps to take in a component before alternating to the next\nstopping_criterion=StopAfterIteration(1000): a functor indicating that the stopping criterion is fulfilled\nstepsize=ArmijoLinesearch(): a functor inheriting from Stepsize to determine a step size\norder=[1:n]: the initial permutation, where n is the number of gradients in grad_f.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\n\nOutput\n\nusually the obtained (approximate) minimizer, see get_solver_return for details\n\nnote: Note\nThe input of each of the (component) gradients is still the whole vector X; all components other than the i-th are assumed to be fixed, and only the i-th component's gradient is computed / returned.\n\n\n\n\n\n","category":"function"},{"location":"solvers/alternating_gradient_descent/#Manopt.alternating_gradient_descent!","page":"Alternating Gradient Descent","title":"Manopt.alternating_gradient_descent!","text":"alternating_gradient_descent(M::ProductManifold, f, grad_f, p=rand(M))\nalternating_gradient_descent(M::ProductManifold, ago::ManifoldAlternatingGradientObjective, p)\nalternating_gradient_descent!(M::ProductManifold, f, grad_f, p)\nalternating_gradient_descent!(M::ProductManifold, ago::ManifoldAlternatingGradientObjective, p)\n\nperform an alternating gradient descent. 
This can be done in-place of the start point p.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\ngrad_f: a gradient, which can take one of two forms:\nit is a single function returning an ArrayPartition from RecursiveArrayTools.jl, or\nit is a vector of functions, each returning one component of the whole gradient\np: a point on the manifold mathcal M\n\nKeyword arguments\n\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\nevaluation_order=:Linear: whether to use a randomly permuted sequence (:FixedRandom), a per-cycle permuted sequence (:Random), or the default :Linear one.\ninner_iterations=5: how many gradient steps to take in a component before alternating to the next\nstopping_criterion=StopAfterIteration(1000): a functor indicating that the stopping criterion is fulfilled\nstepsize=ArmijoLinesearch(): a functor inheriting from Stepsize to determine a step size\norder=[1:n]: the initial permutation, where n is the number of gradients in grad_f.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\n\nOutput\n\nusually the obtained (approximate) minimizer, see get_solver_return for details\n\nnote: Note\nThe input of each of the (component) gradients is still the whole vector X; all components other than the i-th are assumed to be fixed, and only the i-th component's gradient is computed / returned.\n\n\n\n\n\n","category":"function"},{"location":"solvers/alternating_gradient_descent/#State","page":"Alternating Gradient 
Descent","title":"State","text":"","category":"section"},{"location":"solvers/alternating_gradient_descent/","page":"Alternating Gradient Descent","title":"Alternating Gradient Descent","text":"AlternatingGradientDescentState","category":"page"},{"location":"solvers/alternating_gradient_descent/#Manopt.AlternatingGradientDescentState","page":"Alternating Gradient Descent","title":"Manopt.AlternatingGradientDescentState","text":"AlternatingGradientDescentState <: AbstractGradientDescentSolverState\n\nStore the fields for an alternating gradient descent algorithm, see also alternating_gradient_descent.\n\nFields\n\ndirection::DirectionUpdateRule\nevaluation_order::Symbol: whether to use a randomly permuted sequence (:FixedRandom), a per cycle newly permuted sequence (:Random) or the default :Linear evaluation order.\ninner_iterations: how many gradient steps to take in a component before alternating to the next\norder: the current permutation\nretraction_method::AbstractRetractionMethod: a retraction operatornameretr to use, see the section on retractions\nstepsize::Stepsize: a functor inheriting from Stepsize to determine a step size\nstop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled\np::P: a point on the manifold mathcal M storing the current iterate\nX::T: a tangent vector at the point p on the manifold mathcal M storing the gradient at the current iterate\nk, i: internal counters for the outer and inner iterations, respectively.\n\nConstructors\n\nAlternatingGradientDescentState(M::AbstractManifold; kwargs...)\n\nKeyword arguments\n\ninner_iterations=5\np=rand(M): a point on the manifold mathcal M\norder_type::Symbol=:Linear\norder::Vector{<:Int}=Int[]\nstopping_criterion=StopAfterIteration(1000): a functor indicating that the stopping criterion is fulfilled\nstepsize=default_stepsize(M, AlternatingGradientDescentState): a functor inheriting from Stepsize to determine a step size\nX=zero_vector(M, p): a tangent vector at the 
point p on the manifold mathcal M\n\nGenerate the options for point p, where inner_iterations, order_type, order, retraction_method, stopping_criterion, and stepsize are keyword arguments.\n\n\n\n\n\n","category":"type"},{"location":"solvers/alternating_gradient_descent/","page":"Alternating Gradient Descent","title":"Alternating Gradient Descent","text":"Additionally, the options share a DirectionUpdateRule, which chooses the current component, so they can be decorated further; the innermost one should always be the following one though.","category":"page"},{"location":"solvers/alternating_gradient_descent/","page":"Alternating Gradient Descent","title":"Alternating Gradient Descent","text":"AlternatingGradient\nManopt.AlternatingGradientRule","category":"page"},{"location":"solvers/alternating_gradient_descent/#Manopt.AlternatingGradient","page":"Alternating Gradient Descent","title":"Manopt.AlternatingGradient","text":"AlternatingGradient(; kwargs...)\nAlternatingGradient(M::AbstractManifold; kwargs...)\n\nSpecify that a gradient based method should only update parts of the gradient in order to do an alternating gradient descent.\n\nKeyword arguments\n\ninitial_gradient=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal M\np=rand(M): a point on the manifold mathcal M to specify the initial value\n\ninfo: Info\nThis function generates a ManifoldDefaultsFactory for AlternatingGradientRule. 
For default values that depend on the manifold, this factory postpones the construction until the manifold from, for example, a corresponding AbstractManoptSolverState is available.\n\n\n\n\n\n","category":"function"},{"location":"solvers/alternating_gradient_descent/#Manopt.AlternatingGradientRule","page":"Alternating Gradient Descent","title":"Manopt.AlternatingGradientRule","text":"AlternatingGradientRule <: AbstractGradientGroupDirectionRule\n\nCreate a functor (problem, state, k) -> (s, X) to evaluate the alternating gradient, that is, one alternating between the components of the gradient, with a field for partial evaluation of the gradient in-place.\n\nFields\n\nX::T: a tangent vector at the point p on the manifold mathcal M\n\nConstructor\n\nAlternatingGradientRule(M::AbstractManifold; p=rand(M), X=zero_vector(M, p))\n\nInitialize the alternating gradient processor with tangent vector type of X, where both M and p are just help variables.\n\nSee also\n\nalternating_gradient_descent, [AlternatingGradient](@ref)\n\n\n\n\n\n","category":"type"},{"location":"solvers/alternating_gradient_descent/","page":"Alternating Gradient Descent","title":"Alternating Gradient Descent","text":"which internally uses","category":"page"},{"location":"solvers/alternating_gradient_descent/#sec-agd-technical-details","page":"Alternating Gradient Descent","title":"Technical details","text":"","category":"section"},{"location":"solvers/alternating_gradient_descent/","page":"Alternating Gradient Descent","title":"Alternating Gradient Descent","text":"The alternating_gradient_descent solver requires the following functions of a manifold to be available","category":"page"},{"location":"solvers/alternating_gradient_descent/","page":"Alternating Gradient Descent","title":"Alternating Gradient Descent","text":"The problem has to be phrased on a ProductManifold, to be able to","category":"page"},{"location":"solvers/alternating_gradient_descent/","page":"Alternating Gradient 
Descent","title":"Alternating Gradient Descent","text":"alternate between parts of the input.","category":"page"},{"location":"solvers/alternating_gradient_descent/","page":"Alternating Gradient Descent","title":"Alternating Gradient Descent","text":"A retract!(M, q, p, X); it is recommended to set the default_retraction_method to a favourite retraction. If this default is set, a retraction_method= does not have to be specified.\nBy default alternating gradient descent uses ArmijoLinesearch which requires max_stepsize(M) to be set and an implementation of inner(M, p, X).\nBy default the tangent vector storing the gradient is initialized calling zero_vector(M,p).","category":"page"},{"location":"solvers/truncated_conjugate_gradient_descent/#tCG","page":"Steihaug-Toint TCG Method","title":"Steihaug-Toint truncated conjugate gradient method","text":"","category":"section"},{"location":"solvers/truncated_conjugate_gradient_descent/","page":"Steihaug-Toint TCG Method","title":"Steihaug-Toint TCG Method","text":"Solve the constraint optimization problem on the tangent space","category":"page"},{"location":"solvers/truncated_conjugate_gradient_descent/","page":"Steihaug-Toint TCG Method","title":"Steihaug-Toint TCG Method","text":"beginalign*\noperatorname*argmin_Y T_pmathcalM m_p(Y) = f(p) +\noperatornamegradf(p) Y_p + frac12 mathcalH_pY Y_p\ntextsuch that lVert Y rVert_p Δ\nendalign*","category":"page"},{"location":"solvers/truncated_conjugate_gradient_descent/","page":"Steihaug-Toint TCG Method","title":"Steihaug-Toint TCG Method","text":"on the tangent space T_pmathcal M of a Riemannian manifold mathcal M by using the Steihaug-Toint truncated conjugate-gradient (tCG) method, see [ABG06], Algorithm 2, and [CGT00]. 
Here mathcal H_p is either the Hessian operatornameHess f(p) or a linear symmetric operator on the tangent space approximating the Hessian.","category":"page"},{"location":"solvers/truncated_conjugate_gradient_descent/#Interface","page":"Steihaug-Toint TCG Method","title":"Interface","text":"","category":"section"},{"location":"solvers/truncated_conjugate_gradient_descent/","page":"Steihaug-Toint TCG Method","title":"Steihaug-Toint TCG Method","text":" truncated_conjugate_gradient_descent\n truncated_conjugate_gradient_descent!","category":"page"},{"location":"solvers/truncated_conjugate_gradient_descent/#Manopt.truncated_conjugate_gradient_descent","page":"Steihaug-Toint TCG Method","title":"Manopt.truncated_conjugate_gradient_descent","text":"truncated_conjugate_gradient_descent(M, f, grad_f, Hess_f, p=rand(M), X=rand(M); vector_at=p);\n kwargs...\n)\ntruncated_conjugate_gradient_descent(M, mho::ManifoldHessianObjective, p=rand(M), X=rand(M; vector_at=p);\n kwargs...\n)\ntruncated_conjugate_gradient_descent(M, trmo::TrustRegionModelObjective, p=rand(M), X=rand(M; vector_at=p);\n kwargs...\n)\n\nsolve the trust-region subproblem\n\nbeginalign*\noperatorname*argmin_Y T_pmathcalM m_p(Y) = f(p) +\noperatornamegradf(p) Y_p + frac12 mathcalH_pY Y_p\ntextsuch that lVert Y rVert_p Δ\nendalign*\n\non a manifold mathcal M by using the Steihaug-Toint truncated conjugate-gradient (tCG) method. 
This can be done inplace of X.\n\nFor a description of the algorithm and theorems offering convergence guarantees, see [ABG06, CGT00].\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\ngrad_f: the (Riemannian) gradient operatornamegradf: \\mathcal M → T_{p}\\mathcal M of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place\nHess_f: the (Riemannian) Hessian operatornameHessf: T{p}\\mathcal M → T{p}\\mathcal M of f as a function (M, p, X) -> Y or a function (M, Y, p, X) -> Y computing Y in-place\np: a point on the manifold mathcal M\nX: a tangent vector at the point p on the manifold mathcal M\n\nInstead of the three functions, you either provide a ManifoldHessianObjective mho which is then used to build the trust region model, or a TrustRegionModelObjective trmo directly.\n\nKeyword arguments\n\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\npreconditioner: a preconditioner for the Hessian H. This is either an allocating function (M, p, X) -> Y or an in-place function (M, Y, p, X) -> Y, see evaluation, and by default set to the identity.\nθ=1.0: the superlinear convergence target rate of 1+θ\nκ=0.1: the linear convergence target rate.\nproject!=copyto!: for numerical stability it is possible to project onto the tangent space after every iteration. the function has to work inplace of Y, that is (M, Y, p, X) -> Y, where X and Y can be the same memory.\nrandomize=false: indicate whether X is initialised to a random vector or not. 
This disables preconditioning.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstopping_criterion=StopAfterIteration(manifold_dimension(base_manifold(Tpm)))|StopWhenResidualIsReducedByFactorOrPower(; κ=κ, θ=θ)|StopWhenTrustRegionIsExceeded()|StopWhenCurvatureIsNegative()|StopWhenModelIncreased(): a functor indicating that the stopping criterion is fulfilled\ntrust_region_radius=injectivity_radius(M) / 4: the initial trust-region radius\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\nSee also\n\ntrust_regions\n\n\n\n\n\n","category":"function"},{"location":"solvers/truncated_conjugate_gradient_descent/#Manopt.truncated_conjugate_gradient_descent!","page":"Steihaug-Toint TCG Method","title":"Manopt.truncated_conjugate_gradient_descent!","text":"truncated_conjugate_gradient_descent(M, f, grad_f, Hess_f, p=rand(M), X=rand(M); vector_at=p);\n kwargs...\n)\ntruncated_conjugate_gradient_descent(M, mho::ManifoldHessianObjective, p=rand(M), X=rand(M; vector_at=p);\n kwargs...\n)\ntruncated_conjugate_gradient_descent(M, trmo::TrustRegionModelObjective, p=rand(M), X=rand(M; vector_at=p);\n kwargs...\n)\n\nsolve the trust-region subproblem\n\nbeginalign*\noperatorname*argmin_Y T_pmathcalM m_p(Y) = f(p) +\noperatornamegradf(p) Y_p + frac12 mathcalH_pY Y_p\ntextsuch that lVert Y rVert_p Δ\nendalign*\n\non a manifold mathcal M by using the Steihaug-Toint truncated conjugate-gradient (tCG) method. 
This can be done inplace of X.\n\nFor a description of the algorithm and theorems offering convergence guarantees, see [ABG06, CGT00].\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\ngrad_f: the (Riemannian) gradient operatornamegradf: \\mathcal M → T_{p}\\mathcal M of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place\nHess_f: the (Riemannian) Hessian operatornameHessf: T{p}\\mathcal M → T{p}\\mathcal M of f as a function (M, p, X) -> Y or a function (M, Y, p, X) -> Y computing Y in-place\np: a point on the manifold mathcal M\nX: a tangent vector at the point p on the manifold mathcal M\n\nInstead of the three functions, you either provide a ManifoldHessianObjective mho which is then used to build the trust region model, or a TrustRegionModelObjective trmo directly.\n\nKeyword arguments\n\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\npreconditioner: a preconditioner for the Hessian H. This is either an allocating function (M, p, X) -> Y or an in-place function (M, Y, p, X) -> Y, see evaluation, and by default set to the identity.\nθ=1.0: the superlinear convergence target rate of 1+θ\nκ=0.1: the linear convergence target rate.\nproject!=copyto!: for numerical stability it is possible to project onto the tangent space after every iteration. the function has to work inplace of Y, that is (M, Y, p, X) -> Y, where X and Y can be the same memory.\nrandomize=false: indicate whether X is initialised to a random vector or not. 
This disables preconditioning.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstopping_criterion=StopAfterIteration(manifold_dimension(base_manifold(Tpm)))|StopWhenResidualIsReducedByFactorOrPower(; κ=κ, θ=θ)|StopWhenTrustRegionIsExceeded()|StopWhenCurvatureIsNegative()|StopWhenModelIncreased(): a functor indicating that the stopping criterion is fulfilled\ntrust_region_radius=injectivity_radius(M) / 4: the initial trust-region radius\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\nSee also\n\ntrust_regions\n\n\n\n\n\n","category":"function"},{"location":"solvers/truncated_conjugate_gradient_descent/#State","page":"Steihaug-Toint TCG Method","title":"State","text":"","category":"section"},{"location":"solvers/truncated_conjugate_gradient_descent/","page":"Steihaug-Toint TCG Method","title":"Steihaug-Toint TCG Method","text":"TruncatedConjugateGradientState","category":"page"},{"location":"solvers/truncated_conjugate_gradient_descent/#Manopt.TruncatedConjugateGradientState","page":"Steihaug-Toint TCG Method","title":"Manopt.TruncatedConjugateGradientState","text":"TruncatedConjugateGradientState <: AbstractHessianSolverState\n\ndescribe the Steihaug-Toint truncated conjugate-gradient method, with\n\nFields\n\nLet T denote the type of a tangent vector and R <: Real.\n\nδ::T: the conjugate gradient search direction\nδHδ, YPδ, δPδ, YPδ: temporary inner products with Hδ and preconditioned inner products.\nHδ, HY: temporary results of the Hessian applied to δ and Y, respectively.\nκ::R: the linear convergence target rate.\nproject!: for numerical stability it is possible to project onto the tangent space after every 
iteration. the function has to work inplace of Y, that is (M, Y, p, X) -> Y, where X and Y can be the same memory.\nrandomize: indicate whether X is initialised to a random vector or not\nresidual::T: the gradient of the model m(Y)\nstop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled\nθ::R: the superlinear convergence target rate of 1+θ\ntrust_region_radius::R: the trust-region radius\nX::T: the gradient operatornamegradf(p)\nY::T: current iterate tangent vector\nz::T: the preconditioned residual\nz_r::R: inner product of the residual and z\n\nConstructor\n\nTruncatedConjugateGradientState(TpM::TangentSpace, Y=rand(TpM); kwargs...)\n\nInitialise the TCG state.\n\nInput\n\nTpM: a TangentSpace\n\nKeyword arguments\n\nκ=0.1\nproject!::F=copyto!: initialise the numerical stabilisation to just copy the result\nrandomize=false\nθ=1.0\ntrust_region_radius=injectivity_radius(base_manifold(TpM)) / 4\nstopping_criterion=StopAfterIteration(manifold_dimension(base_manifold(Tpm)))|StopWhenResidualIsReducedByFactorOrPower(; κ=κ, θ=θ)|StopWhenTrustRegionIsExceeded()|StopWhenCurvatureIsNegative()|StopWhenModelIncreased(): a functor indicating that the stopping criterion is fulfilled\nX=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal M\n\nSee also\n\ntruncated_conjugate_gradient_descent, trust_regions\n\n\n\n\n\n","category":"type"},{"location":"solvers/truncated_conjugate_gradient_descent/#Stopping-criteria","page":"Steihaug-Toint TCG Method","title":"Stopping criteria","text":"","category":"section"},{"location":"solvers/truncated_conjugate_gradient_descent/","page":"Steihaug-Toint TCG Method","title":"Steihaug-Toint TCG Method","text":"StopWhenResidualIsReducedByFactorOrPower\nStopWhenTrustRegionIsExceeded\nStopWhenCurvatureIsNegative\nStopWhenModelIncreased\nManopt.set_parameter!(::StopWhenResidualIsReducedByFactorOrPower, ::Val{:ResidualPower}, 
::Any)\nManopt.set_parameter!(::StopWhenResidualIsReducedByFactorOrPower, ::Val{:ResidualFactor}, ::Any)","category":"page"},{"location":"solvers/truncated_conjugate_gradient_descent/#Manopt.StopWhenResidualIsReducedByFactorOrPower","page":"Steihaug-Toint TCG Method","title":"Manopt.StopWhenResidualIsReducedByFactorOrPower","text":"StopWhenResidualIsReducedByFactorOrPower <: StoppingCriterion\n\nA functor for testing if the norm of the residual at the current iterate is reduced either by a power of 1+θ or by a factor κ compared to the norm of the initial residual. The criterion hence reads\n\nlVert r_k rVert_p^(0) ≤ lVert r_0 rVert_p^(0) min bigl( κ, lVert r_0 rVert_p^(0)^θ bigr).\n\nFields\n\nκ: the reduction factor\nθ: part of the reduction power\nat_iteration::Int: an integer indicating at which iteration the stopping criterion last indicated to stop, which might also be before the solver started (0). Any negative value indicates that this was not yet the case;\n\nConstructor\n\nStopWhenResidualIsReducedByFactorOrPower(; κ=0.1, θ=1.0)\n\nInitialize the StopWhenResidualIsReducedByFactorOrPower functor to indicate to stop after the norm of the current residual is less than either the norm of the initial residual to the power of 1+θ or the norm of the initial residual times κ.\n\nSee also\n\ntruncated_conjugate_gradient_descent, trust_regions\n\n\n\n\n\n","category":"type"},{"location":"solvers/truncated_conjugate_gradient_descent/#Manopt.StopWhenTrustRegionIsExceeded","page":"Steihaug-Toint TCG Method","title":"Manopt.StopWhenTrustRegionIsExceeded","text":"StopWhenTrustRegionIsExceeded <: StoppingCriterion\n\nA functor for testing if the norm of the next iterate in the Steihaug-Toint truncated conjugate gradient method, lVert Y^(k)^* rVert_p^(k), is larger than the trust-region radius and to end the algorithm when the trust region has been left.\n\nFields\n\nat_iteration::Int: an integer indicating at which iteration the stopping criterion last indicated to stop, which might also be before the 
solver started (0). Any negative value indicates that this was not yet the case;\ntrr: the trust-region radius\nYPY: the computed norm of Y.\n\nConstructor\n\nStopWhenTrustRegionIsExceeded()\n\ninitialize the StopWhenTrustRegionIsExceeded functor to indicate to stop after the norm of the next iterate is greater than the trust-region radius.\n\nSee also\n\ntruncated_conjugate_gradient_descent, trust_regions\n\n\n\n\n\n","category":"type"},{"location":"solvers/truncated_conjugate_gradient_descent/#Manopt.StopWhenCurvatureIsNegative","page":"Steihaug-Toint TCG Method","title":"Manopt.StopWhenCurvatureIsNegative","text":"StopWhenCurvatureIsNegative <: StoppingCriterion\n\nA functor for testing if the curvature of the model is negative, δ_k operatornameHess F(p)δ_k_p ≤ 0. In this case, the model is not strictly convex, and the stepsize as computed does not yield a reduction of the model.\n\nFields\n\nat_iteration::Int: an integer indicating at which iteration the stopping criterion last indicated to stop, which might also be before the solver started (0). Any negative value indicates that this was not yet the case;\nvalue: stores the value of the inner product.\nreason: stores a reason of stopping if the stopping criterion has been reached, see get_reason.\n\nConstructor\n\nStopWhenCurvatureIsNegative()\n\nSee also\n\ntruncated_conjugate_gradient_descent, trust_regions\n\n\n\n\n\n","category":"type"},{"location":"solvers/truncated_conjugate_gradient_descent/#Manopt.StopWhenModelIncreased","page":"Steihaug-Toint TCG Method","title":"Manopt.StopWhenModelIncreased","text":"StopWhenModelIncreased <: StoppingCriterion\n\nA functor for testing if the model value increased.\n\nFields\n\nat_iteration::Int: an integer indicating at which iteration the stopping criterion last indicated to stop, which might also be before the solver started (0). 
Any negative value indicates that this was not yet the case;\nmodel_value: stores the last model value\ninc_model_value: stores the model value that increased\n\nConstructor\n\nStopWhenModelIncreased()\n\nSee also\n\ntruncated_conjugate_gradient_descent, trust_regions\n\n\n\n\n\n","category":"type"},{"location":"solvers/truncated_conjugate_gradient_descent/#Manopt.set_parameter!-Tuple{StopWhenResidualIsReducedByFactorOrPower, Val{:ResidualPower}, Any}","page":"Steihaug-Toint TCG Method","title":"Manopt.set_parameter!","text":"set_parameter!(c::StopWhenResidualIsReducedByFactorOrPower, :ResidualPower, v)\n\nUpdate the residual power θ to v.\n\n\n\n\n\n","category":"method"},{"location":"solvers/truncated_conjugate_gradient_descent/#Manopt.set_parameter!-Tuple{StopWhenResidualIsReducedByFactorOrPower, Val{:ResidualFactor}, Any}","page":"Steihaug-Toint TCG Method","title":"Manopt.set_parameter!","text":"set_parameter!(c::StopWhenResidualIsReducedByFactorOrPower, :ResidualFactor, v)\n\nUpdate the residual factor κ to v.\n\n\n\n\n\n","category":"method"},{"location":"solvers/truncated_conjugate_gradient_descent/#Trust-region-model","page":"Steihaug-Toint TCG Method","title":"Trust region model","text":"","category":"section"},{"location":"solvers/truncated_conjugate_gradient_descent/","page":"Steihaug-Toint TCG Method","title":"Steihaug-Toint TCG Method","text":"TrustRegionModelObjective","category":"page"},{"location":"solvers/truncated_conjugate_gradient_descent/#Manopt.TrustRegionModelObjective","page":"Steihaug-Toint TCG Method","title":"Manopt.TrustRegionModelObjective","text":"TrustRegionModelObjective{O<:AbstractManifoldHessianObjective} <: AbstractManifoldSubObjective{O}\n\nA trust region model of the form\n\n m(X) = f(p) + operatornamegrad f(p) X_p + frac12 operatornameHess f(p)X X_p\n\nFields\n\nobjective: an AbstractManifoldHessianObjective providing f, its gradient and Hessian\n\nConstructors\n\nTrustRegionModelObjective(objective)\n\nwith either an 
AbstractManifoldHessianObjective objective or a decorator containing such an objective\n\n\n\n\n\n","category":"type"},{"location":"solvers/truncated_conjugate_gradient_descent/#sec-tr-technical-details","page":"Steihaug-Toint TCG Method","title":"Technical details","text":"","category":"section"},{"location":"solvers/truncated_conjugate_gradient_descent/","page":"Steihaug-Toint TCG Method","title":"Steihaug-Toint TCG Method","text":"The trust_regions solver requires the following functions of a manifold to be available","category":"page"},{"location":"solvers/truncated_conjugate_gradient_descent/","page":"Steihaug-Toint TCG Method","title":"Steihaug-Toint TCG Method","text":"if you do not provide a trust_region_radius=, then injectivity_radius on the manifold M is required.\nthe norm as well, to stop when the norm of the gradient is small; but if you implemented inner, the norm is provided already.\nA zero_vector!(M, X, p).\nA copyto!(M, q, p) and copy(M, p) for points.","category":"page"},{"location":"solvers/truncated_conjugate_gradient_descent/#Literature","page":"Steihaug-Toint TCG Method","title":"Literature","text":"","category":"section"},{"location":"solvers/truncated_conjugate_gradient_descent/","page":"Steihaug-Toint TCG Method","title":"Steihaug-Toint TCG Method","text":"P.-A. Absil, C. Baker and K. Gallivan. Trust-Region Methods on Riemannian Manifolds. Foundations of Computational Mathematics 7, 303–330 (2006).\n\n\n\nA. R. Conn, N. I. Gould and P. L. Toint. 
Trust Region Methods (Society for Industrial and Applied Mathematics, 2000).\n\n\n\n","category":"page"},{"location":"solvers/LevenbergMarquardt/#Levenberg-Marquardt","page":"Levenberg–Marquardt","title":"Levenberg-Marquardt","text":"","category":"section"},{"location":"solvers/LevenbergMarquardt/","page":"Levenberg–Marquardt","title":"Levenberg–Marquardt","text":"CurrentModule = Manopt","category":"page"},{"location":"solvers/LevenbergMarquardt/","page":"Levenberg–Marquardt","title":"Levenberg–Marquardt","text":"LevenbergMarquardt\nLevenbergMarquardt!","category":"page"},{"location":"solvers/LevenbergMarquardt/#Manopt.LevenbergMarquardt","page":"Levenberg–Marquardt","title":"Manopt.LevenbergMarquardt","text":"LevenbergMarquardt(M, f, jacobian_f, p, num_components=-1)\nLevenbergMarquardt!(M, f, jacobian_f, p, num_components=-1; kwargs...)\n\nSolve an optimization problem of the form\n\noperatorname*argmin_p mathcal M frac12 lVert f(p) rVert^2\n\nwhere f mathcal M ℝ^d is a continuously differentiable function, using the Riemannian Levenberg-Marquardt algorithm [Pee93]. The implementation follows Algorithm 1 [AOT22]. The second signature performs the optimization in-place of p.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal Mℝ^d\njacobian_f: the Jacobian of f. The Jacobian is supposed to accept a keyword argument basis_domain which specifies basis of the tangent space at a given point in which the Jacobian is to be calculated. By default it should be the DefaultOrthonormalBasis.\np: a point on the manifold mathcal M\nnum_components: length of the vector returned by the cost function (d). By default its value is -1 which means that it is determined automatically by calling f one additional time. 
This is only possible when evaluation is AllocatingEvaluation, for mutating evaluation this value must be explicitly specified.\n\nThese can also be passed as a NonlinearLeastSquaresObjective, then the keyword jacobian_tangent_basis below is ignored\n\nKeyword arguments\n\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\nη=0.2: scaling factor for the sufficient cost decrease threshold required to accept new proposal points. Allowed range: 0 < η < 1.\nexpect_zero_residual=false: whether or not the algorithm might expect that the value of residual (objective) at minimum is equal to 0.\ndamping_term_min=0.1: initial (and also minimal) value of the damping term\nβ=5.0: parameter by which the damping term is multiplied when the current new point is rejected\ninitial_jacobian_f: the initial Jacobian of the cost function f. By default this is a matrix of size num_components times the manifold dimension of similar type as p.\ninitial_residual_values: the initial residual vector of the cost function f. By default this is a vector of length num_components of similar type as p.\njacobian_tangent_basis: an AbstractBasis specify the basis of the tangent space for jacobian_f.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstopping_criterion=StopAfterIteration(200)|StopWhenGradientNormLess(1e-12): a functor indicating that the stopping criterion is fulfilled\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. 
To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/LevenbergMarquardt/#Manopt.LevenbergMarquardt!","page":"Levenberg–Marquardt","title":"Manopt.LevenbergMarquardt!","text":"LevenbergMarquardt(M, f, jacobian_f, p, num_components=-1)\nLevenbergMarquardt!(M, f, jacobian_f, p, num_components=-1; kwargs...)\n\nSolve an optimization problem of the form\n\noperatorname*argmin_p mathcal M frac12 lVert f(p) rVert^2\n\nwhere f mathcal M ℝ^d is a continuously differentiable function, using the Riemannian Levenberg-Marquardt algorithm [Pee93]. The implementation follows Algorithm 1 [AOT22]. The second signature performs the optimization in-place of p.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal Mℝ^d\njacobian_f: the Jacobian of f. The Jacobian is supposed to accept a keyword argument basis_domain which specifies basis of the tangent space at a given point in which the Jacobian is to be calculated. By default it should be the DefaultOrthonormalBasis.\np: a point on the manifold mathcal M\nnum_components: length of the vector returned by the cost function (d). By default its value is -1 which means that it is determined automatically by calling f one additional time. This is only possible when evaluation is AllocatingEvaluation, for mutating evaluation this value must be explicitly specified.\n\nThese can also be passed as a NonlinearLeastSquaresObjective, then the keyword jacobian_tangent_basis below is ignored\n\nKeyword arguments\n\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). 
Since usually the first argument is the manifold, the modified argument is the second.\nη=0.2: scaling factor for the sufficient cost decrease threshold required to accept new proposal points. Allowed range: 0 < η < 1.\nexpect_zero_residual=false: whether or not the algorithm might expect that the value of residual (objective) at minimum is equal to 0.\ndamping_term_min=0.1: initial (and also minimal) value of the damping term\nβ=5.0: parameter by which the damping term is multiplied when the current new point is rejected\ninitial_jacobian_f: the initial Jacobian of the cost function f. By default this is a matrix of size num_components times the manifold dimension of similar type as p.\ninitial_residual_values: the initial residual vector of the cost function f. By default this is a vector of length num_components of similar type as p.\njacobian_tangent_basis: an AbstractBasis specify the basis of the tangent space for jacobian_f.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstopping_criterion=StopAfterIteration(200)|StopWhenGradientNormLess(1e-12): a functor indicating that the stopping criterion is fulfilled\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. 
To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/LevenbergMarquardt/#Options","page":"Levenberg–Marquardt","title":"Options","text":"","category":"section"},{"location":"solvers/LevenbergMarquardt/","page":"Levenberg–Marquardt","title":"Levenberg–Marquardt","text":"LevenbergMarquardtState","category":"page"},{"location":"solvers/LevenbergMarquardt/#Manopt.LevenbergMarquardtState","page":"Levenberg–Marquardt","title":"Manopt.LevenbergMarquardtState","text":"LevenbergMarquardtState{P,T} <: AbstractGradientSolverState\n\nDescribes a Gradient based descent algorithm, with\n\nFields\n\nA default value is given in brackets if a parameter can be left out in initialization.\n\np::P: a point on the manifold mathcal Mstoring the current iterate\nretraction_method::AbstractRetractionMethod: a retraction operatornameretr to use, see the section on retractions\nresidual_values: value of F calculated in the solver setup or the previous iteration\nresidual_values_temp: value of F for the current proposal point\nstop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled\njacF: the current Jacobian of F\ngradient: the current gradient of F\nstep_vector: the tangent vector at x that is used to move to the next point\nlast_stepsize: length of step_vector\nη: Scaling factor for the sufficient cost decrease threshold required to accept new proposal points. 
Allowed range: 0 < η < 1.\ndamping_term: current value of the damping term\ndamping_term_min: initial (and also minimal) value of the damping term\nβ: parameter by which the damping term is multiplied when the current new point is rejected\nexpect_zero_residual: if true, the algorithm expects that the value of the residual (objective) at minimum is equal to 0.\n\nConstructor\n\nLevenbergMarquardtState(M, initial_residual_values, initial_jacF; kwargs...)\n\nGenerate the Levenberg-Marquardt solver state.\n\nKeyword arguments\n\nThe following fields are keyword arguments\n\nβ=5.0\ndamping_term_min=0.1\nη=0.2,\nexpect_zero_residual=false\ninitial_gradient=zero_vector(M, p)\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstopping_criterion=StopAfterIteration(200)|StopWhenGradientNormLess(1e-12)|StopWhenStepsizeLess(1e-12): a functor indicating that the stopping criterion is fulfilled\n\nSee also\n\ngradient_descent, LevenbergMarquardt\n\n\n\n\n\n","category":"type"},{"location":"solvers/LevenbergMarquardt/#sec-lm-technical-details","page":"Levenberg–Marquardt","title":"Technical details","text":"","category":"section"},{"location":"solvers/LevenbergMarquardt/","page":"Levenberg–Marquardt","title":"Levenberg–Marquardt","text":"The LevenbergMarquardt solver requires the following functions of a manifold to be available","category":"page"},{"location":"solvers/LevenbergMarquardt/","page":"Levenberg–Marquardt","title":"Levenberg–Marquardt","text":"A retract!(M, q, p, X); it is recommended to set the default_retraction_method to a favourite retraction. 
If this default is set, a retraction_method= does not have to be specified.\nthe norm as well, to stop when the norm of the gradient is small, but if you implemented inner, the norm is provided already.\nA copyto!(M, q, p) and copy(M, p) for points.","category":"page"},{"location":"solvers/LevenbergMarquardt/#Literature","page":"Levenberg–Marquardt","title":"Literature","text":"","category":"section"},{"location":"solvers/LevenbergMarquardt/","page":"Levenberg–Marquardt","title":"Levenberg–Marquardt","text":"S. Adachi, T. Okuno and A. Takeda. Riemannian Levenberg-Marquardt Method with Global and Local Convergence Properties. ArXiv Preprint (2022).\n\n\n\nR. Peeters. On a Riemannian version of the Levenberg-Marquardt algorithm. Serie Research Memoranda 0011 (VU University Amsterdam, Faculty of Economics, Business Administration and Econometrics, 1993).\n\n\n\n","category":"page"},{"location":"solvers/exact_penalty_method/#Exact-penalty-method","page":"Exact Penalty Method","title":"Exact penalty method","text":"","category":"section"},{"location":"solvers/exact_penalty_method/","page":"Exact Penalty Method","title":"Exact Penalty Method","text":"CurrentModule = Manopt","category":"page"},{"location":"solvers/exact_penalty_method/","page":"Exact Penalty Method","title":"Exact Penalty Method","text":" exact_penalty_method\n exact_penalty_method!","category":"page"},{"location":"solvers/exact_penalty_method/#Manopt.exact_penalty_method","page":"Exact Penalty Method","title":"Manopt.exact_penalty_method","text":"exact_penalty_method(M, f, grad_f, p=rand(M); kwargs...)\nexact_penalty_method(M, cmo::ConstrainedManifoldObjective, p=rand(M); kwargs...)\nexact_penalty_method!(M, f, grad_f, p; kwargs...)\nexact_penalty_method!(M, cmo::ConstrainedManifoldObjective, p; kwargs...)\n\nperform the exact penalty method (EPM) [LB19]. The aim of the EPM is to find a solution of the constrained optimisation task\n\nbeginaligned\nmin_p mathcal M f(p)\ntextsubject toquadg_i(p) 0 quad 
text for i= 1 m\nquad h_j(p)=0 quad text for j=1n\nendaligned\n\nwhere M is a Riemannian manifold, and f, g_i_i=1^m and h_j_j=1^n are twice continuously differentiable functions from M to ℝ. For that a weighted L_1-penalty term for the violation of the constraints is added to the objective\n\nf(x) + ρbiggl( sum_i=1^m maxbigl0 g_i(x)bigr + sum_j=1^n vert h_j(x)vertbiggr)\n\nwhere ρ0 is the penalty parameter.\n\nSince this is non-smooth, a SmoothingTechnique with parameter u is applied, see the ExactPenaltyCost.\n\nIn every step k of the exact penalty method, the smoothed objective is then minimized over all p mathcal M. Then, the accuracy tolerance ϵ and the smoothing parameter u are updated by setting\n\nϵ^(k)=maxϵ_min θ_ϵ ϵ^(k-1)\n\nwhere ϵ_min is the lowest value ϵ is allowed to become and θ_ϵ (01) is a constant scaling factor, and\n\nu^(k) = max u_min theta_u u^(k-1) \n\nwhere u_min is the lowest value u is allowed to become and θ_u (01) is a constant scaling factor.\n\nFinally, the penalty parameter ρ is updated as\n\nρ^(k) = begincases\nρ^(k-1)θ_ρ textif displaystyle max_j mathcalEi mathcalI Bigl vert h_j(x^(k)) vert g_i(x^(k))Bigr geq u^(k-1) \nρ^(k-1) textelse\nendcases\n\nwhere θ_ρ (01) is a constant scaling factor.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\ngrad_f: the (Riemannian) gradient operatornamegradf: \\mathcal M → T_{p}\\mathcal M of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place\np: a point on the manifold mathcal M\n\nKeyword arguments\n\nif not called with the ConstrainedManifoldObjective cmo\n\ng=nothing: the inequality constraints\nh=nothing: the equality constraints\ngrad_g=nothing: the gradient of the inequality constraints\ngrad_h=nothing: the gradient of the equality constraints\n\nNote that one of the pairs (g, grad_g) or (h, grad_h) has to be provided. 
Otherwise the problem is not constrained and a better solver would be for example quasi_Newton.\n\nFurther keyword arguments\n\nϵ=1e-3: the accuracy tolerance\nϵ_exponent=1/100: exponent of the ϵ update factor;\nϵ_min=1e-6: the lower bound for the accuracy tolerance\nu=1e-1: the smoothing parameter and threshold for violation of the constraints\nu_exponent=1/100: exponent of the u update factor;\nu_min=1e-6: the lower bound for the smoothing parameter and threshold for violation of the constraints\nρ=1.0: the penalty parameter\nequality_constraints=nothing: the number n of equality constraints. If not provided, a call to the gradient of h is performed to estimate these.\ngradient_range=nothing: specify how both gradients of the constraints are represented\ngradient_equality_range=gradient_range: specify how gradients of the equality constraints are represented, see VectorGradientFunction.\ngradient_inequality_range=gradient_range: specify how gradients of the inequality constraints are represented, see VectorGradientFunction.\ninequality_constraints=nothing: the number m of inequality constraints. If not provided, a call to the gradient of g is performed to estimate these.\nmin_stepsize=1e-10: the minimal step size\nsmoothing=LogarithmicSumOfExponentials: a SmoothingTechnique to use\nsub_cost=ExactPenaltyCost(problem, ρ, u; smoothing=smoothing): cost to use in the sub solver. This is used to define the sub_problem= keyword and hence has no effect if you set sub_problem directly.\nsub_grad=ExactPenaltyGrad(problem, ρ, u; smoothing=smoothing): gradient to use in the sub solver. This is used to define the sub_problem= keyword and hence has no effect if you set sub_problem directly.\nsub_kwargs=(;): a named tuple of keyword arguments that are passed to decorate_objective! of the sub solver's objective, the decorate_state! 
of the subsolver's state, and the sub state constructor itself.\nsub_stopping_criterion=StopAfterIteration(200)|StopWhenGradientNormLess(ϵ)|StopWhenStepsizeLess(1e-10): a stopping criterion for the sub solver. This is used to define the sub_state= keyword and hence has no effect if you set sub_state directly.\nsub_problem=DefaultManoptProblem(M, ManifoldGradientObjective(sub_cost, sub_grad; evaluation=evaluation)): a problem to specify the sub solver to use. For a closed form solution, this indicates the type of function.\nsub_state=QuasiNewtonState: a state to specify the sub solver to use, where a QuasiNewtonLimitedMemoryDirectionUpdate with InverseBFGS is used\nstopping_criterion=StopAfterIteration(300)|(StopWhenSmallerOrEqual(ϵ, ϵ_min)&StopWhenChangeLess(1e-10) ): a functor indicating that the stopping criterion is fulfilled\n\nFor the ranges of the constraints' gradient, other power manifold tangent space representations, mainly the ArrayPowerRepresentation can be used if the gradients can be computed more efficiently in that representation.\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. 
To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/exact_penalty_method/#Manopt.exact_penalty_method!","page":"Exact Penalty Method","title":"Manopt.exact_penalty_method!","text":"exact_penalty_method(M, f, grad_f, p=rand(M); kwargs...)\nexact_penalty_method(M, cmo::ConstrainedManifoldObjective, p=rand(M); kwargs...)\nexact_penalty_method!(M, f, grad_f, p; kwargs...)\nexact_penalty_method!(M, cmo::ConstrainedManifoldObjective, p; kwargs...)\n\nperform the exact penalty method (EPM) [LB19]. The aim of the EPM is to find a solution of the constrained optimisation task\n\nbeginaligned\nmin_p mathcal M f(p)\ntextsubject toquadg_i(p) 0 quad text for i= 1 m\nquad h_j(p)=0 quad text for j=1n\nendaligned\n\nwhere M is a Riemannian manifold, and f, g_i_i=1^m and h_j_j=1^n are twice continuously differentiable functions from M to ℝ. For that a weighted L_1-penalty term for the violation of the constraints is added to the objective\n\nf(x) + ρbiggl( sum_i=1^m maxbigl0 g_i(x)bigr + sum_j=1^n vert h_j(x)vertbiggr)\n\nwhere ρ0 is the penalty parameter.\n\nSince this is non-smooth, a SmoothingTechnique with parameter u is applied, see the ExactPenaltyCost.\n\nIn every step k of the exact penalty method, the smoothed objective is then minimized over all p mathcal M. 
Then, the accuracy tolerance ϵ and the smoothing parameter u are updated by setting\n\nϵ^(k)=maxϵ_min θ_ϵ ϵ^(k-1)\n\nwhere ϵ_min is the lowest value ϵ is allowed to become and θ_ϵ (01) is a constant scaling factor, and\n\nu^(k) = max u_min theta_u u^(k-1) \n\nwhere u_min is the lowest value u is allowed to become and θ_u (01) is a constant scaling factor.\n\nFinally, the penalty parameter ρ is updated as\n\nρ^(k) = begincases\nρ^(k-1)θ_ρ textif displaystyle max_j mathcalEi mathcalI Bigl vert h_j(x^(k)) vert g_i(x^(k))Bigr geq u^(k-1) \nρ^(k-1) textelse\nendcases\n\nwhere θ_ρ (01) is a constant scaling factor.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\ngrad_f: the (Riemannian) gradient operatornamegradf: \\mathcal M → T_{p}\\mathcal M of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place\np: a point on the manifold mathcal M\n\nKeyword arguments\n\nif not called with the ConstrainedManifoldObjective cmo\n\ng=nothing: the inequality constraints\nh=nothing: the equality constraints\ngrad_g=nothing: the gradient of the inequality constraints\ngrad_h=nothing: the gradient of the equality constraints\n\nNote that one of the pairs (g, grad_g) or (h, grad_h) has to be provided. Otherwise the problem is not constrained and a better solver would be for example quasi_Newton.\n\nFurther keyword arguments\n\nϵ=1e-3: the accuracy tolerance\nϵ_exponent=1/100: exponent of the ϵ update factor;\nϵ_min=1e-6: the lower bound for the accuracy tolerance\nu=1e-1: the smoothing parameter and threshold for violation of the constraints\nu_exponent=1/100: exponent of the u update factor;\nu_min=1e-6: the lower bound for the smoothing parameter and threshold for violation of the constraints\nρ=1.0: the penalty parameter\nequality_constraints=nothing: the number n of equality constraints. 
If not provided, a call to the gradient of h is performed to estimate these.\ngradient_range=nothing: specify how both gradients of the constraints are represented\ngradient_equality_range=gradient_range: specify how gradients of the equality constraints are represented, see VectorGradientFunction.\ngradient_inequality_range=gradient_range: specify how gradients of the inequality constraints are represented, see VectorGradientFunction.\ninequality_constraints=nothing: the number m of inequality constraints. If not provided, a call to the gradient of g is performed to estimate these.\nmin_stepsize=1e-10: the minimal step size\nsmoothing=LogarithmicSumOfExponentials: a SmoothingTechnique to use\nsub_cost=ExactPenaltyCost(problem, ρ, u; smoothing=smoothing): cost to use in the sub solver. This is used to define the sub_problem= keyword and hence has no effect if you set sub_problem directly.\nsub_grad=ExactPenaltyGrad(problem, ρ, u; smoothing=smoothing): gradient to use in the sub solver. This is used to define the sub_problem= keyword and hence has no effect if you set sub_problem directly.\nsub_kwargs=(;): a named tuple of keyword arguments that are passed to decorate_objective! of the sub solver's objective, the decorate_state! of the subsolver's state, and the sub state constructor itself.\nsub_stopping_criterion=StopAfterIteration(200)|StopWhenGradientNormLess(ϵ)|StopWhenStepsizeLess(1e-10): a stopping criterion for the sub solver. This is used to define the sub_state= keyword and hence has no effect if you set sub_state directly.\nsub_problem=DefaultManoptProblem(M, ManifoldGradientObjective(sub_cost, sub_grad; evaluation=evaluation)): a problem to specify the sub solver to use. For a closed form solution, this indicates the type of function.\nsub_state=QuasiNewtonState: a state to specify the sub solver to use, where a QuasiNewtonLimitedMemoryDirectionUpdate with InverseBFGS is used\nstopping_criterion=StopAfterIteration(300)|(StopWhenSmallerOrEqual(ϵ, ϵ_min)&StopWhenChangeLess(1e-10) ): a functor indicating that the stopping criterion is fulfilled\n\nFor the ranges of the constraints' gradient, other power manifold tangent space representations, mainly the ArrayPowerRepresentation can be used if the gradients can be computed more efficiently in that representation.\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/exact_penalty_method/#State","page":"Exact Penalty Method","title":"State","text":"","category":"section"},{"location":"solvers/exact_penalty_method/","page":"Exact Penalty Method","title":"Exact Penalty Method","text":"ExactPenaltyMethodState","category":"page"},{"location":"solvers/exact_penalty_method/#Manopt.ExactPenaltyMethodState","page":"Exact Penalty Method","title":"Manopt.ExactPenaltyMethodState","text":"ExactPenaltyMethodState{P,T} <: AbstractManoptSolverState\n\nDescribes the exact penalty method, with\n\nFields\n\nϵ: the accuracy tolerance\nϵ_min: the lower bound for the accuracy tolerance\np::P: a point on the manifold mathcal Mstoring the current iterate\nρ: the penalty parameter\nsub_problem::Union{AbstractManoptProblem, F}: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state::Union{AbstractManoptSolverState, F}: a state to specify the sub solver to use. 
For a closed form solution, this indicates the type of function.\nstop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled\nu: the smoothing parameter and threshold for violation of the constraints\nu_min: the lower bound for the smoothing parameter and threshold for violation of the constraints\nθ_ϵ: the scaling factor of the tolerance parameter\nθ_ρ: the scaling factor of the penalty parameter\nθ_u: the scaling factor of the smoothing parameter\n\nConstructor\n\nExactPenaltyMethodState(M::AbstractManifold, sub_problem, sub_state; kwargs...)\n\nconstruct the exact penalty state.\n\nExactPenaltyMethodState(M::AbstractManifold, sub_problem;\n evaluation=AllocatingEvaluation(), kwargs...\n\n)\n\nconstruct the exact penalty state, where sub_problem is a closed form solution with evaluation as type of evaluation.\n\nKeyword arguments\n\nϵ=1e-3\nϵ_min=1e-6\nϵ_exponent=1 / 100: a shortcut for the scaling factor θ_ϵ\nθ_ϵ=(ϵ_min / ϵ)^(ϵ_exponent)\nu=1e-1\nu_min=1e-6\nu_exponent=1 / 100: a shortcut for the scaling factor θ_u.\nθ_u=(u_min / u)^(u_exponent)\np=rand(M): a point on the manifold mathcal Mto specify the initial value\nρ=1.0\nθ_ρ=0.3\nstopping_criterion=StopAfterIteration(300)|(StopWhenSmallerOrEqual(:ϵ, ϵ_min)&StopWhenChangeLess(1e-10)): a functor indicating that the stopping criterion is fulfilled\n\nSee also\n\nexact_penalty_method\n\n\n\n\n\n","category":"type"},{"location":"solvers/exact_penalty_method/#Helping-functions","page":"Exact Penalty Method","title":"Helping functions","text":"","category":"section"},{"location":"solvers/exact_penalty_method/","page":"Exact Penalty Method","title":"Exact Penalty Method","text":"ExactPenaltyCost\nExactPenaltyGrad\nSmoothingTechnique\nLinearQuadraticHuber\nLogarithmicSumOfExponentials","category":"page"},{"location":"solvers/exact_penalty_method/#Manopt.ExactPenaltyCost","page":"Exact Penalty Method","title":"Manopt.ExactPenaltyCost","text":"ExactPenaltyCost{S, Pr, R}\n\nRepresent the 
cost of the exact penalty method based on a ConstrainedManifoldObjective P and a parameter ρ given by\n\nf(p) + ρBigl(\n sum_i=1^m max0g_i(p) + sum_j=1^n lvert h_j(p)rvert\nBigr)\n\nwhere an additional parameter u is used as well as a smoothing technique, for example LogarithmicSumOfExponentials or LinearQuadraticHuber to obtain a smooth cost function. This struct is also a functor (M,p) -> v of the cost v.\n\nFields\n\nρ, u: as described in the mathematical formula above.\nco: the original cost\n\nConstructor\n\nExactPenaltyCost(co::ConstrainedManifoldObjective, ρ, u; smoothing=LinearQuadraticHuber())\n\n\n\n\n\n","category":"type"},{"location":"solvers/exact_penalty_method/#Manopt.ExactPenaltyGrad","page":"Exact Penalty Method","title":"Manopt.ExactPenaltyGrad","text":"ExactPenaltyGrad{S, CO, R}\n\nRepresent the gradient of the ExactPenaltyCost based on a ConstrainedManifoldObjective co and a parameter ρ and a smoothing technique, which uses an additional parameter u.\n\nThis struct is also a functor in both formats\n\n(M, p) -> X to compute the gradient in allocating fashion.\n(M, X, p) to compute the gradient in in-place fashion.\n\nFields\n\nρ, u as stated before\nco the nonsmooth objective\n\nConstructor\n\nExactPenaltyGrad(co::ConstrainedManifoldObjective, ρ, u; smoothing=LinearQuadraticHuber())\n\n\n\n\n\n","category":"type"},{"location":"solvers/exact_penalty_method/#Manopt.SmoothingTechnique","page":"Exact Penalty Method","title":"Manopt.SmoothingTechnique","text":"abstract type SmoothingTechnique\n\nSpecify a smoothing technique, see for example ExactPenaltyCost and ExactPenaltyGrad.\n\n\n\n\n\n","category":"type"},{"location":"solvers/exact_penalty_method/#Manopt.LinearQuadraticHuber","page":"Exact Penalty Method","title":"Manopt.LinearQuadraticHuber","text":"LinearQuadraticHuber <: SmoothingTechnique\n\nSpecify a smoothing based on max0x mathcal P(xu) for some u, where\n\nmathcal P(x u) = begincases\n 0 text if x leq 0\n fracx^22u text if 0 leq x leq 
u\n x-fracu2 text if x geq u\nendcases\n\n\n\n\n\n","category":"type"},{"location":"solvers/exact_penalty_method/#Manopt.LogarithmicSumOfExponentials","page":"Exact Penalty Method","title":"Manopt.LogarithmicSumOfExponentials","text":"LogarithmicSumOfExponentials <: SmoothingTechnique\n\nSpecify a smoothing based on maxab u log(mathrme^fracau+mathrme^fracbu) for some u.\n\n\n\n\n\n","category":"type"},{"location":"solvers/exact_penalty_method/#sec-dr-technical-details","page":"Exact Penalty Method","title":"Technical details","text":"","category":"section"},{"location":"solvers/exact_penalty_method/","page":"Exact Penalty Method","title":"Exact Penalty Method","text":"The exact_penalty_method solver requires the following functions of a manifold to be available","category":"page"},{"location":"solvers/exact_penalty_method/","page":"Exact Penalty Method","title":"Exact Penalty Method","text":"A copyto!(M, q, p) and copy(M, p) for points.\nEverything the subsolver requires, which by default is the quasi_Newton method\nA zero_vector(M, p).","category":"page"},{"location":"solvers/exact_penalty_method/","page":"Exact Penalty Method","title":"Exact Penalty Method","text":"The stopping criteria involve StopWhenChangeLess and StopWhenGradientNormLess which require","category":"page"},{"location":"solvers/exact_penalty_method/","page":"Exact Penalty Method","title":"Exact Penalty Method","text":"An inverse_retract!(M, X, p, q); it is recommended to set the default_inverse_retraction_method to a favourite retraction. 
If this default is set, an inverse_retraction_method= or inverse_retraction_method_dual= (for mathcal N) does not have to be specified or the distance(M, p, q) for said default inverse retraction.\nthe norm as well, to stop when the norm of the gradient is small, but if you implemented inner, the norm is provided already.","category":"page"},{"location":"solvers/exact_penalty_method/#Literature","page":"Exact Penalty Method","title":"Literature","text":"","category":"section"},{"location":"solvers/exact_penalty_method/","page":"Exact Penalty Method","title":"Exact Penalty Method","text":"C. Liu and N. Boumal. Simple algorithms for optimization on Riemannian manifolds with constraints. Applied Mathematics & Optimization (2019), arXiv:1901.10000.\n\n\n\n","category":"page"},{"location":"plans/#sec-plan","page":"Specify a Solver","title":"Plans for solvers","text":"","category":"section"},{"location":"plans/","page":"Specify a Solver","title":"Specify a Solver","text":"CurrentModule = Manopt","category":"page"},{"location":"plans/","page":"Specify a Solver","title":"Specify a Solver","text":"For any optimisation performed in Manopt.jl information is required about both the optimisation task or “problem” at hand as well as the solver and all its parameters. This together is called a plan in Manopt.jl and it consists of two data structures:","category":"page"},{"location":"plans/","page":"Specify a Solver","title":"Specify a Solver","text":"The Manopt Problem describes all static data of a task, most prominently the manifold and the objective.\nThe Solver State describes all varying data and parameters for the solver that is used. 
This also means that each solver has its own data structure for the state.","category":"page"},{"location":"plans/","page":"Specify a Solver","title":"Specify a Solver","text":"By splitting these two parts, one problem can be defined and then be solved using different solvers.","category":"page"},{"location":"plans/","page":"Specify a Solver","title":"Specify a Solver","text":"Still there might be the need to set certain parameters within any of these structures. For that there is","category":"page"},{"location":"plans/","page":"Specify a Solver","title":"Specify a Solver","text":"set_parameter!\nget_parameter\nManopt.status_summary","category":"page"},{"location":"plans/#Manopt.set_parameter!","page":"Specify a Solver","title":"Manopt.set_parameter!","text":"set_parameter!(f, element::Symbol , args...)\n\nFor any f and a Symbol e, dispatch on its value by default, to set some args... in f or one of its sub elements.\n\n\n\n\n\nset_parameter!(element::Symbol, value::Union{String,Bool,<:Number})\n\nSet global Manopt parameters addressed by a symbol element. This first dispatches on the value of element.\n\nThe parameters are stored to the global settings using Preferences.jl.\n\nPassing a value of \"\" deletes the corresponding entry from the preferences. Whenever the LocalPreferences.toml is modified, this is also issued as an @info.\n\n\n\n\n\nset_parameter!(amo::AbstractManifoldObjective, element::Symbol, args...)\n\nSet a certain args... from the AbstractManifoldObjective amo to value. This function should dispatch on Val(element).\n\nCurrently supported\n\n:Cost passes to the get_cost_function\n:Gradient passes to the get_gradient_function\n\n\n\n\n\nset_parameter!(ams::AbstractManoptProblem, element::Symbol, field::Symbol , value)\n\nSet a certain field/element from the AbstractManoptProblem ams to value. This function usually dispatches on Val(element). 
Instead of a single field, also a chain of elements can be provided, allowing access to encapsulated parts of the problem.\n\nMain values for element are :Manifold and :Objective.\n\n\n\n\n\nset_parameter!(ams::DebugSolverState, ::Val{:Debug}, args...)\n\nSet certain values specified by args... into the elements of the debugDictionary\n\n\n\n\n\nset_parameter!(ams::RecordSolverState, ::Val{:Record}, args...)\n\nSet certain values specified by args... into the elements of the recordDictionary\n\n\n\n\n\nset_parameter!(c::StopAfter, :MaxTime, v::Period)\n\nUpdate the time period after which an algorithm shall stop.\n\n\n\n\n\nset_parameter!(c::StopAfterIteration, :MaxIteration, v::Int)\n\nUpdate the number of iterations after which the algorithm should stop.\n\n\n\n\n\nset_parameter!(c::StopWhenChangeLess, :MinIterateChange, v::Int)\n\nUpdate the minimal change below which an algorithm shall stop.\n\n\n\n\n\nset_parameter!(c::StopWhenCostLess, :MinCost, v)\n\nUpdate the minimal cost below which the algorithm shall stop\n\n\n\n\n\nset_parameter!(c::StopWhenEntryChangeLess, :Threshold, v)\n\nUpdate the threshold below which the algorithm shall stop\n\n\n\n\n\nset_parameter!(c::StopWhenGradientChangeLess, :MinGradientChange, v)\n\nUpdate the minimal change below which an algorithm shall stop.\n\n\n\n\n\nset_parameter!(c::StopWhenGradientNormLess, :MinGradNorm, v::Float64)\n\nUpdate the minimal gradient norm when an algorithm shall stop\n\n\n\n\n\nset_parameter!(c::StopWhenStepsizeLess, :MinStepsize, v)\n\nUpdate the minimal step size below which the algorithm shall stop\n\n\n\n\n\nset_parameter!(c::StopWhenSubgradientNormLess, :MinSubgradNorm, v::Float64)\n\nUpdate the minimal subgradient norm when an algorithm shall stop\n\n\n\n\n\nset_parameter!(ams::AbstractManoptSolverState, element::Symbol, args...)\n\nSet a certain field or semantic element from the AbstractManoptSolverState ams to value. 
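To connect a few of the stopping-criterion setters listed above, here is a short sketch of updating criteria in place; the symbols follow the signatures documented above, and the concrete thresholds are just example values.

```julia
using Manopt

# Update the iteration bound of a StopAfterIteration criterion
sc = StopAfterIteration(100)
set_parameter!(sc, :MaxIteration, 200)   # now stops after 200 iterations

# Tighten the tolerance of a gradient norm criterion
gn = StopWhenGradientNormLess(1e-6)
set_parameter!(gn, :MinGradNorm, 1e-8)
```

This is useful, for example, to relax or tighten a stopping criterion between repeated solver runs without constructing a new criterion object.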
This function passes to Val(element) and specific setters should dispatch on Val{element}.\n\nBy default, this function just does nothing.\n\n\n\n\n\nset_parameter!(ams::DebugSolverState, ::Val{:SubProblem}, args...)\n\nSet certain values specified by args... to the sub problem.\n\n\n\n\n\nset_parameter!(ams::DebugSolverState, ::Val{:SubState}, args...)\n\nSet certain values specified by args... to the sub state.\n\n\n\n\n\nset_parameter!(c::StopWhenResidualIsReducedByFactorOrPower, :ResidualPower, v)\n\nUpdate the residual Power θ to v.\n\n\n\n\n\nset_parameter!(c::StopWhenResidualIsReducedByFactorOrPower, :ResidualFactor, v)\n\nUpdate the residual Factor κ to v.\n\n\n\n\n\n","category":"function"},{"location":"plans/#Manopt.get_parameter","page":"Specify a Solver","title":"Manopt.get_parameter","text":"get_parameter(f, element::Symbol, args...)\n\nAccess arbitrary parameters from f addressed by a symbol element.\n\nFor any f and a Symbol e, dispatch on its value by default to get some element from f potentially further qualified by args....\n\nThis function returns nothing if f does not have the property element.\n\n\n\n\n\nget_parameter(element::Symbol; default=nothing)\n\nAccess global Manopt parameters addressed by a symbol element. This first dispatches on the value of element.\n\nIf the value is not set, default is returned.\n\nThe parameters are queried from the global settings using Preferences.jl, so they are persistent within your activated Environment.\n\nCurrently used settings\n\n:Mode the mode can be set to \"Tutorial\" to get several hints especially in scenarios where the optimisation on manifolds is different from the usual “experience” in (classical, Euclidean) optimization. 
Any other value has the same effect as not setting it.\n\n\n\n\n\n","category":"function"},{"location":"plans/#Manopt.status_summary","page":"Specify a Solver","title":"Manopt.status_summary","text":"status_summary(e)\n\nReturn a string reporting about the current status of e, where e is a type from Manopt.\n\nThis method is similar to show but just returns a string. It might also be more verbose in explaining, or hide internal information.\n\n\n\n\n\n","category":"function"},{"location":"plans/","page":"Specify a Solver","title":"Specify a Solver","text":"The following symbols are used.","category":"page"},{"location":"plans/","page":"Specify a Solver","title":"Specify a Solver","text":"Symbol Used in Description\n:Activity DebugWhenActive activity of the debug action stored within\n:Basepoint TangentSpace the point the tangent space is at\n:Cost generic the cost function (within an objective, as passed down)\n:Debug DebugSolverState the stored debugDictionary\n:Gradient generic the gradient function (within an objective, as passed down)\n:Iterate generic the (current) iterate, similar to set_iterate!, within a state\n:Manifold generic the manifold (within a problem, as passed down)\n:Objective generic the objective (within a problem, as passed down)\n:SubProblem generic the sub problem (within a state, as passed down)\n:SubState generic the sub state (within a state, as passed down)\n:λ ProximalDCCost, ProximalDCGrad set the proximal parameter within the proximal sub objective elements\n:Population ParticleSwarmState a certain population of points, for example particle_swarm's swarm\n:Record RecordSolverState the stored recordDictionary\n:TrustRegionRadius TrustRegionsState the trust region radius, equivalent to :σ\n:ρ, :u ExactPenaltyCost, ExactPenaltyGrad Parameters within the exact penalty objective\n:ρ, :μ, :λ AugmentedLagrangianCost, AugmentedLagrangianGrad Parameters of the Lagrangian function\n:p, :X LinearizedDCCost, LinearizedDCGrad Parameters within the linearized functional used for the sub 
problem of the difference of convex algorithm","category":"page"},{"location":"plans/","page":"Specify a Solver","title":"Specify a Solver","text":"Any other lower case name or letter as well as single upper case letters access fields of the corresponding first argument. For example, :p could be used to access the field s.p of a state. This is often where the iterate is stored, so the recommended way is to use :Iterate from before.","category":"page"},{"location":"plans/","page":"Specify a Solver","title":"Specify a Solver","text":"Since the iterate is often stored in the state's field s.p, one could often also access the iterate with :p and similarly the gradient with :X. This is discouraged both for readability and to stay more generic, and it is recommended to use :Iterate and :Gradient instead in generic settings.","category":"page"},{"location":"plans/","page":"Specify a Solver","title":"Specify a Solver","text":"You can further activate a “Tutorial” mode by set_parameter!(:Mode, \"Tutorial\"). Internally, the following convenience function is available.","category":"page"},{"location":"plans/","page":"Specify a Solver","title":"Specify a Solver","text":"Manopt.is_tutorial_mode","category":"page"},{"location":"plans/#Manopt.is_tutorial_mode","page":"Specify a Solver","title":"Manopt.is_tutorial_mode","text":"is_tutorial_mode()\n\nA small internal helper to indicate whether tutorial mode is active.\n\nYou can set the mode by calling set_parameter!(:Mode, \"Tutorial\") or deactivate it by set_parameter!(:Mode, \"\").\n\n\n\n\n\n","category":"function"},{"location":"plans/#A-factory-for-providing-manifold-defaults","page":"Specify a Solver","title":"A factory for providing manifold defaults","text":"","category":"section"},{"location":"plans/","page":"Specify a Solver","title":"Specify a Solver","text":"In several cases a manifold might not yet be known at the time a (keyword) argument should be provided. 
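The tutorial-mode toggle just described can be sketched as follows; this assumes Manopt.jl is loaded and preferences can be written in the active environment.

```julia
using Manopt

# Activate tutorial mode; extra hints are then printed in several scenarios
set_parameter!(:Mode, "Tutorial")
Manopt.is_tutorial_mode()   # should now report the mode as active

# Deactivate it again by deleting the preference
set_parameter!(:Mode, "")
Manopt.is_tutorial_mode()   # should now report the mode as inactive
```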
Therefore, any type with a manifold default can be wrapped into a factory.","category":"page"},{"location":"plans/","page":"Specify a Solver","title":"Specify a Solver","text":"Manopt.ManifoldDefaultsFactory\nManopt._produce_type","category":"page"},{"location":"plans/#Manopt.ManifoldDefaultsFactory","page":"Specify a Solver","title":"Manopt.ManifoldDefaultsFactory","text":"ManifoldDefaultsFactory{M,T,A,K}\n\nA generic factory to postpone the instantiation of certain types from within Manopt.jl, in order to be able to adapt it to defaults from different manifolds and/or postpone the decision on which manifold to use to a later point.\n\nFor now this is established for\n\nDirectionUpdateRules (TODO: WIP)\nStepsize (TODO: WIP)\nStoppingCriterion (TODO: WIP)\n\nThis factory stores necessary and optional parameters as well as keyword arguments provided by the user to later produce the type this factory is for.\n\nBesides a manifold as a fallback, the factory can also be used for the (maybe simpler) types from the list of types that do not require the manifold.\n\nFields\n\nM::Union{Nothing,AbstractManifold}: provide a manifold for defaults\nargs::A: arguments (args...) that are passed to the type constructor\nkwargs::K: keyword arguments (kwargs...) that are passed to the type constructor\nconstructor_requires_manifold::Bool: indicate whether the type constructor requires the manifold or not\n\nConstructor\n\nManifoldDefaultsFactory(T, args...; kwargs...)\nManifoldDefaultsFactory(T, M, args...; kwargs...)\n\nInput\n\nT a subtype of types listed above that this factory is to produce\nM (optional) a manifold used for the defaults in case no manifold is provided.\nargs... arguments to pass to the constructor of T\nkwargs... 
keyword arguments to pass (overwrite) when constructing T.\n\nKeyword arguments\n\nrequires_manifold=true: indicate whether the type constructor this factory wraps requires the manifold as first argument or not.\n\nAll other keyword arguments are internally stored to be used in the type constructor, as well as arguments and keyword arguments for the update rule.\n\nsee also\n\n_produce_type\n\n\n\n\n\n","category":"type"},{"location":"plans/#Manopt._produce_type","page":"Specify a Solver","title":"Manopt._produce_type","text":"_produce_type(t::T, M::AbstractManifold)\n_produce_type(t::ManifoldDefaultsFactory{T}, M::AbstractManifold)\n\nUse the ManifoldDefaultsFactory{T} to produce an instance of type T. This acts transparently in the sense that if you already provide an instance t::T, it is just returned.\n\n\n\n\n\n","category":"function"},{"location":"tutorials/ConstrainedOptimization/#How-to-do-constrained-optimization","page":"Do constrained optimization","title":"How to do constrained optimization","text":"","category":"section"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"Ronny Bergmann","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"This tutorial is a short introduction to using solvers for constrained optimisation in Manopt.jl.","category":"page"},{"location":"tutorials/ConstrainedOptimization/#Introduction","page":"Do constrained optimization","title":"Introduction","text":"","category":"section"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"A constrained optimisation problem is given by","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained 
optimization","text":"tagP\nbeginalign*\noperatorname*argmin_pmathcal M f(p)\ntextsuch that quad g(p) leq 0\nquad h(p) = 0\nendalign*","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"where f mathcal M ℝ is a cost function, and g mathcal M ℝ^m and h mathcal M ℝ^n are the inequality and equality constraints, respectively. The leq and = in (P) are meant element-wise.","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"This can be seen as a balance between moving constraints into the geometry of a manifold mathcal M and keeping some, since they can be handled well in algorithms, see [BH19], [LB19] for details.","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"using Distributions, LinearAlgebra, Manifolds, Manopt, Random\nRandom.seed!(42);","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"In this tutorial we want to look at different ways to specify the problem and its implications. We start with specifying an example problem to illustrate the different available forms.","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"We consider the problem of a Nonnegative PCA, cf. 
Section 5.1.2 in [LB19]","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"Let v_0 ℝ^d, lVert v_0 rVert=1 be a given spike signal, that is, a sparse signal with only s=lfloor δd rfloor nonzero entries.","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"Z = sqrtσ v_0v_0^mathrmT+N","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"where σ is a signal-to-noise ratio and N is a matrix with random entries, where the entries are distributed with zero mean and standard deviation 1d on the off-diagonals and 2d on the diagonal","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"d = 150; # dimension of v0\nσ = 0.1^2; # SNR\nδ = 0.1; sp = Int(floor(δ * d)); # Sparsity\nS = sample(1:d, sp; replace=false);\nv0 = [i ∈ S ? 
1 / sqrt(sp) : 0.0 for i in 1:d];\nN = rand(Normal(0, 1 / d), (d, d)); N[diagind(N, 0)] .= rand(Normal(0, 2 / d), d);\nZ = sqrt(σ) * v0 * transpose(v0) + N;","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"In order to recover v_0 we consider the constrained optimisation problem on the sphere mathcal S^d-1 given by","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"beginalign*\noperatorname*argmin_pmathcal S^d-1 -p^mathrmTZp\ntextsuch that quad p geq 0\nendalign*","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"or in the previous notation f(p) = -p^mathrmTZp and g(p) = -p. We first initialize the manifold under consideration","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"M = Sphere(d - 1)","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"Sphere(149, ℝ)","category":"page"},{"location":"tutorials/ConstrainedOptimization/#A-first-augmented-Lagrangian-run","page":"Do constrained optimization","title":"A first augmented Lagrangian run","text":"","category":"section"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"We first define f and g as usual functions","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"f(M, p) = -transpose(p) * Z * p;\ng(M, p) = -p;","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained 
optimization","title":"Do constrained optimization","text":"Since f is a function defined in the embedding ℝ^d as well, we obtain its gradient by projection.","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"grad_f(M, p) = project(M, p, -transpose(Z) * p - Z * p);","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"For the constraints this is a little more involved, since each function g_i=g(p)_i=-p_i has to return its own gradient. In the embedding, these are again just operatornamegrad g_i(p) = -e_i, the i-th unit vector. We can project these again onto the tangent space at p:","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"grad_g(M, p) = project.(\n Ref(M), Ref(p), [[i == j ? -1.0 : 0.0 for j in 1:d] for i in 1:d]\n);","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"We further start at a random point:","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"p0 = rand(M);","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"Let’s verify a few things for the initial point","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"f(M, p0)","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained 
optimization","text":"0.005667399180991248","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"and by how much the function g is positive","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"maximum(g(M, p0))","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"0.17885478285466855","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"Now as a first method we can just call the Augmented Lagrangian Method with a simple call:","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"@time v1 = augmented_Lagrangian_method(\n M, f, grad_f, p0; g=g, grad_g=grad_g,\n debug=[:Iteration, :Cost, :Stop, \" | \", (:Change, \"Δp : %1.5e\"), 20, \"\\n\"],\n stopping_criterion = StopAfterIteration(300) | (\n StopWhenSmallerOrEqual(:ϵ, 1e-5) & StopWhenChangeLess(M, 1e-8)\n )\n);","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"Initial f(x): 0.005667 | \n# 20 f(x): -0.123557 | Δp : 1.00133e+00\n# 40 f(x): -0.123557 | Δp : 3.77088e-08\n# 60 f(x): -0.123557 | Δp : 2.40619e-05\nThe value of the variable (ϵ) is smaller than or equal to its threshold (1.0e-5).\nAt iteration 68 the algorithm performed a step with a change (7.600544776224794e-11) less than 9.77237220955808e-6.\n 6.139017 seconds (18.82 M allocations: 1.489 GiB, 5.76% gc time, 97.49% compilation time)","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained 
optimization","text":"Now we have both a lower function value and the point is nearly within the constraints, namely up to numerical inaccuracies","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"f(M, v1)","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"-0.12353580883894738","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"maximum( g(M, v1) )","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"4.577229036010474e-12","category":"page"},{"location":"tutorials/ConstrainedOptimization/#A-faster-augmented-Lagrangian-run","page":"Do constrained optimization","title":"A faster augmented Lagrangian run","text":"","category":"section"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"Now this is a little slow, so we can modify two things:","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"Gradients should be evaluated in place, so for example","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"grad_f!(M, X, p) = project!(M, X, p, -transpose(Z) * p - Z * p);","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"The constraints are currently always evaluated all together, since the function grad_g always returns a vector of gradients. We first change the constraints function into a vector of functions. 
We further change the gradient both into a vector of gradient functions operatornamegrad g_ii=1ldotsd, as well as gradients that are computed in place.","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"g2 = [(M, p) -> -p[i] for i in 1:d];\ngrad_g2! = [\n (M, X, p) -> project!(M, X, p, [i == j ? -1.0 : 0.0 for j in 1:d]) for i in 1:d\n];","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"We obtain","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"@time v2 = augmented_Lagrangian_method(\n M, f, grad_f!, p0; g=g2, grad_g=grad_g2!, evaluation=InplaceEvaluation(),\n debug=[:Iteration, :Cost, :Stop, \" | \", (:Change, \"Δp : %1.5e\"), 20, \"\\n\"],\n stopping_criterion = StopAfterIteration(300) | (\n StopWhenSmallerOrEqual(:ϵ, 1e-5) & StopWhenChangeLess(M, 1e-8)\n )\n );","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"Initial f(x): 0.005667 | \n# 20 f(x): -0.123557 | Δp : 1.00133e+00\n# 40 f(x): -0.123557 | Δp : 3.77088e-08\n# 60 f(x): -0.123557 | Δp : 2.40619e-05\nThe value of the variable (ϵ) is smaller than or equal to its threshold (1.0e-5).\nAt iteration 68 the algorithm performed a step with a change (7.600544776224794e-11) less than 9.77237220955808e-6.\n 2.378452 seconds (7.40 M allocations: 748.106 MiB, 3.43% gc time, 94.95% compilation time)","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"As a technical remark: note that (by default) the change to InplaceEvaluations affects both the constrained solver as well as the inner solver of the subproblem in each 
iteration.","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"f(M, v2)","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"-0.12353580883894738","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"maximum(g(M, v2))","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"4.577229036010474e-12","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"These are very similar to the previous values, but the solver took much less time and needed fewer memory allocations.","category":"page"},{"location":"tutorials/ConstrainedOptimization/#Exact-penalty-method","page":"Do constrained optimization","title":"Exact penalty method","text":"","category":"section"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"As a second solver, we have the Exact Penalty Method, which is currently available with two smoothing variants that turn the subproblem into a smooth optimization problem for an inner solver, by default again quasi-Newton: LogarithmicSumOfExponentials and LinearQuadraticHuber. We compare both here as well. 
The first smoothing technique is the default, so we can just call","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"@time v3 = exact_penalty_method(\n M, f, grad_f!, p0; g=g2, grad_g=grad_g2!, evaluation=InplaceEvaluation(),\n debug=[:Iteration, :Cost, :Stop, \" | \", :Change, 50, \"\\n\"],\n);","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"Initial f(x): 0.005667 | \n# 50 f(x): -0.122792 | Last Change: 0.982159\n# 100 f(x): -0.123555 | Last Change: 0.013515\nThe value of the variable (ϵ) is smaller than or equal to its threshold (1.0e-6).\nAt iteration 102 the algorithm performed a step with a change (3.0244885037602495e-7) less than 1.0e-6.\n 2.743942 seconds (14.51 M allocations: 4.764 GiB, 8.96% gc time, 65.84% compilation time)","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"We obtain a similar cost value as for the Augmented Lagrangian Solver from before, but here the constraint is actually fulfilled and not just numerically “on the boundary”.","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"f(M, v3)","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"-0.12355544268449432","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"maximum(g(M, v3))","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained 
optimization","text":"-3.589798060999793e-6","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"The second smoothing technique is often beneficial when we have a lot of constraints (in the previously mentioned vectorial manner), since we can avoid several gradient evaluations for the constraint functions here. This leads to a faster iteration time.","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"@time v4 = exact_penalty_method(\n M, f, grad_f!, p0; g=g2, grad_g=grad_g2!,\n evaluation=InplaceEvaluation(),\n smoothing=LinearQuadraticHuber(),\n debug=[:Iteration, :Cost, :Stop, \" | \", :Change, 50, \"\\n\"],\n);","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"Initial f(x): 0.005667 | \n# 50 f(x): -0.123559 | Last Change: 0.008024\n# 100 f(x): -0.123557 | Last Change: 0.000026\nThe value of the variable (ϵ) is smaller than or equal to its threshold (1.0e-6).\nAt iteration 101 the algorithm performed a step with a change (1.0069976577931588e-8) less than 1.0e-6.\n 2.161071 seconds (9.44 M allocations: 2.176 GiB, 6.59% gc time, 84.28% compilation time)","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"For the result we see the same behaviour as for the other smoothing.","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"f(M, v4)","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained 
optimization","text":"-0.12355667846565418","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"maximum(g(M, v4))","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"2.6974802196316014e-8","category":"page"},{"location":"tutorials/ConstrainedOptimization/#Comparing-to-the-unconstrained-solver","page":"Do constrained optimization","title":"Comparing to the unconstrained solver","text":"","category":"section"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"We can compare this to the global optimum on the sphere, which is the unconstrained optimisation problem, where we can just use quasi-Newton.","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"Note that this is much faster, since every iteration of the constrained solvers above performs a full quasi-Newton run as its subsolver.","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"@time w1 = quasi_Newton(\n M, f, grad_f!, p0; evaluation=InplaceEvaluation()\n);","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":" 0.740804 seconds (1.92 M allocations: 115.362 MiB, 2.26% gc time, 96.83% compilation time)","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"f(M, w1)","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained 
optimization","text":"-0.13990874034056555","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"But of course the constraints are not fulfilled here, and we have clearly positive entries in g(w_1)","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"maximum(g(M, w1))","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"0.11803200739746737","category":"page"},{"location":"tutorials/ConstrainedOptimization/#Technical-details","page":"Do constrained optimization","title":"Technical details","text":"","category":"section"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"This tutorial is cached. It was last run on the following package versions.","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"using Pkg\nPkg.status()","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"Status `~/work/Manopt.jl/Manopt.jl/tutorials/Project.toml`\n [6e4b80f9] BenchmarkTools v1.5.0\n⌅ [5ae59095] Colors v0.12.11\n [31c24e10] Distributions v0.25.113\n [26cc04aa] FiniteDifferences v0.12.32\n [7073ff75] IJulia v1.26.0\n [8ac3fa9e] LRUCache v1.6.1\n [af67fdf4] ManifoldDiff v0.3.13\n [1cead3c2] Manifolds v0.10.7\n [3362f125] ManifoldsBase v0.15.22\n [0fc0a36d] Manopt v0.5.3 `~/work/Manopt.jl/Manopt.jl`\n [91a5bcdd] Plots v1.40.9\n [731186ca] RecursiveArrayTools v3.27.4\nInfo Packages marked with ⌅ have new versions available but compatibility constraints restrict them from upgrading. 
To see why use `status --outdated`","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"using Dates\nnow()","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"2024-11-21T20:35:55.524","category":"page"},{"location":"tutorials/ConstrainedOptimization/#Literature","page":"Do constrained optimization","title":"Literature","text":"","category":"section"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"R. Bergmann and R. Herzog. Intrinsic formulation of KKT conditions and constraint qualifications on smooth manifolds. SIAM Journal on Optimization 29, 2423–2444 (2019), arXiv:1804.06214.\n\n\n\nC. Liu and N. Boumal. Simple algorithms for optimization on Riemannian manifolds with constraints. Applied Mathematics & Optimization (2019), arXiv:1901.10000.\n\n\n\n","category":"page"},{"location":"helpers/exports/#sec-exports","page":"Exports","title":"Exports","text":"","category":"section"},{"location":"helpers/exports/","page":"Exports","title":"Exports","text":"Exports aim to provide a consistent generation of images of your results. For example, if you record the trace your algorithm walks along on the Sphere, you can easily export this trace to a rendered image using asymptote_export_S2_signals and render the result with Asymptote. 
Besides these, you can always record values during your iterations and export them, for example to CSV.","category":"page"},{"location":"helpers/exports/#Asymptote","page":"Exports","title":"Asymptote","text":"","category":"section"},{"location":"helpers/exports/","page":"Exports","title":"Exports","text":"The following functions provide exports of both graphics and raw data using Asymptote.","category":"page"},{"location":"helpers/exports/","page":"Exports","title":"Exports","text":"Modules = [Manopt]\nPages = [\"Asymptote.jl\"]","category":"page"},{"location":"helpers/exports/#Manopt.asymptote_export_S2_data-Tuple{String}","page":"Exports","title":"Manopt.asymptote_export_S2_data","text":"asymptote_export_S2_data(filename)\n\nExport given data as an array of points on the 2-sphere, which might be one-, two- or three-dimensional data with points on the Sphere mathbb S^2.\n\nInput\n\nfilename a file to store the Asymptote code in.\n\nOptional arguments for the data\n\ndata a point representing the 1D, 2D, or 3D array of points\nelevation_color_scheme A ColorScheme for elevation\nscale_axes=(1/3,1/3,1/3): move spheres closer to each other by a factor per direction\n\nOptional arguments for asymptote\n\narrow_head_size=1.8: size of the arrowheads of the vectors (in mm)\ncamera_position position of the camera scene (default: atop the center of the data in the xy-plane)\ntarget position the camera points at (default: center of xy-plane within data).\n\n\n\n\n\n","category":"method"},{"location":"helpers/exports/#Manopt.asymptote_export_S2_signals-Tuple{String}","page":"Exports","title":"Manopt.asymptote_export_S2_signals","text":"asymptote_export_S2_signals(filename; points, curves, tangent_vectors, colors, kwargs...)\n\nExport given points, curves, and tangent_vectors on the sphere mathbb S^2 to Asymptote.\n\nInput\n\nfilename a file to store the Asymptote code in.\n\nKeyword arguments for the data\n\ncolors=Dict{Symbol,Array{RGBA{Float64},1}}(): dictionary of 
color arrays, indexed by symbols :points, :curves and :tvector, where each entry has to provide at least as many colors as the length of the corresponding sets.\ncurves=Array{Array{Float64,1},1}(undef, 0): an Array of Arrays of points on the sphere, where each inner array is interpreted as a curve and is accompanied by an entry within colors.\npoints=Array{Array{Float64,1},1}(undef, 0): an Array of Arrays of points on the sphere where each inner array is interpreted as a set of points and is accompanied by an entry within colors.\ntangent_vectors=Array{Array{Tuple{Float64,Float64},1},1}(undef, 0): an Array of Arrays of tuples, where the first is a point, the second a tangent vector and each set of vectors is accompanied by an entry from within colors.\n\nKeyword arguments for asymptote\n\narrow_head_size=6.0: size of the arrowheads of the tangent vectors\narrow_head_sizes overrides the previous value to specify a value per tVector set.\ncamera_position=(1., 1., 0.): position of the camera in the Asymptote scene\nline_width=1.0: size of the lines used to draw the curves.\nline_widths overrides the previous value to specify a value per curve and tVector set.\ndot_size=1.0: size of the dots used to draw the points.\ndot_sizes overrides the previous value to specify a value per point set.\nsize=nothing: a tuple for the image size, otherwise a relative size 4cm is used.\nsphere_color=RGBA{Float64}(0.85, 0.85, 0.85, 0.6): color of the sphere the data is drawn on\nsphere_line_color=RGBA{Float64}(0.75, 0.75, 0.75, 0.6): color of the lines on the sphere\nsphere_line_width=0.5: line width of the lines on the sphere\ntarget=(0.,0.,0.): position the camera points at\n\n\n\n\n\n","category":"method"},{"location":"helpers/exports/#Manopt.asymptote_export_SPD-Tuple{String}","page":"Exports","title":"Manopt.asymptote_export_SPD","text":"asymptote_export_SPD(filename)\n\nExport given data as a point on a Power(SymmetricPositiveDefinite(3)) manifold of one-, two- or 
three-dimensional data with points on the manifold of symmetric positive definite matrices.\n\nInput\n\nfilename a file to store the Asymptote code in.\n\nOptional arguments for the data\n\ndata a point representing the 1D, 2D, or 3D array of SPD matrices\ncolor_scheme a ColorScheme for Geometric Anisotropy Index\nscale_axes=(1/3,1/3,1/3): move symmetric positive definite matrices closer to each other by a factor per direction compared to the distance estimated by the maximal eigenvalue of all involved SPD points\n\nOptional arguments for asymptote\n\ncamera_position position of the camera scene (default: atop the center of the data in the xy-plane)\ntarget position the camera points at (default: center of xy-plane within data).\n\nBoth values camera_position and target are scaled by scaledAxes*EW, where EW is the maximal eigenvalue in the data.\n\n\n\n\n\n","category":"method"},{"location":"helpers/exports/#Manopt.render_asymptote-Tuple{Any}","page":"Exports","title":"Manopt.render_asymptote","text":"render_asymptote(filename; render=4, format=\"png\", ...)\n\nrender an exported asymptote file specified in the filename, which can also be given as a relative or full path\n\nInput\n\nfilename filename of the exported asy and rendered image\n\nKeyword arguments\n\nthe default values are given in brackets\n\nrender=4: render level of asymptote passed to its -render option. 
This can be removed from the command by setting it to nothing.\nformat=\"png\": final rendered format passed to the -f option\nexport_file: (the filename with format as ending) specify the export filename\n\n\n\n\n\n","category":"method"},{"location":"plans/problem/#sec-problem","page":"Problem","title":"A Manopt problem","text":"","category":"section"},{"location":"plans/problem/","page":"Problem","title":"Problem","text":"CurrentModule = Manopt","category":"page"},{"location":"plans/problem/","page":"Problem","title":"Problem","text":"A problem describes all static data of an optimisation task and has as a super type","category":"page"},{"location":"plans/problem/","page":"Problem","title":"Problem","text":"AbstractManoptProblem\nget_objective\nget_manifold","category":"page"},{"location":"plans/problem/#Manopt.AbstractManoptProblem","page":"Problem","title":"Manopt.AbstractManoptProblem","text":"AbstractManoptProblem{M<:AbstractManifold}\n\nDescribe a Riemannian optimization problem with all static (not-changing) properties.\n\nThe most prominent features that should always be stated here are\n\nthe AbstractManifold mathcal M\nthe cost function f mathcal M ℝ\n\nUsually the cost should be within an AbstractManifoldObjective.\n\n\n\n\n\n","category":"type"},{"location":"plans/problem/#Manopt.get_objective","page":"Problem","title":"Manopt.get_objective","text":"get_objective(o::AbstractManifoldObjective, recursive=true)\n\nreturn the (one step) undecorated AbstractManifoldObjective of the (possibly) decorated o. As long as your decorated objective stores the objective within o.objective and the dispatch_objective_decorator is set to Val{true}, the internal objective is extracted automatically.\n\nBy default the objective that is stored within a decorated objective is assumed to be at o.objective. 
Overwrite _get_objective(o, ::Val{true}, recursive) to change this behaviour for your objective o for both the recursive and the direct case.\n\nIf recursive is set to false, only the most outer decorator is taken away instead of all.\n\n\n\n\n\nget_objective(mp::AbstractManoptProblem, recursive=false)\n\nreturn the objective AbstractManifoldObjective stored within an AbstractManoptProblem. If recursive is set to true, it additionally unwraps all decorators of the objective\n\n\n\n\n\nget_objective(amso::AbstractManifoldSubObjective)\n\nReturn the (original) objective the sub objective is built on.\n\n\n\n\n\n","category":"function"},{"location":"plans/problem/#Manopt.get_manifold","page":"Problem","title":"Manopt.get_manifold","text":"get_manifold(amp::AbstractManoptProblem)\n\nreturn the manifold stored within an AbstractManoptProblem\n\n\n\n\n\n","category":"function"},{"location":"plans/problem/","page":"Problem","title":"Problem","text":"Usually, such a problem is determined by the manifold or domain of the optimisation and the objective with all its properties used within an algorithm, see The Objective. For that one can just use","category":"page"},{"location":"plans/problem/","page":"Problem","title":"Problem","text":"DefaultManoptProblem","category":"page"},{"location":"plans/problem/#Manopt.DefaultManoptProblem","page":"Problem","title":"Manopt.DefaultManoptProblem","text":"DefaultManoptProblem{TM <: AbstractManifold, Objective <: AbstractManifoldObjective}\n\nModel a default manifold problem, that (just) consists of the domain of optimisation, that is an AbstractManifold and an AbstractManifoldObjective\n\n\n\n\n\n","category":"type"},{"location":"plans/problem/","page":"Problem","title":"Problem","text":"For constrained optimisation, there are different possibilities to represent the gradients of the constraints. 
This can be done with a","category":"page"},{"location":"plans/problem/","page":"Problem","title":"Problem","text":"ConstraintProblem","category":"page"},{"location":"plans/problem/","page":"Problem","title":"Problem","text":"The primal-dual based solvers (Chambolle-Pock and the PD semi-smooth Newton) both need two manifolds as their domains, hence there also exists a","category":"page"},{"location":"plans/problem/","page":"Problem","title":"Problem","text":"TwoManifoldProblem","category":"page"},{"location":"plans/problem/#Manopt.TwoManifoldProblem","page":"Problem","title":"Manopt.TwoManifoldProblem","text":"TwoManifoldProblem{\n MT<:AbstractManifold,NT<:AbstractManifold,O<:AbstractManifoldObjective\n} <: AbstractManoptProblem{MT}\n\nAn abstract type for primal-dual-based problems.\n\n\n\n\n\n","category":"type"},{"location":"plans/problem/","page":"Problem","title":"Problem","text":"From the two ingredients here, you can find more information about","category":"page"},{"location":"plans/problem/","page":"Problem","title":"Problem","text":"the ManifoldsBase.AbstractManifold in ManifoldsBase.jl\nthe AbstractManifoldObjective on the page about the objective.","category":"page"},{"location":"solvers/quasi_Newton/#Riemannian-quasi-Newton-methods","page":"Quasi-Newton","title":"Riemannian quasi-Newton methods","text":"","category":"section"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":" CurrentModule = Manopt","category":"page"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":" quasi_Newton\n quasi_Newton!","category":"page"},{"location":"solvers/quasi_Newton/#Manopt.quasi_Newton","page":"Quasi-Newton","title":"Manopt.quasi_Newton","text":"quasi_Newton(M, f, grad_f, p; kwargs...)\nquasi_Newton!(M, f, grad_f, p; kwargs...)\n\nPerform a quasi-Newton iteration to solve\n\noperatornameargmin_p mathcal M f(p)\n\nwith start point p. The iterations can be done in-place of p=p^(0). 
The k-th iteration consists of\n\nCompute the search direction η^(k) = -mathcal B_k operatornamegradf (p^(k)) or solve mathcal H_k η^(k) = -operatornamegradf (p^(k)).\nDetermine a suitable stepsize α_k along the curve γ(α) = R_p^(k)(α η^(k)), usually by using WolfePowellLinesearch.\nCompute p^(k+1) = R_p^(k)(α_k η^(k)).\nDefine s_k = mathcal T_p^(k) α_k η^(k)(α_k η^(k)) and y_k = operatornamegradf(p^(k+1)) - mathcal T_p^(k) α_k η^(k)(operatornamegradf(p^(k))), where mathcal T denotes a vector transport.\nCompute the new approximate Hessian H_k+1 or its inverse B_k+1.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\ngrad_f: the (Riemannian) gradient operatornamegradf: \\mathcal M → T_{p}\\mathcal M of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place\np: a point on the manifold mathcal M\n\nKeyword arguments\n\nbasis=DefaultOrthonormalBasis(): basis to use within each of the tangent spaces to represent the Hessian (inverse) for the cases where it is stored in full (matrix) form.\ncautious_update=false: whether or not to use the QuasiNewtonCautiousDirectionUpdate which wraps the direction_update.\ncautious_function=(x) -> x * 1e-4: a monotone increasing function for the cautious update that is zero at x=0 and strictly increasing at 0\ndirection_update=InverseBFGS(): the AbstractQuasiNewtonUpdateRule to use.\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). 
Since usually the first argument is the manifold, the modified argument is the second. For example grad_f(M,p) allocates, but grad_f!(M, X, p) computes the result in-place of X.\ninitial_operator= initial_scale*Matrix{Float64}(I, n, n): initial matrix to use in case the Hessian (inverse) approximation is stored as a full matrix, that is n=manifold_dimension(M). This matrix is only allocated for the full matrix case. See also initial_scale.\ninitial_scale=1.0: initial scale s to use within fracss_ky_k_p_klVert y_krVert_p_k in the computation of the limited memory approach. See also initial_operator\nmemory_size=20: limited memory, number of s_k y_k to store. Set to a negative value to use a full memory (matrix) representation\nnondescent_direction_behavior=:reinitialize_direction_update: specify how a non-descent direction is handled. This can be\n:step_towards_negative_gradient: the direction is replaced with the negative gradient, a message is stored.\n:ignore: the verification is not performed, so any computed direction is accepted. No message is stored.\n:reinitialize_direction_update: discards operator state stored in direction update rules.\nany other value performs the verification, keeps the direction but stores a message.\nA stored message can be displayed using DebugMessages.\nproject!=copyto!: for numerical stability it is possible to project onto the tangent space after every iteration. 
the function has to work in-place of Y, that is (M, Y, p, X) -> Y, where X and Y can be the same memory.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstepsize=WolfePowellLinesearch(retraction_method, vector_transport_method): a functor inheriting from Stepsize to determine a step size\nstopping_criterion=StopAfterIteration(max(1000, memory_size))|StopWhenGradientNormLess(1e-6): a functor indicating that the stopping criterion is fulfilled\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/quasi_Newton/#Manopt.quasi_Newton!","page":"Quasi-Newton","title":"Manopt.quasi_Newton!","text":"quasi_Newton(M, f, grad_f, p; kwargs...)\nquasi_Newton!(M, f, grad_f, p; kwargs...)\n\nPerform a quasi-Newton iteration to solve\n\noperatornameargmin_p mathcal M f(p)\n\nwith start point p. The iterations can be done in-place of p=p^(0). 
The k-th iteration consists of\n\nCompute the search direction η^(k) = -mathcal B_k operatornamegradf (p^(k)) or solve mathcal H_k η^(k) = -operatornamegradf (p^(k)).\nDetermine a suitable stepsize α_k along the curve γ(α) = R_p^(k)(α η^(k)), usually by using WolfePowellLinesearch.\nCompute p^(k+1) = R_p^(k)(α_k η^(k)).\nDefine s_k = mathcal T_p^(k) α_k η^(k)(α_k η^(k)) and y_k = operatornamegradf(p^(k+1)) - mathcal T_p^(k) α_k η^(k)(operatornamegradf(p^(k))), where mathcal T denotes a vector transport.\nCompute the new approximate Hessian H_k+1 or its inverse B_k+1.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\ngrad_f: the (Riemannian) gradient operatornamegradf: \\mathcal M → T_{p}\\mathcal M of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place\np: a point on the manifold mathcal M\n\nKeyword arguments\n\nbasis=DefaultOrthonormalBasis(): basis to use within each of the tangent spaces to represent the Hessian (inverse) for the cases where it is stored in full (matrix) form.\ncautious_update=false: whether or not to use the QuasiNewtonCautiousDirectionUpdate which wraps the direction_update.\ncautious_function=(x) -> x * 1e-4: a monotone increasing function for the cautious update that is zero at x=0 and strictly increasing at 0\ndirection_update=InverseBFGS(): the AbstractQuasiNewtonUpdateRule to use.\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). 
Since usually the first argument is the manifold, the modified argument is the second. For example grad_f(M,p) allocates, but grad_f!(M, X, p) computes the result in-place of X.\ninitial_operator= initial_scale*Matrix{Float64}(I, n, n): initial matrix to use in case the Hessian (inverse) approximation is stored as a full matrix, that is n=manifold_dimension(M). This matrix is only allocated for the full matrix case. See also initial_scale.\ninitial_scale=1.0: initial scale s to use within fracss_ky_k_p_klVert y_krVert_p_k in the computation of the limited memory approach. See also initial_operator\nmemory_size=20: limited memory, number of s_k y_k to store. Set to a negative value to use a full memory (matrix) representation\nnondescent_direction_behavior=:reinitialize_direction_update: specify how a non-descent direction is handled. This can be\n:step_towards_negative_gradient: the direction is replaced with the negative gradient, a message is stored.\n:ignore: the verification is not performed, so any computed direction is accepted. No message is stored.\n:reinitialize_direction_update: discards operator state stored in direction update rules.\nany other value performs the verification, keeps the direction but stores a message.\nA stored message can be displayed using DebugMessages.\nproject!=copyto!: for numerical stability it is possible to project onto the tangent space after every iteration. 
the function has to work in-place of Y, that is (M, Y, p, X) -> Y, where X and Y can be the same memory.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstepsize=WolfePowellLinesearch(retraction_method, vector_transport_method): a functor inheriting from Stepsize to determine a step size\nstopping_criterion=StopAfterIteration(max(1000, memory_size))|StopWhenGradientNormLess(1e-6): a functor indicating that the stopping criterion is fulfilled\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/quasi_Newton/#Background","page":"Quasi-Newton","title":"Background","text":"","category":"section"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"The aim is to minimize a real-valued function on a Riemannian manifold, that is","category":"page"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"min f(x) quad x mathcalM","category":"page"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"Riemannian quasi-Newton methods are, as generalizations of their Euclidean counterparts, Riemannian line search methods. These methods determine a search direction η_k T_x_k mathcalM at the current iterate x_k and a suitable stepsize α_k along gamma(α) = R_x_k(α η_k), where R T mathcalM mathcalM is a retraction. 
The next iterate is obtained by","category":"page"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"x_k+1 = R_x_k(α_k η_k)","category":"page"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"In quasi-Newton methods, the search direction is given by","category":"page"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"η_k = -mathcalH_k^-1operatornamegradf (x_k) = -mathcalB_k operatornamegradf (x_k)","category":"page"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"where mathcalH_k T_x_k mathcalM T_x_k mathcalM is a positive definite self-adjoint operator, which approximates the action of the Hessian operatornameHess f (x_k) and mathcalB_k = mathcalH_k^-1. The idea of quasi-Newton methods is that, instead of creating a completely new approximation of the Hessian operator operatornameHess f(x_k+1) or its inverse at every iteration, the previous operator mathcalH_k or mathcalB_k is updated by a convenient formula using the obtained information about the curvature of the objective function during the iteration. The resulting operator mathcalH_k+1 or mathcalB_k+1 acts on the tangent space T_x_k+1 mathcalM of the freshly computed iterate x_k+1. In order to get a well-defined method, the following requirements are placed on the new operator mathcalH_k+1 or mathcalB_k+1 that is created by an update. Since the Hessian operatornameHess f(x_k+1) is a self-adjoint operator on the tangent space T_x_k+1 mathcalM, and mathcalH_k+1 approximates it, one requirement is that mathcalH_k+1 or mathcalB_k+1 is also self-adjoint on T_x_k+1 mathcalM. In order to achieve a steady descent, the next requirement is that η_k is a descent direction in each iteration. Hence a further requirement is that mathcalH_k+1 or mathcalB_k+1 is a positive definite operator on T_x_k+1 mathcalM. 
In order to get information about the curvature of the objective function into the new operator mathcalH_k+1 or mathcalB_k+1, the last requirement is a form of a Riemannian quasi-Newton equation:","category":"page"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"mathcalH_k+1 T_x_k rightarrow x_k+1(R_x_k^-1(x_k+1)) = operatornamegradf(x_k+1) - T_x_k rightarrow x_k+1(operatornamegradf(x_k))","category":"page"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"or","category":"page"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"mathcalB_k+1 operatornamegradf(x_k+1) - T_x_k rightarrow x_k+1(operatornamegradf(x_k)) = T_x_k rightarrow x_k+1(R_x_k^-1(x_k+1))","category":"page"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"where T_x_k rightarrow x_k+1 T_x_k mathcalM T_x_k+1 mathcalM and the chosen retraction R is the associated retraction of T. Note that, of course, not all updates in all situations meet these conditions in every iteration. For specific quasi-Newton updates, the fulfilment of the Riemannian curvature condition, which requires that","category":"page"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"g_x_k+1(s_k y_k) 0","category":"page"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"holds, is a requirement for the inheritance of the self-adjointness and positive definiteness of the mathcalH_k or mathcalB_k to the operator mathcalH_k+1 or mathcalB_k+1. Unfortunately, the fulfilment of the Riemannian curvature condition is not given by a step size α_k 0 that satisfies the generalized Wolfe conditions. 
However, to create a positive definite operator mathcalH_k+1 or mathcalB_k+1 in each iteration, the so-called locking condition was introduced in [HGA15], which requires that the isometric vector transport T^S, which is used in the update formula, and its associated retraction R fulfil","category":"page"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"T^Sx ξ_x(ξ_x) = β T^Rx ξ_x(ξ_x) quad β = fraclVert ξ_x rVert_xlVert T^Rx ξ_x(ξ_x) rVert_R_x(ξ_x)","category":"page"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"where T^R is the vector transport by differentiated retraction. With the requirement that the isometric vector transport T^S and its associated retraction R satisfy the locking condition and using the tangent vector","category":"page"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"y_k = β_k^-1 operatornamegradf(x_k+1) - T^Sx_k α_k η_k(operatornamegradf(x_k))","category":"page"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"where","category":"page"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"β_k = fraclVert α_k η_k rVert_x_klVert T^Rx_k α_k η_k(α_k η_k) rVert_x_k+1","category":"page"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"in the update, it can be shown that choosing a stepsize α_k 0 that satisfies the Riemannian Wolfe conditions leads to the fulfilment of the Riemannian curvature condition, which in turn implies that the operator generated by the updates is positive definite. 
In the following the specific operators are denoted in matrix notation and hence use H_k and B_k, respectively.","category":"page"},{"location":"solvers/quasi_Newton/#Direction-updates","page":"Quasi-Newton","title":"Direction updates","text":"","category":"section"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"In general there are different ways to compute a fixed AbstractQuasiNewtonUpdateRule. These are represented by","category":"page"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"AbstractQuasiNewtonDirectionUpdate\nQuasiNewtonMatrixDirectionUpdate\nQuasiNewtonLimitedMemoryDirectionUpdate\nQuasiNewtonCautiousDirectionUpdate\nManopt.initialize_update!","category":"page"},{"location":"solvers/quasi_Newton/#Manopt.AbstractQuasiNewtonDirectionUpdate","page":"Quasi-Newton","title":"Manopt.AbstractQuasiNewtonDirectionUpdate","text":"AbstractQuasiNewtonDirectionUpdate\n\nAn abstract representation of a quasi-Newton update rule to determine the next direction given the current QuasiNewtonState.\n\nAll subtypes should be functors, they should be callable as H(M,x,d) to compute a new direction update.\n\n\n\n\n\n","category":"type"},{"location":"solvers/quasi_Newton/#Manopt.QuasiNewtonMatrixDirectionUpdate","page":"Quasi-Newton","title":"Manopt.QuasiNewtonMatrixDirectionUpdate","text":"QuasiNewtonMatrixDirectionUpdate <: AbstractQuasiNewtonDirectionUpdate\n\nThe QuasiNewtonMatrixDirectionUpdate represents a quasi-Newton update rule, where the operator is stored as a matrix. A distinction is made between the update of the approximation of the Hessian, H_k mapsto H_k+1, and the update of the approximation of the Hessian inverse, B_k mapsto B_k+1. 
For the first case, the coordinates of the search direction η_k with respect to a basis b_i_i=1^n are determined by solving a linear system of equations\n\ntextSolve quad H_k hatη_k = - widehatoperatornamegradf(x_k)\n\nwhere H_k is the matrix representing the operator with respect to the basis b_i_i=1^n and widehatoperatornamegrad f(p_k) represents the coordinates of the gradient of the objective function f in x_k with respect to the basis b_i_i=1^n. If a method is chosen where the Hessian inverse is approximated, the coordinates of the search direction η_k with respect to a basis b_i_i=1^n are obtained simply by matrix-vector multiplication\n\nhatη_k = - B_k widehatoperatornamegradf(x_k)\n\nwhere B_k is the matrix representing the operator with respect to the basis b_i_i=1^n and widehatoperatornamegrad f(p_k) again denotes the coordinates of the gradient. In the end, the search direction η_k is generated from the coordinates hatη_k and the vectors of the basis b_i_i=1^n in both variants. The AbstractQuasiNewtonUpdateRule indicates which quasi-Newton update rule is used. 
In all of them, the Euclidean update formula is used to generate the matrices H_k+1 and B_k+1, and the basis b_i_i=1^n is transported into the upcoming tangent space T_p_k+1 mathcal M, preferably with an isometric vector transport, or generated there.\n\nProvided functors\n\n(mp::AbstractManoptproblem, st::QuasiNewtonState) -> η to compute the update direction\n(η, mp::AbstractManoptproblem, st::QuasiNewtonState) -> η to compute the update direction in-place of η\n\nFields\n\nbasis: an AbstractBasis to use in the tangent spaces\nmatrix: the matrix which represents the approximating operator.\ninitial_scale: when initialising the update, a unit matrix is used as initial approximation, scaled by this factor\nupdate: an AbstractQuasiNewtonUpdateRule.\nvector_transport_method::AbstractVectorTransportMethodP: a vector transport mathcal T_ to use, see the section on vector transports\n\nConstructor\n\nQuasiNewtonMatrixDirectionUpdate(\n M::AbstractManifold,\n update,\n basis::B=DefaultOrthonormalBasis(),\n m=Matrix{Float64}(I, manifold_dimension(M), manifold_dimension(M));\n kwargs...\n)\n\nKeyword arguments\n\ninitial_scale=1.0\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\nGenerate the update rule with defaults from a manifold and the names corresponding to the fields.\n\nSee also\n\nQuasiNewtonLimitedMemoryDirectionUpdate, QuasiNewtonCautiousDirectionUpdate, AbstractQuasiNewtonDirectionUpdate\n\n\n\n\n\n","category":"type"},{"location":"solvers/quasi_Newton/#Manopt.QuasiNewtonLimitedMemoryDirectionUpdate","page":"Quasi-Newton","title":"Manopt.QuasiNewtonLimitedMemoryDirectionUpdate","text":"QuasiNewtonLimitedMemoryDirectionUpdate <: AbstractQuasiNewtonDirectionUpdate\n\nThis AbstractQuasiNewtonDirectionUpdate represents the limited-memory Riemannian BFGS update, where the approximating operator is represented by m stored pairs of tangent vectors 
widehats_i_i=k-m^k-1 and widehaty_i_i=k-m^k-1 in the k-th iteration. For the calculation of the search direction X_k, the generalisation of the two-loop recursion is used (see [HGA15]), since it only requires inner products and linear combinations of tangent vectors in T_p_kmathcal M. For that, the stored pairs of tangent vectors widehats_i, widehaty_i, the gradient operatornamegradf(p_k) of the objective function f in p_k, and the positive definite self-adjoint operator\n\nmathcalB^(0)_k = fracg_p_k(s_k-1 y_k-1)g_p_k(y_k-1 y_k-1) mathrmid_T_p_k mathcalM\n\nare used. The two-loop recursion can be understood as follows: the InverseBFGS update is executed m times in a row on mathcal B^(0)_k using the tangent vectors widehats_iwidehaty_i, and at the same time the resulting operator mathcal B^LRBFGS_k is directly applied on operatornamegradf(x_k). When updating there are two cases: if there is still free memory, k m, the previously stored vector pairs widehats_iwidehaty_i have to be transported into the upcoming tangent space T_p_k+1mathcal M. If there is no free memory, the oldest pair widehats_iwidehaty_i has to be discarded and then all the remaining vector pairs widehats_iwidehaty_i are transported into the tangent space T_p_k+1mathcal M. After that the new values s_k = widehats_k = T^S_x_k α_k η_k(α_k η_k) and y_k = widehaty_k are stored at the beginning. 
This process ensures that new information about the objective function is always included and the old, probably no longer relevant, information is discarded.\n\nProvided functors\n\n(mp::AbstractManoptProblem, st::QuasiNewtonState) -> η to compute the update direction\n(η, mp::AbstractManoptProblem, st::QuasiNewtonState) -> η to compute the update direction in-place of η\n\nFields\n\nmemory_s: the set of the stored (and transported) search directions times step size widehats_i_i=k-m^k-1.\nmemory_y: set of the stored gradient differences widehaty_i_i=k-m^k-1.\nξ: a variable used in the two-loop recursion.\nρ: a variable used in the two-loop recursion.\ninitial_scale: initial scaling of the Hessian\nvector_transport_method::AbstractVectorTransportMethodP: a vector transport mathcal T_ to use, see the section on vector transports\nmessage: a string containing a potential warning that might have appeared\nproject!: a function to stabilize the update by projecting on the tangent space\n\nConstructor\n\nQuasiNewtonLimitedMemoryDirectionUpdate(\n M::AbstractManifold,\n x,\n update::AbstractQuasiNewtonUpdateRule,\n memory_size;\n initial_vector=zero_vector(M,x),\n initial_scale::Real=1.0,\n project!=copyto!\n)\n\nSee also\n\nInverseBFGS, QuasiNewtonCautiousDirectionUpdate, AbstractQuasiNewtonDirectionUpdate\n\n\n\n\n\n","category":"type"},{"location":"solvers/quasi_Newton/#Manopt.QuasiNewtonCautiousDirectionUpdate","page":"Quasi-Newton","title":"Manopt.QuasiNewtonCautiousDirectionUpdate","text":"QuasiNewtonCautiousDirectionUpdate <: AbstractQuasiNewtonDirectionUpdate\n\nThese AbstractQuasiNewtonDirectionUpdates represent any quasi-Newton update rule that is based on the idea of a so-called cautious update. 
The search direction is calculated as given in QuasiNewtonMatrixDirectionUpdate or QuasiNewtonLimitedMemoryDirectionUpdate, but the update then is only executed if\n\nfracg_x_k+1(y_ks_k)lVert s_k rVert^2_x_k+1 θ(lVert operatornamegradf(x_k) rVert_x_k)\n\nis satisfied, where θ is a monotone increasing function satisfying θ(0) = 0 and θ is strictly increasing at 0. If this is not the case, the corresponding update is skipped, which means that for QuasiNewtonMatrixDirectionUpdate the matrix H_k or B_k is not updated. The basis b_i^n_i=1 is nevertheless transported into the upcoming tangent space T_x_k+1 mathcalM, and for QuasiNewtonLimitedMemoryDirectionUpdate neither the oldest vector pair widetildes_km widetildey_km is discarded nor the newest vector pair widetildes_k widetildey_k is added into storage, but all stored vector pairs widetildes_i widetildey_i_i=k-m^k-1 are transported into the tangent space T_x_k+1 mathcalM. If BFGS or InverseBFGS is chosen as update, then the resulting method follows the method of [HAG18], taking into account that the corresponding step size is chosen.\n\nProvided functors\n\n(mp::AbstractManoptProblem, st::QuasiNewtonState) -> η to compute the update direction\n(η, mp::AbstractManoptProblem, st::QuasiNewtonState) -> η to compute the update direction in-place of η\n\nFields\n\nupdate: an AbstractQuasiNewtonDirectionUpdate\nθ: a monotone increasing function satisfying θ(0) = 0 and θ is strictly increasing at 0.\n\nConstructor\n\nQuasiNewtonCautiousDirectionUpdate(U::QuasiNewtonMatrixDirectionUpdate; θ = identity)\nQuasiNewtonCautiousDirectionUpdate(U::QuasiNewtonLimitedMemoryDirectionUpdate; θ = identity)\n\nGenerate a cautious update for either a matrix based or a limited memory based update rule.\n\nSee also\n\nQuasiNewtonMatrixDirectionUpdate 
QuasiNewtonLimitedMemoryDirectionUpdate\n\n\n\n\n\n","category":"type"},{"location":"solvers/quasi_Newton/#Manopt.initialize_update!","page":"Quasi-Newton","title":"Manopt.initialize_update!","text":"initialize_update!(s::AbstractQuasiNewtonDirectionUpdate)\n\nInitialize direction update. By default no change is made.\n\n\n\n\n\ninitialize_update!(d::QuasiNewtonLimitedMemoryDirectionUpdate)\n\nInitialize the limited memory direction update by emptying the memory buffers.\n\n\n\n\n\n","category":"function"},{"location":"solvers/quasi_Newton/#Hessian-update-rules","page":"Quasi-Newton","title":"Hessian update rules","text":"","category":"section"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"Using","category":"page"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"update_hessian!","category":"page"},{"location":"solvers/quasi_Newton/#Manopt.update_hessian!","page":"Quasi-Newton","title":"Manopt.update_hessian!","text":"update_hessian!(d::AbstractQuasiNewtonDirectionUpdate, amp, st, p_old, k)\n\nUpdate the Hessian within the QuasiNewtonState st given an AbstractManoptProblem amp as well as an AbstractQuasiNewtonDirectionUpdate d and the last iterate p_old. 
Note that the current (k-th) iterate is already stored in get_iterate(st).\n\nSee also AbstractQuasiNewtonUpdateRule and its subtypes for the different rules that are available within d.\n\n\n\n\n\n","category":"function"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"The following update formulae for either H_k+1 or B_k+1 are available.","category":"page"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"AbstractQuasiNewtonUpdateRule\nBFGS\nDFP\nBroyden\nSR1\nInverseBFGS\nInverseDFP\nInverseBroyden\nInverseSR1","category":"page"},{"location":"solvers/quasi_Newton/#Manopt.AbstractQuasiNewtonUpdateRule","page":"Quasi-Newton","title":"Manopt.AbstractQuasiNewtonUpdateRule","text":"AbstractQuasiNewtonUpdateRule\n\nSpecify a type for the different AbstractQuasiNewtonDirectionUpdates, that is, for a QuasiNewtonMatrixDirectionUpdate there are several different updates to the matrix, while for the QuasiNewtonLimitedMemoryDirectionUpdate the default and most prominent is InverseBFGS.\n\n\n\n\n\n","category":"type"},{"location":"solvers/quasi_Newton/#Manopt.BFGS","page":"Quasi-Newton","title":"Manopt.BFGS","text":"BFGS <: AbstractQuasiNewtonUpdateRule\n\nindicates in AbstractQuasiNewtonDirectionUpdate that the Riemannian BFGS update is used in the Riemannian quasi-Newton method.\n\nDenote by widetildeH_k^mathrmBFGS the operator concatenated with a vector transport and its inverse before and after to act on x_k+1 = R_x_k(α_k η_k). 
Then the update formula reads\n\nH^mathrmBFGS_k+1 = widetildeH^mathrmBFGS_k + fracy_k y^mathrmT_k s^mathrmT_k y_k - fracwidetildeH^mathrmBFGS_k s_k s^mathrmT_k widetildeH^mathrmBFGS_k s^mathrmT_k widetildeH^mathrmBFGS_k s_k\n\nwhere s_k and y_k are the coordinate vectors with respect to the current basis (from QuasiNewtonState) of\n\nT^S_x_k α_k η_k(α_k η_k) quadtextandquad\noperatornamegradf(x_k+1) - T^S_x_k α_k η_k(operatornamegradf(x_k)) T_x_k+1 mathcalM\n\nrespectively.\n\n\n\n\n\n","category":"type"},{"location":"solvers/quasi_Newton/#Manopt.DFP","page":"Quasi-Newton","title":"Manopt.DFP","text":"DFP <: AbstractQuasiNewtonUpdateRule\n\nindicates in an AbstractQuasiNewtonDirectionUpdate that the Riemannian DFP update is used in the Riemannian quasi-Newton method.\n\nDenote by widetildeH_k^mathrmDFP the operator concatenated with a vector transport and its inverse before and after to act on x_k+1 = R_x_k(α_k η_k). Then the update formula reads\n\nH^mathrmDFP_k+1 = Bigl(\n mathrmid_T_x_k+1 mathcalM - fracy_k s^mathrmT_ks^mathrmT_k y_k\nBigr)\nwidetildeH^mathrmDFP_k\nBigl(\n mathrmid_T_x_k+1 mathcalM - fracs_k y^mathrmT_ks^mathrmT_k y_k\nBigr) + fracy_k y^mathrmT_ks^mathrmT_k y_k\n\nwhere s_k and y_k are the coordinate vectors with respect to the current basis (from QuasiNewtonState) of\n\nT^S_x_k α_k η_k(α_k η_k) quadtextandquad\noperatornamegradf(x_k+1) - T^S_x_k α_k η_k(operatornamegradf(x_k)) T_x_k+1 mathcalM\n\nrespectively.\n\n\n\n\n\n","category":"type"},{"location":"solvers/quasi_Newton/#Manopt.Broyden","page":"Quasi-Newton","title":"Manopt.Broyden","text":"Broyden <: AbstractQuasiNewtonUpdateRule\n\nindicates in AbstractQuasiNewtonDirectionUpdate that the Riemannian Broyden update is used in the Riemannian quasi-Newton method, which is as a convex combination of BFGS and DFP.\n\nDenote by widetildeH_k^mathrmBr the operator concatenated with a vector transport and its inverse before and after to act on x_k+1 = R_x_k(α_k η_k). 
Then the update formula reads\n\nH^mathrmBr_k+1 = widetildeH^mathrmBr_k\n - fracwidetildeH^mathrmBr_k s_k s^mathrmT_k widetildeH^mathrmBr_ks^mathrmT_k widetildeH^mathrmBr_k s_k + fracy_k y^mathrmT_ks^mathrmT_k y_k\n + φ_k s^mathrmT_k widetildeH^mathrmBr_k s_k\n Bigl(\n fracy_ks^mathrmT_k y_k - fracwidetildeH^mathrmBr_k s_ks^mathrmT_k widetildeH^mathrmBr_k s_k\n Bigr)\n Bigl(\n fracy_ks^mathrmT_k y_k - fracwidetildeH^mathrmBr_k s_ks^mathrmT_k widetildeH^mathrmBr_k s_k\n Bigr)^mathrmT\n\nwhere s_k and y_k are the coordinate vectors with respect to the current basis (from QuasiNewtonState) of\n\nT^S_x_k α_k η_k(α_k η_k) quadtextandquad\noperatornamegradf(x_k+1) - T^S_x_k α_k η_k(operatornamegradf(x_k)) T_x_k+1 mathcalM\n\nrespectively, and φ_k is the Broyden factor which is :constant by default but can also be set to :Davidon.\n\nConstructor\n\nBroyden(φ, update_rule::Symbol = :constant)\n\n\n\n\n\n","category":"type"},{"location":"solvers/quasi_Newton/#Manopt.SR1","page":"Quasi-Newton","title":"Manopt.SR1","text":"SR1 <: AbstractQuasiNewtonUpdateRule\n\nindicates in AbstractQuasiNewtonDirectionUpdate that the Riemannian SR1 update is used in the Riemannian quasi-Newton method.\n\nDenote by widetildeH_k^mathrmSR1 the operator concatenated with a vector transport and its inverse before and after to act on x_k+1 = R_x_k(α_k η_k). Then the update formula reads\n\nH^mathrmSR1_k+1 = widetildeH^mathrmSR1_k\n+ frac\n (y_k - widetildeH^mathrmSR1_k s_k) (y_k - widetildeH^mathrmSR1_k s_k)^mathrmT\n\n(y_k - widetildeH^mathrmSR1_k s_k)^mathrmT s_k\n\n\nwhere s_k and y_k are the coordinate vectors with respect to the current basis (from QuasiNewtonState) of\n\nT^S_x_k α_k η_k(α_k η_k) quadtextandquad\noperatornamegradf(x_k+1) - T^S_x_k α_k η_k(operatornamegradf(x_k)) T_x_k+1 mathcalM\n\nrespectively.\n\nThis method can be stabilized by only performing the update if denominator is larger than rlVert s_krVert_x_k+1lVert y_k - widetildeH^mathrmSR1_k s_k rVert_x_k+1 for some r0. 
For more details, see Section 6.2 in [NW06].\n\nConstructor\n\nSR1(r::Float64=-1.0)\n\nGenerate the SR1 update.\n\n\n\n\n\n","category":"type"},{"location":"solvers/quasi_Newton/#Manopt.InverseBFGS","page":"Quasi-Newton","title":"Manopt.InverseBFGS","text":"InverseBFGS <: AbstractQuasiNewtonUpdateRule\n\nindicates in AbstractQuasiNewtonDirectionUpdate that the inverse Riemannian BFGS update is used in the Riemannian quasi-Newton method.\n\nDenote by widetildeB_k^mathrmBFGS the operator concatenated with a vector transport and its inverse before and after to act on x_k+1 = R_x_k(α_k η_k). Then the update formula reads\n\nB^mathrmBFGS_k+1 = Bigl(\n mathrmid_T_x_k+1 mathcalM - fracs_k y^mathrmT_k s^mathrmT_k y_k\nBigr)\nwidetildeB^mathrmBFGS_k\nBigl(\n mathrmid_T_x_k+1 mathcalM - fracy_k s^mathrmT_k s^mathrmT_k y_k\nBigr) + fracs_k s^mathrmT_ks^mathrmT_k y_k\n\nwhere s_k and y_k are the coordinate vectors with respect to the current basis (from QuasiNewtonState) of\n\nT^S_x_k α_k η_k(α_k η_k) quadtextandquad\noperatornamegradf(x_k+1) - T^S_x_k α_k η_k(operatornamegradf(x_k)) T_x_k+1 mathcalM\n\nrespectively.\n\n\n\n\n\n","category":"type"},{"location":"solvers/quasi_Newton/#Manopt.InverseDFP","page":"Quasi-Newton","title":"Manopt.InverseDFP","text":"InverseDFP <: AbstractQuasiNewtonUpdateRule\n\nindicates in AbstractQuasiNewtonDirectionUpdate that the inverse Riemannian DFP update is used in the Riemannian quasi-Newton method.\n\nDenote by widetildeB_k^mathrmDFP the operator concatenated with a vector transport and its inverse before and after to act on x_k+1 = R_x_k(α_k η_k). 
Then the update formula reads\n\nB^mathrmDFP_k+1 = widetildeB^mathrmDFP_k + fracs_k s^mathrmT_ks^mathrmT_k y_k\n - fracwidetildeB^mathrmDFP_k y_k y^mathrmT_k widetildeB^mathrmDFP_ky^mathrmT_k widetildeB^mathrmDFP_k y_k\n\nwhere s_k and y_k are the coordinate vectors with respect to the current basis (from QuasiNewtonState) of\n\nT^S_x_k α_k η_k(α_k η_k) quadtextandquad\noperatornamegradf(x_k+1) - T^S_x_k α_k η_k(operatornamegradf(x_k)) T_x_k+1 mathcalM\n\nrespectively.\n\n\n\n\n\n","category":"type"},{"location":"solvers/quasi_Newton/#Manopt.InverseBroyden","page":"Quasi-Newton","title":"Manopt.InverseBroyden","text":"InverseBroyden <: AbstractQuasiNewtonUpdateRule\n\nIndicates in AbstractQuasiNewtonDirectionUpdate that the Riemannian Broyden update is used in the Riemannian quasi-Newton method, which is a convex combination of InverseBFGS and InverseDFP.\n\nDenote by widetildeB_k^mathrmBr the operator concatenated with a vector transport and its inverse before and after to act on x_k+1 = R_x_k(α_k η_k). 
Then the update formula reads\n\nB^mathrmBr_k+1 = widetildeB^mathrmBr_k\n - fracwidetildeB^mathrmBr_k y_k y^mathrmT_k widetildeB^mathrmBr_ky^mathrmT_k widetildeB^mathrmBr_k y_k\n + fracs_k s^mathrmT_ks^mathrmT_k y_k\n + φ_k y^mathrmT_k widetildeB^mathrmBr_k y_k\n Bigl(\n fracs_ks^mathrmT_k y_k - fracwidetildeB^mathrmBr_k y_ky^mathrmT_k widetildeB^mathrmBr_k y_k\n Bigr) Bigl(\n fracs_ks^mathrmT_k y_k - fracwidetildeB^mathrmBr_k y_ky^mathrmT_k widetildeB^mathrmBr_k y_k\n Bigr)^mathrmT\n\nwhere s_k and y_k are the coordinate vectors with respect to the current basis (from QuasiNewtonState) of\n\nT^S_x_k α_k η_k(α_k η_k) quadtextandquad\noperatornamegradf(x_k+1) - T^S_x_k α_k η_k(operatornamegradf(x_k)) T_x_k+1 mathcalM\n\nrespectively, and φ_k is the Broyden factor which is :constant by default but can also be set to :Davidon.\n\nConstructor\n\nInverseBroyden(φ, update_rule::Symbol = :constant)\n\n\n\n\n\n","category":"type"},{"location":"solvers/quasi_Newton/#Manopt.InverseSR1","page":"Quasi-Newton","title":"Manopt.InverseSR1","text":"InverseSR1 <: AbstractQuasiNewtonUpdateRule\n\nindicates in AbstractQuasiNewtonDirectionUpdate that the inverse Riemannian SR1 update is used in the Riemannian quasi-Newton method.\n\nDenote by widetildeB_k^mathrmSR1 the operator concatenated with a vector transport and its inverse before and after to act on x_k+1 = R_x_k(α_k η_k). 
Then the update formula reads\n\nB^mathrmSR1_k+1 = widetildeB^mathrmSR1_k\n+ frac\n (s_k - widetildeB^mathrmSR1_k y_k) (s_k - widetildeB^mathrmSR1_k y_k)^mathrmT\n\n (s_k - widetildeB^mathrmSR1_k y_k)^mathrmT y_k\n\n\nwhere s_k and y_k are the coordinate vectors with respect to the current basis (from QuasiNewtonState) of\n\nT^S_x_k α_k η_k(α_k η_k) quadtextandquad\noperatornamegradf(x_k+1) - T^S_x_k α_k η_k(operatornamegradf(x_k)) T_x_k+1 mathcalM\n\nrespectively.\n\nThis method can be stabilized by only performing the update if the denominator is larger than rlVert y_krVert_x_k+1lVert s_k - widetildeB^mathrmSR1_k y_k rVert_x_k+1 for some r0. For more details, see Section 6.2 in [NW06].\n\nConstructor\n\nInverseSR1(r::Float64=-1.0)\n\nGenerate the InverseSR1.\n\n\n\n\n\n","category":"type"},{"location":"solvers/quasi_Newton/#State","page":"Quasi-Newton","title":"State","text":"","category":"section"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"The quasi Newton algorithm is based on a DefaultManoptProblem.","category":"page"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"QuasiNewtonState","category":"page"},{"location":"solvers/quasi_Newton/#Manopt.QuasiNewtonState","page":"Quasi-Newton","title":"Manopt.QuasiNewtonState","text":"QuasiNewtonState <: AbstractManoptSolverState\n\nThe AbstractManoptSolverState represents any quasi-Newton based method and stores all necessary fields.\n\nFields\n\ndirection_update: an AbstractQuasiNewtonDirectionUpdate rule.\nη: the current update direction\nnondescent_direction_behavior: a Symbol to specify how to handle directions that are not descent ones.\nnondescent_direction_value: the value from the last inner product from checking for descent directions\np::P: a point on the manifold mathcal M storing the current iterate\np_old: the last iterate\nsk: the current step\nyk: the current gradient difference\nretraction_method::AbstractRetractionMethod: a 
retraction operatornameretr to use, see the section on retractions\nstepsize::Stepsize: a functor inheriting from Stepsize to determine a step size\nstop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled\nX::T: a tangent vector at the point p on the manifold mathcal M storing the gradient at the current iterate\nX_old: the last gradient\n\nConstructor\n\nQuasiNewtonState(M::AbstractManifold, p; kwargs...)\n\nGenerate the Quasi Newton state on the manifold M with start point p.\n\nKeyword arguments\n\ndirection_update=QuasiNewtonLimitedMemoryDirectionUpdate(M, p, InverseBFGS(), 20; vector_transport_method=vector_transport_method)\nstopping_criterion=StopAfterIteration(1000)|StopWhenGradientNormLess(1e-6): a functor indicating that the stopping criterion is fulfilled\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstepsize=default_stepsize(M, QuasiNewtonState): a functor inheriting from Stepsize to determine a step size\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\nX=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal M to specify the representation of a tangent vector\n\nSee also\n\nquasi_Newton\n\n\n\n\n\n","category":"type"},{"location":"solvers/quasi_Newton/#sec-qn-technical-details","page":"Quasi-Newton","title":"Technical details","text":"","category":"section"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"The quasi_Newton solver requires the following functions of a manifold to be available","category":"page"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"A retract!(M, q, p, X); it is recommended to set the default_retraction_method to a favourite retraction. 
If this default is set, a retraction_method= does not have to be specified.\nA vector_transport_to!(M, Y, p, X, q); it is recommended to set the default_vector_transport_method to a favourite vector transport. If this default is set, a vector_transport_method= does not have to be specified.\nBy default quasi Newton uses ArmijoLinesearch which requires max_stepsize(M) to be set and an implementation of inner(M, p, X).\nThe norm as well, to stop when the norm of the gradient is small; but if you implemented inner, the norm is provided already.\nA copyto!(M, q, p) and copy(M,p) for points and similarly copy(M, p, X) for tangent vectors.\nBy default the tangent vector storing the gradient is initialized calling zero_vector(M,p).","category":"page"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"Most Hessian approximations further require get_coordinates(M, p, X, b) with respect to the AbstractBasis b provided, which is DefaultOrthonormalBasis by default from the basis= keyword.","category":"page"},{"location":"solvers/quasi_Newton/#Literature","page":"Quasi-Newton","title":"Literature","text":"","category":"section"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"W. Huang, P.-A. Absil and K. A. Gallivan. A Riemannian BFGS method without differentiated retraction for nonconvex optimization problems. SIAM Journal on Optimization 28, 470–495 (2018).\n\n\n\nW. Huang, K. A. Gallivan and P.-A. Absil. A Broyden class of quasi-Newton methods for Riemannian optimization. SIAM Journal on Optimization 25, 1660–1685 (2015).\n\n\n\nJ. Nocedal and S. J. Wright. Numerical Optimization. 
2nd Edition (Springer, New York, 2006).\n\n\n\n","category":"page"},{"location":"solvers/NelderMead/#sec-nelder-meadSolver","page":"Nelder–Mead","title":"Nelder Mead method","text":"","category":"section"},{"location":"solvers/NelderMead/","page":"Nelder–Mead","title":"Nelder–Mead","text":"CurrentModule = Manopt","category":"page"},{"location":"solvers/NelderMead/","page":"Nelder–Mead","title":"Nelder–Mead","text":" NelderMead\n NelderMead!","category":"page"},{"location":"solvers/NelderMead/#Manopt.NelderMead","page":"Nelder–Mead","title":"Manopt.NelderMead","text":"NelderMead(M::AbstractManifold, f, population=NelderMeadSimplex(M))\nNelderMead(M::AbstractManifold, mco::AbstractManifoldCostObjective, population=NelderMeadSimplex(M))\nNelderMead!(M::AbstractManifold, f, population)\nNelderMead!(M::AbstractManifold, mco::AbstractManifoldCostObjective, population)\n\nSolve a Nelder-Mead minimization problem for the cost function f mathcal M ℝ on the manifold M. If the initial NelderMeadSimplex is not provided, a random set of points is chosen. The computation can be performed in-place of the population.\n\nThe algorithm consists of the following steps. Let d denote the dimension of the manifold mathcal M.\n\nOrder the simplex vertices p_i i=1d+1 by increasing cost, such that we have f(p_1) f(p_2) f(p_d+1).\nCompute the Riemannian center of mass [Kar77], cf. mean, p_textm of the simplex vertices p_1p_d+1.\nReflect the worst point at the mean p_textr = operatornameretr_p_textmbigl( - αoperatornameretr^-1_p_textm (p_d+1) bigr) If f(p_1) f(p_textr) f(p_d) then set p_d+1 = p_textr and go to step 1.\nExpand the simplex if f(p_textr) f(p_1) by computing the expansion point p_texte = operatornameretr_p_textmbigl( - γαoperatornameretr^-1_p_textm (p_d+1) bigr), which in this formulation allows reusing the tangent vector from the inverse retraction from before. If f(p_texte) f(p_textr) then set p_d+1 = p_texte, otherwise set p_d+1 = p_textr. 
Then go to Step 1.\nContract the simplex if f(p_textr) f(p_d).\nIf f(p_textr) f(p_d+1) set the step s = -ρ\notherwise set s=ρ.\nCompute the contraction point p_textc = operatornameretr_p_textmbigl(soperatornameretr^-1_p_textm p_d+1 bigr).\nin this case if f(p_textc) f(p_textr) set p_d+1 = p_textc and go to step 1\nin this case if f(p_textc) f(p_d+1) set p_d+1 = p_textc and go to step 1\nShrink all points (closer to p_1). For all i=2d+1 set p_i = operatornameretr_p_1bigl( σoperatornameretr^-1_p_1 p_i bigr)\n\nFor more details, see The Euclidean variant in the Wikipedia https://en.wikipedia.org/wiki/Nelder-Mead_method or Algorithm 4.1 in http://www.optimization-online.org/DB_FILE/2007/08/1742.pdf.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\npopulation::NelderMeadSimplex=NelderMeadSimplex(M): an initial simplex of d+1 points, where d is the manifold_dimension of M.\n\nKeyword arguments\n\nstopping_criterion=StopAfterIteration(2000)|StopWhenPopulationConcentrated()): a functor indicating that the stopping criterion is fulfilled a StoppingCriterion\nα=1.0: reflection parameter α 0:\nγ=2.0 expansion parameter γ:\nρ=1/2: contraction parameter, 0 ρ frac12,\nσ=1/2: shrink coefficient, 0 σ 1\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. 
To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/NelderMead/#Manopt.NelderMead!","page":"Nelder–Mead","title":"Manopt.NelderMead!","text":"NelderMead(M::AbstractManifold, f, population=NelderMeadSimplex(M))\nNelderMead(M::AbstractManifold, mco::AbstractManifoldCostObjective, population=NelderMeadSimplex(M))\nNelderMead!(M::AbstractManifold, f, population)\nNelderMead!(M::AbstractManifold, mco::AbstractManifoldCostObjective, population)\n\nSolve a Nelder-Mead minimization problem for the cost function f mathcal M ℝ on the manifold M. If the initial NelderMeadSimplex is not provided, a random set of points is chosen. The computation can be performed in-place of the population.\n\nThe algorithm consists of the following steps. Let d denote the dimension of the manifold mathcal M.\n\nOrder the simplex vertices p_i i=1d+1 by increasing cost, such that we have f(p_1) f(p_2) f(p_d+1).\nCompute the Riemannian center of mass [Kar77], cf. mean, p_textm of the simplex vertices p_1p_d+1.\nReflect the worst point at the mean p_textr = operatornameretr_p_textmbigl( - αoperatornameretr^-1_p_textm (p_d+1) bigr) If f(p_1) f(p_textr) f(p_d) then set p_d+1 = p_textr and go to step 1.\nExpand the simplex if f(p_textr) f(p_1) by computing the expansion point p_texte = operatornameretr_p_textmbigl( - γαoperatornameretr^-1_p_textm (p_d+1) bigr), which in this formulation allows reusing the tangent vector from the inverse retraction from before. If f(p_texte) f(p_textr) then set p_d+1 = p_texte, otherwise set p_d+1 = p_textr. 
Then go to Step 1.\nContract the simplex if f(p_textr) f(p_d).\nIf f(p_textr) f(p_d+1) set the step s = -ρ\notherwise set s=ρ.\nCompute the contraction point p_textc = operatornameretr_p_textmbigl(soperatornameretr^-1_p_textm p_d+1 bigr).\nin this case if f(p_textc) f(p_textr) set p_d+1 = p_textc and go to step 1\nin this case if f(p_textc) f(p_d+1) set p_d+1 = p_textc and go to step 1\nShrink all points (closer to p_1). For all i=2d+1 set p_i = operatornameretr_p_1bigl( σoperatornameretr^-1_p_1 p_i bigr)\n\nFor more details, see The Euclidean variant in the Wikipedia https://en.wikipedia.org/wiki/Nelder-Mead_method or Algorithm 4.1 in http://www.optimization-online.org/DB_FILE/2007/08/1742.pdf.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\npopulation::NelderMeadSimplex=NelderMeadSimplex(M): an initial simplex of d+1 points, where d is the manifold_dimension of M.\n\nKeyword arguments\n\nstopping_criterion=StopAfterIteration(2000)|StopWhenPopulationConcentrated()): a functor indicating that the stopping criterion is fulfilled a StoppingCriterion\nα=1.0: reflection parameter α 0:\nγ=2.0 expansion parameter γ:\nρ=1/2: contraction parameter, 0 ρ frac12,\nσ=1/2: shrink coefficient, 0 σ 1\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. 
To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/NelderMead/#State","page":"Nelder–Mead","title":"State","text":"","category":"section"},{"location":"solvers/NelderMead/","page":"Nelder–Mead","title":"Nelder–Mead","text":" NelderMeadState","category":"page"},{"location":"solvers/NelderMead/#Manopt.NelderMeadState","page":"Nelder–Mead","title":"Manopt.NelderMeadState","text":"NelderMeadState <: AbstractManoptSolverState\n\nDescribes all parameters and the state of a Nelder-Mead heuristic based optimization algorithm.\n\nFields\n\nThe naming of these parameters follows the Wikipedia article of the Euclidean case. The default is given in brackets, the required value range after the description\n\npopulation::NelderMeadSimplex: a population (set) of d+1 points x_i, i=1n+1, where d is the manifold_dimension of M.\nstepsize::Stepsize: a functor inheriting from Stepsize to determine a step size\nα: the reflection parameter α 0:\nγ the expansion parameter γ 0:\nρ: the contraction parameter, 0 ρ frac12,\nσ: the shrinkage coefficient, 0 σ 1\np::P: a point on the manifold mathcal M storing the current best point\ninverse_retraction_method::AbstractInverseRetractionMethod: an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nretraction_method::AbstractRetractionMethod: a retraction operatornameretr to use, see the section on retractions\n\nConstructors\n\nNelderMeadState(M::AbstractManifold; kwargs...)\n\nConstruct a Nelder-Mead Option with a default population (if not provided) of set of dimension(M)+1 random points stored in NelderMeadSimplex.\n\nKeyword arguments\n\npopulation=NelderMeadSimplex(M)\nstopping_criterion=StopAfterIteration(2000)|StopWhenPopulationConcentrated()): a functor indicating that the stopping criterion is fulfilled a StoppingCriterion\nα=1.0: reflection parameter α 0:\nγ=2.0 
expansion parameter γ:\nρ=1/2: contraction parameter, 0 ρ frac12,\nσ=1/2: shrink coefficient, 0 σ 1\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\np=copy(M, population.pts[1]): initialise the storage for the best point (iterate)\n\n\n\n\n\n","category":"type"},{"location":"solvers/NelderMead/#Simplex","page":"Nelder–Mead","title":"Simplex","text":"","category":"section"},{"location":"solvers/NelderMead/","page":"Nelder–Mead","title":"Nelder–Mead","text":"NelderMeadSimplex","category":"page"},{"location":"solvers/NelderMead/#Manopt.NelderMeadSimplex","page":"Nelder–Mead","title":"Manopt.NelderMeadSimplex","text":"NelderMeadSimplex\n\nA simplex for the Nelder-Mead algorithm.\n\nConstructors\n\nNelderMeadSimplex(M::AbstractManifold)\n\nConstruct a simplex using d+1 random points from manifold M, where d is the manifold_dimension of M.\n\nNelderMeadSimplex(\n M::AbstractManifold,\n p,\n B::AbstractBasis=DefaultOrthonormalBasis();\n a::Real=0.025,\n retraction_method::AbstractRetractionMethod=default_retraction_method(M, typeof(p)),\n)\n\nConstruct a simplex from a basis B with one point being p and other points constructed by moving by a in each principal direction defined by basis B of the tangent space at point p using retraction retraction_method. 
This works similarly to how the initial simplex is constructed in the Euclidean Nelder-Mead algorithm, just in the tangent space at point p.\n\n\n\n\n\n","category":"type"},{"location":"solvers/NelderMead/#Additional-stopping-criteria","page":"Nelder–Mead","title":"Additional stopping criteria","text":"","category":"section"},{"location":"solvers/NelderMead/","page":"Nelder–Mead","title":"Nelder–Mead","text":"StopWhenPopulationConcentrated","category":"page"},{"location":"solvers/NelderMead/#Manopt.StopWhenPopulationConcentrated","page":"Nelder–Mead","title":"Manopt.StopWhenPopulationConcentrated","text":"StopWhenPopulationConcentrated <: StoppingCriterion\n\nA stopping criterion for NelderMead to indicate to stop when both\n\nthe maximal distance of the first to the remaining cost values and\nthe maximal distance of the first to the remaining population points\n\ndrop below a certain tolerance tol_f and tol_p, respectively.\n\nConstructor\n\nStopWhenPopulationConcentrated(tol_f::Real=1e-8, tol_x::Real=1e-8)\n\n\n\n\n\n","category":"type"},{"location":"solvers/NelderMead/#Technical-details","page":"Nelder–Mead","title":"Technical details","text":"","category":"section"},{"location":"solvers/NelderMead/","page":"Nelder–Mead","title":"Nelder–Mead","text":"The NelderMead solver requires the following functions of a manifold to be available","category":"page"},{"location":"solvers/NelderMead/","page":"Nelder–Mead","title":"Nelder–Mead","text":"A retract!(M, q, p, X); it is recommended to set the default_retraction_method to a favourite retraction. If this default is set, a retraction_method= does not have to be specified.\nAn inverse_retract!(M, X, p, q); it is recommended to set the default_inverse_retraction_method to a favourite inverse retraction. 
If this default is set, an inverse_retraction_method= does not have to be specified.\nThe distance(M, p, q) when using the default stopping criterion, which includes StopWhenPopulationConcentrated.\nWithin the default initialization rand(M) is used to generate the initial population\nA mean(M, population) has to be available, for example by loading Manifolds.jl and its statistics tools","category":"page"}] +[{"location":"notation/#Notation","page":"Notation","title":"Notation","text":"","category":"section"},{"location":"notation/","page":"Notation","title":"Notation","text":"In this package, the notation introduced in Manifolds.jl Notation is used with the following additional parts.","category":"page"},{"location":"notation/","page":"Notation","title":"Notation","text":"Symbol Description Also used Comment\noperatornameargmin argument of a function f where a local or global minimum is attained \nk the current iterate i the goal is to unify this to k\n The Levi-Civita connection ","category":"page"},{"location":"tutorials/AutomaticDifferentiation/#Using-Automatic-Differentiation-in-Manopt.jl","page":"Use automatic differentiation","title":"Using Automatic Differentiation in Manopt.jl","text":"","category":"section"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"Since Manifolds.jl 0.7, the support for automatic differentiation has been extended.","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"This tutorial explains how to use Euclidean tools to derive a gradient for a real-valued function f mathcal M ℝ. Two methods are considered: an intrinsic variant and a variant employing the embedding. 
These gradients can then be used within any gradient based optimization algorithm in Manopt.jl.","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"While by default FiniteDifferences.jl is used, one can also use FiniteDiff.jl, ForwardDiff.jl, ReverseDiff.jl, or Zygote.jl.","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"This tutorial looks at a few possibilities to approximate or derive the gradient of a function f mathcal M ℝ on a Riemannian manifold, without computing it yourself. There are mainly two different philosophies:","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"Working intrinsically, that is staying on the manifold and in the tangent spaces, and approximating the gradient by forward differences.\nWorking in an embedding where all tools from functions on Euclidean spaces can be used, like finite differences or automatic differentiation, and then compute the corresponding Riemannian gradient from there.","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"First, load all necessary packages","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"using Manopt, Manifolds, Random, LinearAlgebra\nusing FiniteDifferences, ManifoldDiff\nRandom.seed!(42);","category":"page"},{"location":"tutorials/AutomaticDifferentiation/#1.-(Intrinsic)-forward-differences","page":"Use automatic differentiation","title":"1. 
(Intrinsic) forward differences","text":"","category":"section"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"A first idea is to generalize (multivariate) finite differences to Riemannian manifolds. Let X_1ldotsX_d T_pmathcal M denote an orthonormal basis of the tangent space T_pmathcal M at the point pmathcal M on the Riemannian manifold.","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"The notion of a directional derivative is generalized to a “direction” YT_pmathcal M. Let c -εε, ε0, be a curve with c(0) = p, dot c(0) = Y, for example c(t)= exp_p(tY). This yields","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":" Df(p)Y = left fracddt right_t=0 f(c(t)) = lim_t 0 frac1t(f(exp_p(tY))-f(p))","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"The differential Df(p)Y is approximated by a finite difference scheme for an h0 as","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"Df(p)Y G_h(Y) = frac1h(f(exp_p(hY))-f(p))","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"Furthermore the gradient operatornamegradf is the Riesz representer of the differential:","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":" Df(p)Y = g_p(operatornamegradf(p) Y)qquad text for all Y T_pmathcal 
M","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"and since it is a tangent vector, we can write it in terms of a basis as","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":" operatornamegradf(p) = sum_i=1^d g_p(operatornamegradf(p)X_i)X_i\n = sum_i=1^d Df(p)X_iX_i","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"and perform the approximation from before to obtain","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":" operatornamegradf(p) sum_i=1^d G_h(X_i)X_i","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"for some suitable step size h. This comes at the cost of d+1 function evaluations and d exponential maps.","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"This is the first variant we can use. An advantage is that it is intrinsic in the sense that it does not require any embedding of the manifold.","category":"page"},{"location":"tutorials/AutomaticDifferentiation/#An-example:-the-Rayleigh-quotient","page":"Use automatic differentiation","title":"An example: the Rayleigh quotient","text":"","category":"section"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"The Rayleigh quotient is concerned with finding eigenvalues (and eigenvectors) of a symmetric matrix A ℝ^(n+1)(n+1). 
The optimization problem reads","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"F ℝ^n+1 ℝquad F(mathbf x) = fracmathbf x^mathrmTAmathbf xmathbf x^mathrmTmathbf x","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"Minimizing this function yields the smallest eigenvalue λ_1 as a value and the minimizer mathbf x^* is a corresponding eigenvector.","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"Since the length of an eigenvector is irrelevant, there is an ambiguity in the cost function. It can be better phrased on the sphere 𝕊^n of unit vectors in ℝ^n+1,","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"operatorname*argmin_p 𝕊^n f(p) = operatorname*argmin_ p 𝕊^n p^mathrmTAp","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"We can compute the Riemannian gradient exactly as","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"operatornamegrad f(p) = 2(Ap - pp^mathrmTAp)","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"so we can compare it to the approximation by finite differences.","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"n = 200\nA = randn(n + 1, n + 1)\nA = Symmetric(A)\nM = Sphere(n);\n\nf1(p) = p' 
* A'p\ngradf1(p) = 2 * (A * p - p * p' * A * p)","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"gradf1 (generic function with 1 method)","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"Manifolds provides a finite difference scheme in tangent spaces that you can use with an existing framework (if the wrapper is implemented) from Euclidean space. Here we use FiniteDifferences.jl.","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"r_backend = ManifoldDiff.TangentDiffBackend(\n ManifoldDiff.FiniteDifferencesBackend()\n)\ngradf1_FD(p) = ManifoldDiff.gradient(M, f1, p, r_backend)\n\np = zeros(n + 1)\np[1] = 1.0\nX1 = gradf1(p)\nX2 = gradf1_FD(p)\nnorm(M, p, X1 - X2)","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"1.018153081967174e-12","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"We obtain quite a good approximation of the gradient.","category":"page"},{"location":"tutorials/AutomaticDifferentiation/#EmbeddedGradient","page":"Use automatic differentiation","title":"2. 
Conversion of a Euclidean Gradient in the Embedding to a Riemannian Gradient of a (not Necessarily Isometrically) Embedded Manifold","text":"","category":"section"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"Let tilde f ℝ^m ℝ be a function on the embedding of an n-dimensional manifold mathcal M subset ℝ^m and let f mathcal M ℝ denote the restriction of tilde f to the manifold mathcal M.","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"Since we can use the pushforward of the embedding to also embed the tangent space T_pmathcal M, pmathcal M, we can similarly obtain the differential Df(p) T_pmathcal M ℝ by restricting the differential Dtilde f(p) to the tangent space.","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"If both T_pmathcal M and T_pℝ^m have the same inner product, or in other words the manifold is isometrically embedded in ℝ^m (like for example the sphere mathbb S^nsubsetℝ^m+1), then this restriction of the differential directly translates to a projection of the gradient","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"operatornamegradf(p) = operatornameProj_T_pmathcal M(operatornamegrad tilde f(p))","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"More generally, a change of the metric has to be taken into account:","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"langle operatornameProj_T_pmathcal M(operatornamegrad tilde f(p)) X rangle\n= 
Df(p)X = g_p(operatornamegradf(p) X)","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"or in words: we have to change the Riesz representer of the (restricted/projected) differential of f (tilde f) to the one with respect to the Riemannian metric. This is done using change_representer.","category":"page"},{"location":"tutorials/AutomaticDifferentiation/#A-continued-example","page":"Use automatic differentiation","title":"A continued example","text":"","category":"section"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"We continue with the Rayleigh Quotient from before, now just starting with the definition of the Euclidean case in the embedding, the function F.","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"F(x) = x' * A * x / (x' * x);","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"The cost function is the same by restriction","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"f2(M, p) = F(p);","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"The gradient is now computed combining our gradient scheme with FiniteDifferences.","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"function grad_f2_AD(M, p)\n return Manifolds.gradient(\n M, F, p, Manifolds.RiemannianProjectionBackend(ManifoldDiff.FiniteDifferencesBackend())\n )\nend\nX3 = 
grad_f2_AD(M, p)\nnorm(M, p, X1 - X3)","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"1.742525831800539e-12","category":"page"},{"location":"tutorials/AutomaticDifferentiation/#An-example-for-a-non-isometrically-embedded-manifold","page":"Use automatic differentiation","title":"An example for a non-isometrically embedded manifold","text":"","category":"section"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"on the manifold mathcal P(3) of symmetric positive definite matrices.","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"The following function computes (half) the distance squared (with respect to the linear affine metric) on the manifold mathcal P(3) to the identity matrix I_3. Denoting the unit matrix we consider the function","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":" G(q)\n = frac12d^2_mathcal P(3)(qI_3)\n = lVert operatornameLog(q) rVert_F^2","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"where operatornameLog denotes the matrix logarithm and lVert cdot rVert_F is the Frobenius norm. 
This can be computed for symmetric positive definite matrices by summing the squares of the logarithms of the eigenvalues of q and dividing by two:","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"G(q) = sum(log.(eigvals(Symmetric(q))) .^ 2) / 2","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"G (generic function with 1 method)","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"We can also interpret this as a function on the space of matrices and apply the Euclidean finite differences machinery; in this way we can easily derive the Euclidean gradient. But when computing the Riemannian gradient, we have to change the representer (see again change_representer) after projecting onto the tangent space T_pmathcal P(n) at p.","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"Let’s first define a point and the manifold N=mathcal P(3).","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"rotM(α) = [1.0 0.0 0.0; 0.0 cos(α) sin(α); 0.0 -sin(α) cos(α)]\nq = rotM(π / 6) * [1.0 0.0 0.0; 0.0 2.0 0.0; 0.0 0.0 3.0] * transpose(rotM(π / 6))\nN = SymmetricPositiveDefinite(3)\nis_point(N, q)","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"true","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"We could first just compute the gradient using 
FiniteDifferences.jl, but this yields the Euclidean gradient:","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"FiniteDifferences.grad(central_fdm(5, 1), G, q)","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"([3.240417492806275e-14 -2.3531899864903462e-14 0.0; 0.0 0.3514812167654708 0.017000516835452926; 0.0 0.0 0.36129646973723023],)","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"Instead, we use the RiemannianProjectionBackend of ManifoldDiff.jl, which in this case internally uses FiniteDifferences.jl to compute a Euclidean gradient but then uses the conversion explained before to derive the Riemannian gradient.","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"We define this here again as a function grad_G_FD that could be used in the Manopt.jl framework within a gradient based optimization.","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"function grad_G_FD(N, q)\n return Manifolds.gradient(\n N, G, q, ManifoldDiff.RiemannianProjectionBackend(ManifoldDiff.FiniteDifferencesBackend())\n )\nend\nG1 = grad_G_FD(N, q)","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"3×3 Matrix{Float64}:\n 3.24042e-14 -2.64734e-14 -5.09481e-15\n -2.64734e-14 1.86368 0.826856\n -5.09481e-15 0.826856 2.81845","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic 
differentiation","text":"Now, we can again compare this to the (known) solution of the gradient, namely the gradient of (half of) the distance squared G(q) = frac12d^2_mathcal P(3)(qI_3) is given by operatornamegrad G(q) = -operatornamelog_q I_3, where operatornamelog is the logarithmic map on the manifold.","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"G2 = -log(N, q, Matrix{Float64}(I, 3, 3))","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"3×3 Matrix{Float64}:\n -0.0 -0.0 -0.0\n -0.0 1.86368 0.826856\n -0.0 0.826856 2.81845","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"Both terms agree up to 1.8·10^-12:","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"norm(G1 - G2)\nisapprox(N, q, G1, G2; atol=2 * 1e-12)","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"true","category":"page"},{"location":"tutorials/AutomaticDifferentiation/#Summary","page":"Use automatic differentiation","title":"Summary","text":"","category":"section"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"This tutorial illustrates how to use tools from Euclidean spaces, finite differences or automatic differentiation, to compute gradients on Riemannian manifolds. 
The scheme allows using any differentiation framework within the embedding to derive a Riemannian gradient.","category":"page"},{"location":"tutorials/AutomaticDifferentiation/#Technical-details","page":"Use automatic differentiation","title":"Technical details","text":"","category":"section"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"This tutorial is cached. It was last run on the following package versions.","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"using Pkg\nPkg.status()","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"Status `~/work/Manopt.jl/Manopt.jl/tutorials/Project.toml`\n [6e4b80f9] BenchmarkTools v1.5.0\n⌅ [5ae59095] Colors v0.12.11\n [31c24e10] Distributions v0.25.113\n [26cc04aa] FiniteDifferences v0.12.32\n [7073ff75] IJulia v1.26.0\n [8ac3fa9e] LRUCache v1.6.1\n [af67fdf4] ManifoldDiff v0.3.13\n [1cead3c2] Manifolds v0.10.7\n [3362f125] ManifoldsBase v0.15.22\n [0fc0a36d] Manopt v0.5.3 `~/work/Manopt.jl/Manopt.jl`\n [91a5bcdd] Plots v1.40.9\n [731186ca] RecursiveArrayTools v3.27.4\nInfo Packages marked with ⌅ have new versions available but compatibility constraints restrict them from upgrading. 
To see why use `status --outdated`","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"using Dates\nnow()","category":"page"},{"location":"tutorials/AutomaticDifferentiation/","page":"Use automatic differentiation","title":"Use automatic differentiation","text":"2024-11-21T20:36:03.876","category":"page"},{"location":"solvers/proximal_point/#Proximal-point-method","page":"Proximal point method","title":"Proximal point method","text":"","category":"section"},{"location":"solvers/proximal_point/","page":"Proximal point method","title":"Proximal point method","text":"CurrentModule = Manopt","category":"page"},{"location":"solvers/proximal_point/","page":"Proximal point method","title":"Proximal point method","text":"proximal_point\nproximal_point!","category":"page"},{"location":"solvers/proximal_point/#Manopt.proximal_point","page":"Proximal point method","title":"Manopt.proximal_point","text":"proximal_point(M, prox_f, p=rand(M); kwargs...)\nproximal_point(M, mpmo, p=rand(M); kwargs...)\nproximal_point!(M, prox_f, p; kwargs...)\nproximal_point!(M, mpmo, p; kwargs...)\n\nPerform the proximal point algorithm from [FO02] which reads\n\np^(k+1) = operatornameprox_λ_kf(p^(k))\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nprox_f: a proximal map (M,λ,p) -> q or (M, q, λ, p) -> q for the summands of f (see evaluation)\n\nKeyword arguments\n\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\nf=nothing: a cost function f mathcal M ℝ to minimize. 
While f is not required for running the algorithm, it is used, for example, when recording the cost or when a stopping criterion requires a cost function.\nλ= k -> 1.0: a function returning the (square summable but not summable) sequence of λ_k\nstopping_criterion=StopAfterIteration(200)|StopWhenChangeLess(1e-12): a functor indicating that the stopping criterion is fulfilled\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/proximal_point/#Manopt.proximal_point!","page":"Proximal point method","title":"Manopt.proximal_point!","text":"proximal_point(M, prox_f, p=rand(M); kwargs...)\nproximal_point(M, mpmo, p=rand(M); kwargs...)\nproximal_point!(M, prox_f, p; kwargs...)\nproximal_point!(M, mpmo, p; kwargs...)\n\nPerform the proximal point algorithm from [FO02] which reads\n\np^(k+1) = operatornameprox_λ_kf(p^(k))\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nprox_f: a proximal map (M,λ,p) -> q or (M, q, λ, p) -> q for the summands of f (see evaluation)\n\nKeyword arguments\n\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\nf=nothing: a cost function f mathcal M ℝ to minimize. 
While f is not required for running the algorithm, it is used, for example, when recording the cost or when a stopping criterion requires a cost function.\nλ= k -> 1.0: a function returning the (square summable but not summable) sequence of λ_k\nstopping_criterion=StopAfterIteration(200)|StopWhenChangeLess(1e-12): a functor indicating that the stopping criterion is fulfilled\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/proximal_point/#State","page":"Proximal point method","title":"State","text":"","category":"section"},{"location":"solvers/proximal_point/","page":"Proximal point method","title":"Proximal point method","text":"ProximalPointState","category":"page"},{"location":"solvers/proximal_point/#Manopt.ProximalPointState","page":"Proximal point method","title":"Manopt.ProximalPointState","text":"ProximalPointState{P} <: AbstractGradientSolverState\n\nFields\n\np::P: a point on the manifold mathcal M storing the current iterate\nstop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled\nλ: a function for the values of λ_k per iteration (cycle) k\n\nConstructor\n\nProximalPointState(M::AbstractManifold; kwargs...)\n\nInitialize the proximal point method solver state, where\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\n\nKeyword arguments\n\nλ=k -> 1.0: a function to compute the λ_k, k mathcal N,\np=rand(M): a point on the manifold mathcal M to specify the initial value\nstopping_criterion=StopAfterIteration(100): a functor indicating that the stopping criterion is fulfilled\n\nSee also\n\nproximal_point\n\n\n\n\n\n","category":"type"},{"location":"solvers/proximal_point/","page":"Proximal point 
method","title":"Proximal point method","text":"O. Ferreira and P. R. Oliveira. Proximal point algorithm on Riemannian manifolds. Optimization. A Journal of Mathematical Programming and Operations Research 51, 257–270 (2002).\n\n\n\n","category":"page"},{"location":"solvers/conjugate_gradient_descent/#Conjugate-gradient-descent","page":"Conjugate gradient descent","title":"Conjugate gradient descent","text":"","category":"section"},{"location":"solvers/conjugate_gradient_descent/","page":"Conjugate gradient descent","title":"Conjugate gradient descent","text":"CurrentModule = Manopt","category":"page"},{"location":"solvers/conjugate_gradient_descent/","page":"Conjugate gradient descent","title":"Conjugate gradient descent","text":"conjugate_gradient_descent\nconjugate_gradient_descent!","category":"page"},{"location":"solvers/conjugate_gradient_descent/#Manopt.conjugate_gradient_descent","page":"Conjugate gradient descent","title":"Manopt.conjugate_gradient_descent","text":"conjugate_gradient_descent(M, f, grad_f, p=rand(M))\nconjugate_gradient_descent!(M, f, grad_f, p)\nconjugate_gradient_descent(M, gradient_objective, p)\nconjugate_gradient_descent!(M, gradient_objective, p; kwargs...)\n\nperform a conjugate gradient based descent\n\np_k+1 = operatornameretr_p_k bigl( s_kδ_k bigr)\n\nwhere operatornameretr denotes a retraction on the manifold M and one can employ different rules to update the descent direction δ_k based on the last direction δ_k-1 and both gradients operatornamegradf(x_k),operatornamegrad f(x_k-1). 
The Stepsize s_k may be determined by a Linesearch.\n\nAlternatively to f and grad_f you can provide the AbstractManifoldGradientObjective gradient_objective directly.\n\nAvailable update rules are SteepestDescentCoefficientRule, which yields a gradient_descent, ConjugateDescentCoefficient (the default), DaiYuanCoefficientRule, FletcherReevesCoefficient, HagerZhangCoefficient, HestenesStiefelCoefficient, LiuStoreyCoefficient, and PolakRibiereCoefficient. These can all be combined with a ConjugateGradientBealeRestartRule rule.\n\nThey all compute β_k such that this algorithm updates the search direction as\n\nδ_k=operatornamegradf(p_k) + β_k delta_k-1\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\ngrad_f: the (Riemannian) gradient operatornamegradf: \\mathcal M → T_{p}\\mathcal M of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place\np: a point on the manifold mathcal M\n\nKeyword arguments\n\ncoefficient::DirectionUpdateRule=ConjugateDescentCoefficient(): rule to compute the descent direction update coefficient β_k, as a functor, where the resulting function maps are (amp, cgs, k) -> β with amp an AbstractManoptProblem, cgs is the ConjugateGradientDescentState, and k is the current iterate.\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). 
Since usually the first argument is the manifold, the modified argument is the second.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstepsize=ArmijoLinesearch(): a functor inheriting from Stepsize to determine a step size\nstopping_criterion=StopAfterIteration(500)|StopWhenGradientNormLess(1e-8): a functor indicating that the stopping criterion is fulfilled\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\nIf you provide the ManifoldGradientObjective directly, the evaluation= keyword is ignored. The decorations are still applied to the objective.\n\nOutput\n\nThe obtained approximate minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/conjugate_gradient_descent/#Manopt.conjugate_gradient_descent!","page":"Conjugate gradient descent","title":"Manopt.conjugate_gradient_descent!","text":"conjugate_gradient_descent(M, f, grad_f, p=rand(M))\nconjugate_gradient_descent!(M, f, grad_f, p)\nconjugate_gradient_descent(M, gradient_objective, p)\nconjugate_gradient_descent!(M, gradient_objective, p; kwargs...)\n\nperform a conjugate gradient based descent-\n\np_k+1 = operatornameretr_p_k bigl( s_kδ_k bigr)\n\nwhere operatornameretr denotes a retraction on the Manifold M and one can employ different rules to update the descent direction δ_k based on the last direction δ_k-1 and both gradients operatornamegradf(x_k),operatornamegrad f(x_k-1). 
The Stepsize s_k may be determined by a Linesearch.\n\nAlternatively to f and grad_f you can provide the AbstractManifoldGradientObjective gradient_objective directly.\n\nAvailable update rules are SteepestDescentCoefficientRule, which yields a gradient_descent, ConjugateDescentCoefficient (the default), DaiYuanCoefficientRule, FletcherReevesCoefficient, HagerZhangCoefficient, HestenesStiefelCoefficient, LiuStoreyCoefficient, and PolakRibiereCoefficient. These can all be combined with a ConjugateGradientBealeRestartRule rule.\n\nThey all compute β_k such that this algorithm updates the search direction as\n\nδ_k=operatornamegradf(p_k) + β_k delta_k-1\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\ngrad_f: the (Riemannian) gradient operatornamegradf: \\mathcal M → T_{p}\\mathcal M of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place\np: a point on the manifold mathcal M\n\nKeyword arguments\n\ncoefficient::DirectionUpdateRule=ConjugateDescentCoefficient(): rule to compute the descent direction update coefficient β_k, as a functor, where the resulting function maps are (amp, cgs, k) -> β with amp an AbstractManoptProblem, cgs is the ConjugateGradientDescentState, and k is the current iterate.\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). 
Since usually the first argument is the manifold, the modified argument is the second.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstepsize=ArmijoLinesearch(): a functor inheriting from Stepsize to determine a step size\nstopping_criterion=StopAfterIteration(500)|StopWhenGradientNormLess(1e-8): a functor indicating that the stopping criterion is fulfilled\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\nIf you provide the ManifoldGradientObjective directly, the evaluation= keyword is ignored. The decorations are still applied to the objective.\n\nOutput\n\nThe obtained approximate minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/conjugate_gradient_descent/#State","page":"Conjugate gradient descent","title":"State","text":"","category":"section"},{"location":"solvers/conjugate_gradient_descent/","page":"Conjugate gradient descent","title":"Conjugate gradient descent","text":"ConjugateGradientDescentState","category":"page"},{"location":"solvers/conjugate_gradient_descent/#Manopt.ConjugateGradientDescentState","page":"Conjugate gradient descent","title":"Manopt.ConjugateGradientDescentState","text":"ConjugateGradientState <: AbstractGradientSolverState\n\nspecify options for a conjugate gradient descent algorithm, that solves a [DefaultManoptProblem].\n\nFields\n\np::P: a point on the manifold mathcal Mstoring the current iterate\nX::T: a tangent vector at the point p on the manifold mathcal M\nδ: the current descent direction, also a tangent vector\nβ: the current update coefficient rule, see .\ncoefficient: function to determine the new β\nstepsize::Stepsize: a functor inheriting from Stepsize to determine a step 
size\nstop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled\nretraction_method::AbstractRetractionMethod: a retraction operatornameretr to use, see the section on retractions\nvector_transport_method::AbstractVectorTransportMethodP: a vector transport mathcal T_ to use, see the section on vector transports\n\nConstructor\n\nConjugateGradientDescentState(M::AbstractManifold; kwargs...)\n\nwhere the last five fields can be set by their names as keywords and X can be set to a tangent vector type using the keyword initial_gradient, which defaults to zero_vector(M,p); δ is initialized to a copy of this vector.\n\nKeyword arguments\n\nThe fields above can be set by their names as keyword arguments.\n\nConjugateGradientBealeRestartRule <: DirectionUpdateRule\n\nA functor (problem, state, k) -> β_k to compute the conjugate gradient update coefficient based on a restart idea of [Bea72], following [HZ06, page 12] adapted to manifolds.\n\nFields\n\ndirection_update::DirectionUpdateRule: the actual rule, that is restarted\nthreshold::Real: a threshold for the restart check.\nvector_transport_method::AbstractVectorTransportMethodP: a vector transport mathcal T_ to use, see the section on vector transports\n\nConstructor\n\nConjugateGradientBealeRestartRule(\n    direction_update::Union{DirectionUpdateRule,ManifoldDefaultsFactory};\n    kwargs...\n)\nConjugateGradientBealeRestartRule(\n    M::AbstractManifold=DefaultManifold(),\n    direction_update::Union{DirectionUpdateRule,ManifoldDefaultsFactory};\n    kwargs...\n)\n\nConstruct the Beale restart coefficient update rule adapted to manifolds.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M. If this is not provided, the DefaultManifold() from ManifoldsBase.jl is used.\ndirection_update: a DirectionUpdateRule or a corresponding ManifoldDefaultsFactory to produce such a rule.\n\nKeyword arguments\n\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\nthreshold=0.2\n\nSee also\n\nConjugateGradientBealeRestart, 
conjugate_gradient_descent\n\n\n\n\n\n","category":"type"},{"location":"solvers/conjugate_gradient_descent/#Manopt.DaiYuanCoefficientRule","page":"Conjugate gradient descent","title":"Manopt.DaiYuanCoefficientRule","text":"DaiYuanCoefficientRule <: DirectionUpdateRule\n\nA functor (problem, state, k) -> β_k to compute the conjugate gradient update coefficient based on [DY99] adapted to manifolds\n\nFields\n\nvector_transport_method::AbstractVectorTransportMethodP: a vector transport mathcal T_ to use, see the section on vector transports\n\nConstructor\n\nDaiYuanCoefficientRule(M::AbstractManifold; kwargs...)\n\nConstruct the Dai—Yuan coefficient update rule.\n\nKeyword arguments\n\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\nSee also\n\nDaiYuanCoefficient, conjugate_gradient_descent\n\n\n\n\n\n","category":"type"},{"location":"solvers/conjugate_gradient_descent/#Manopt.FletcherReevesCoefficientRule","page":"Conjugate gradient descent","title":"Manopt.FletcherReevesCoefficientRule","text":"FletcherReevesCoefficientRule <: DirectionUpdateRule\n\nA functor (problem, state, k) -> β_k to compute the conjugate gradient update coefficient based on [FR64] adapted to manifolds\n\nConstructor\n\nFletcherReevesCoefficientRule()\n\nConstruct the Fletcher—Reeves coefficient update rule.\n\nSee also\n\nFletcherReevesCoefficient, conjugate_gradient_descent\n\n\n\n\n\n","category":"type"},{"location":"solvers/conjugate_gradient_descent/#Manopt.HagerZhangCoefficientRule","page":"Conjugate gradient descent","title":"Manopt.HagerZhangCoefficientRule","text":"HagerZhangCoefficientRule <: DirectionUpdateRule\n\nA functor (problem, state, k) -> β_k to compute the conjugate gradient update coefficient based on [HZ05] adapted to manifolds\n\nFields\n\nvector_transport_method::AbstractVectorTransportMethodP: a vector transport mathcal T_ to use, see the section on vector 
transports\n\nConstructor\n\nHagerZhangCoefficientRule(M::AbstractManifold; kwargs...)\n\nConstruct the Hager-Zhang coefficient update rule based on [HZ05] adapted to manifolds.\n\nKeyword arguments\n\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\nSee also\n\nHagerZhangCoefficient, conjugate_gradient_descent\n\n\n\n\n\n","category":"type"},{"location":"solvers/conjugate_gradient_descent/#Manopt.HestenesStiefelCoefficientRule","page":"Conjugate gradient descent","title":"Manopt.HestenesStiefelCoefficientRule","text":"HestenesStiefelCoefficientRule <: DirectionUpdateRule\n\nA functor (problem, state, k) -> β_k to compute the conjugate gradient update coefficient based on [HS52] adapted to manifolds\n\nFields\n\nvector_transport_method::AbstractVectorTransportMethodP: a vector transport mathcal T_ to use, see the section on vector transports\n\nConstructor\n\nHestenesStiefelCoefficientRule(M::AbstractManifold; kwargs...)\n\nConstruct the Hestenes-Stiefel coefficient update rule based on [HS52] adapted to manifolds.\n\nKeyword arguments\n\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\nSee also\n\nHestenesStiefelCoefficient, conjugate_gradient_descent\n\n\n\n\n\n","category":"type"},{"location":"solvers/conjugate_gradient_descent/#Manopt.LiuStoreyCoefficientRule","page":"Conjugate gradient descent","title":"Manopt.LiuStoreyCoefficientRule","text":"LiuStoreyCoefficientRule <: DirectionUpdateRule\n\nA functor (problem, state, k) -> β_k to compute the conjugate gradient update coefficient based on [LS91] adapted to manifolds\n\nFields\n\nvector_transport_method::AbstractVectorTransportMethodP: a vector transport mathcal T_ to use, see the section on vector transports\n\nConstructor\n\nLiuStoreyCoefficientRule(M::AbstractManifold; kwargs...)\n\nConstruct the 
Liu-Storey coefficient update rule based on [LS91] adapted to manifolds.\n\nKeyword arguments\n\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\nSee also\n\nLiuStoreyCoefficient, conjugate_gradient_descent\n\n\n\n\n\n","category":"type"},{"location":"solvers/conjugate_gradient_descent/#Manopt.PolakRibiereCoefficientRule","page":"Conjugate gradient descent","title":"Manopt.PolakRibiereCoefficientRule","text":"PolakRibiereCoefficientRule <: DirectionUpdateRule\n\nA functor (problem, state, k) -> β_k to compute the conjugate gradient update coefficient based on [PR69] adapted to manifolds\n\nFields\n\nvector_transport_method::AbstractVectorTransportMethodP: a vector transport mathcal T_ to use, see the section on vector transports\n\nConstructor\n\nPolakRibiereCoefficientRule(M::AbstractManifold; kwargs...)\n\nConstruct the Polak—Ribière coefficient update rule.\n\nKeyword arguments\n\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\nSee also\n\nPolakRibiereCoefficient, conjugate_gradient_descent\n\n\n\n\n\n","category":"type"},{"location":"solvers/conjugate_gradient_descent/#Manopt.SteepestDescentCoefficientRule","page":"Conjugate gradient descent","title":"Manopt.SteepestDescentCoefficientRule","text":"SteepestDescentCoefficientRule <: DirectionUpdateRule\n\nA functor (problem, state, k) -> β_k to compute the conjugate gradient update coefficient to obtain the steepest direction, that is β_k=0.\n\nConstructor\n\nSteepestDescentCoefficientRule()\n\nConstruct the steepest descent coefficient update rule.\n\nSee also\n\nSteepestDescentCoefficient, conjugate_gradient_descent\n\n\n\n\n\n","category":"type"},{"location":"solvers/conjugate_gradient_descent/#sec-cgd-technical-details","page":"Conjugate gradient descent","title":"Technical 
details","text":"","category":"section"},{"location":"solvers/conjugate_gradient_descent/","page":"Conjugate gradient descent","title":"Conjugate gradient descent","text":"The conjugate_gradient_descent solver requires the following functions of a manifold to be available:","category":"page"},{"location":"solvers/conjugate_gradient_descent/","page":"Conjugate gradient descent","title":"Conjugate gradient descent","text":"A retract!(M, q, p, X); it is recommended to set the default_retraction_method to a favourite retraction. If this default is set, a retraction_method= does not have to be specified.\nA vector_transport_to!(M, Y, p, X, q); it is recommended to set the default_vector_transport_method to a favourite vector transport. If this default is set, a vector_transport_method= does not have to be specified.\nBy default conjugate gradient descent uses ArmijoLinesearch, which requires max_stepsize(M) to be set and an implementation of inner(M, p, X).\nBy default the stopping criterion uses the norm as well, to stop when the norm of the gradient is small; if you implemented inner, the norm is provided already.\nBy default the tangent vector storing the gradient is initialized calling zero_vector(M,p).","category":"page"},{"location":"solvers/conjugate_gradient_descent/#Literature","page":"Conjugate gradient descent","title":"Literature","text":"","category":"section"},{"location":"solvers/conjugate_gradient_descent/","page":"Conjugate gradient descent","title":"Conjugate gradient descent","text":"E. M. Beale. A derivation of conjugate gradients. In: Numerical methods for nonlinear optimization, edited by F. A. Lootsma (Academic Press, London, 1972); pp. 39–43.\n\n\n\nY. H. Dai and Y. Yuan. A Nonlinear Conjugate Gradient Method with a Strong Global Convergence Property. SIAM Journal on Optimization 10, 177–182 (1999).\n\n\n\nR. Fletcher. Practical Methods of Optimization. 
2 Edition, A Wiley-Interscience Publication (John Wiley & Sons Ltd., 1987).\n\n\n\nR. Fletcher and C. M. Reeves. Function minimization by conjugate gradients. The Computer Journal 7, 149–154 (1964).\n\n\n\nW. W. Hager and H. Zhang. A survey of nonlinear conjugate gradient methods. Pacific Journal of Optimization 2, 35–58 (2006).\n\n\n\nW. W. Hager and H. Zhang. A New Conjugate Gradient Method with Guaranteed Descent and an Efficient Line Search. SIAM Journal on Optimization 16, 170–192 (2005).\n\n\n\nM. Hestenes and E. Stiefel. Methods of conjugate gradients for solving linear systems. Journal of Research of the National Bureau of Standards 49, 409 (1952).\n\n\n\nY. Liu and C. Storey. Efficient generalized conjugate gradient algorithms, part 1: Theory. Journal of Optimization Theory and Applications 69, 129–137 (1991).\n\n\n\nE. Polak and G. Ribière. Note sur la convergence de méthodes de directions conjuguées. Revue française d’informatique et de recherche opérationnelle 3, 35–43 (1969).\n\n\n\nM. J. Powell. Restart procedures for the conjugate gradient method. 
Mathematical Programming 12, 241–254 (1977).\n\n\n\n","category":"page"},{"location":"solvers/convex_bundle_method/#Convex-bundle-method","page":"Convex bundle method","title":"Convex bundle method","text":"","category":"section"},{"location":"solvers/convex_bundle_method/","page":"Convex bundle method","title":"Convex bundle method","text":"CurrentModule = Manopt","category":"page"},{"location":"solvers/convex_bundle_method/","page":"Convex bundle method","title":"Convex bundle method","text":"convex_bundle_method\nconvex_bundle_method!","category":"page"},{"location":"solvers/convex_bundle_method/#Manopt.convex_bundle_method","page":"Convex bundle method","title":"Manopt.convex_bundle_method","text":"convex_bundle_method(M, f, ∂f, p)\nconvex_bundle_method!(M, f, ∂f, p)\n\nperform a convex bundle method p^(k+1) = operatornameretr_p^(k)(-g_k) where\n\ng_k = sum_jin J_k λ_j^k mathrmP_p_kq_jX_q_j\n\nand p_k is the last serious iterate, X_q_j ∈ ∂f(q_j), and the λ_j^k are solutions to the quadratic subproblem provided by the convex_bundle_method_subsolver.\n\nThough the subdifferential might be set valued, the argument ∂f should always return one element from the subdifferential, though not necessarily deterministically.\n\nFor more details, see [BHJ24].\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\n∂f: the subdifferential of f, implemented as a function (M, p) -> X or (M, X, p) -> X computing X in-place, 
which always returns one element X ∈ ∂f(p).\np: a point on the manifold mathcal M\n\nKeyword arguments\n\natol_λ=eps(): tolerance parameter for the convex coefficients in λ.\natol_errors=eps(): tolerance parameter for the linearization errors.\nbundle_cap=25: the maximal number of elements the bundle is allowed to remember.\nm=1e-3: the parameter to test the decrease of the cost: f(q_k+1) ≤ f(p_k) + m ξ.\ndiameter=50.0: estimate for the diameter of the level set of the objective function at the starting point.\ndomain=(M, p) -> isfinite(f(M, p)): a function that evaluates to true when the current candidate is in the domain of the objective f, and false otherwise.\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\nk_max=0: upper bound on the sectional curvature of the manifold.\nstepsize=default_stepsize(M, ConvexBundleMethodState): a functor inheriting from Stepsize to determine a step size\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nstopping_criterion=StopWhenLagrangeMultiplierLess(1e-8)|StopAfterIteration(5000): a functor indicating that the stopping criterion is fulfilled\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\nsub_state=convex_bundle_method_subsolver: a state to specify the sub solver to use. 
For a closed form solution, this indicates the type of function.\nsub_problem=AllocatingEvaluation: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nX=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal M\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/convex_bundle_method/#Manopt.convex_bundle_method!","page":"Convex bundle method","title":"Manopt.convex_bundle_method!","text":"convex_bundle_method(M, f, ∂f, p)\nconvex_bundle_method!(M, f, ∂f, p)\n\nperform a convex bundle method p^(k+1) = operatornameretr_p^(k)(-g_k) where\n\ng_k = sum_jin J_k λ_j^k mathrmP_p_kq_jX_q_j\n\nand p_k is the last serious iterate, X_q_j ∈ ∂f(q_j), and the λ_j^k are solutions to the quadratic subproblem provided by the convex_bundle_method_subsolver.\n\nThough the subdifferential might be set valued, the argument ∂f should always return one element from the subdifferential, though not necessarily deterministically.\n\nFor more details, see [BHJ24].\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\n∂f: the subdifferential of f, implemented as a function (M, p) -> X or (M, X, p) -> X computing X in-place, 
which always returns one element X ∈ ∂f(p).\np: a point on the manifold mathcal M\n\nKeyword arguments\n\natol_λ=eps(): tolerance parameter for the convex coefficients in λ.\natol_errors=eps(): tolerance parameter for the linearization errors.\nbundle_cap=25: the maximal number of elements the bundle is allowed to remember.\nm=1e-3: the parameter to test the decrease of the cost: f(q_k+1) ≤ f(p_k) + m ξ.\ndiameter=50.0: estimate for the diameter of the level set of the objective function at the starting point.\ndomain=(M, p) -> isfinite(f(M, p)): a function that evaluates to true when the current candidate is in the domain of the objective f, and false otherwise.\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\nk_max=0: upper bound on the sectional curvature of the manifold.\nstepsize=default_stepsize(M, ConvexBundleMethodState): a functor inheriting from Stepsize to determine a step size\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nstopping_criterion=StopWhenLagrangeMultiplierLess(1e-8)|StopAfterIteration(5000): a functor indicating that the stopping criterion is fulfilled\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\nsub_state=convex_bundle_method_subsolver: a state to specify the sub solver to use. 
For a closed form solution, this indicates the type of function.\nsub_problem=AllocatingEvaluation: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nX=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal M\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/convex_bundle_method/#State","page":"Convex bundle method","title":"State","text":"","category":"section"},{"location":"solvers/convex_bundle_method/","page":"Convex bundle method","title":"Convex bundle method","text":"ConvexBundleMethodState","category":"page"},{"location":"solvers/convex_bundle_method/#Manopt.ConvexBundleMethodState","page":"Convex bundle method","title":"Manopt.ConvexBundleMethodState","text":"ConvexBundleMethodState <: AbstractManoptSolverState\n\nStores option values for a convex_bundle_method solver.\n\nFields\n\nThe following fields require a (real) number type R, as well as a point type P and a tangent vector type T.\n\natol_λ::R: tolerance parameter for the convex coefficients in λ\natol_errors::R: tolerance parameter for the linearization errors\nbundle<:AbstractVector{Tuple{<:P,<:T}}: bundle that collects each iterate with the computed subgradient at the iterate\nbundle_cap::Int: the maximal number of elements the bundle is allowed to remember\ndiameter::R: estimate for the diameter of the level set of the objective function at the starting point\ndomain: the domain of f as a function (M,p) -> b that evaluates to true when the current candidate is in the domain of f, and false otherwise,\ng::T: descent direction\ninverse_retraction_method::AbstractInverseRetractionMethod: an inverse retraction 
operatornameretr^-1 to use, see the section on retractions and their inverses\nk_max::R: upper bound on the sectional curvature of the manifold\nlinearization_errors<:AbstractVector{<:R}: linearization errors at the last serious step\nm::R: the parameter to test the decrease of the cost: f(q_k+1) ≤ f(p_k) + m ξ.\np::P: a point on the manifold mathcal M storing the current iterate\np_last_serious::P: last serious iterate\nretraction_method::AbstractRetractionMethod: a retraction operatornameretr to use, see the section on retractions\nstop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled\ntransported_subgradients: subgradients of the bundle that are transported to p_last_serious\nvector_transport_method::AbstractVectorTransportMethodP: a vector transport mathcal T_ to use, see the section on vector transports\nX::T: a tangent vector at the point p on the manifold mathcal M storing a subgradient at the current iterate\nstepsize::Stepsize: a functor inheriting from Stepsize to determine a step size\nε::R: convex combination of the linearization errors\nλ::AbstractVector{<:R}: convex coefficients from the solution of the subproblem\nξ: the stopping parameter given by ξ = -lVert g rVert^2 - ε\nsub_problem::Union{AbstractManoptProblem, F}: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state::Union{AbstractManoptSolverState, F}: a state to specify the sub solver to use. 
For a closed form solution, this indicates the type of function.\n\nConstructor\n\nConvexBundleMethodState(M::AbstractManifold, sub_problem, sub_state; kwargs...)\nConvexBundleMethodState(M::AbstractManifold, sub_problem=convex_bundle_method_subsolver; evaluation=AllocatingEvaluation(), kwargs...)\n\nGenerate the state for the convex_bundle_method on the manifold M.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nsub_problem: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state: a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.\n\nKeyword arguments\n\nMost of the following keyword arguments set default values for the fields mentioned before.\n\natol_λ=eps()\natol_errors=eps()\nbundle_cap=25\nm=1e-2\ndiameter=50.0\ndomain=(M, p) -> isfinite(f(M, p))\nk_max=0\np=rand(M): a point on the manifold mathcal M to specify the initial value\nstepsize=default_stepsize(M, ConvexBundleMethodState): a functor inheriting from Stepsize to determine a step size\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstopping_criterion=StopWhenLagrangeMultiplierLess(1e-8)|StopAfterIteration(5000): a functor indicating that the stopping criterion is fulfilled\nX=zero_vector(M, p): specify the type of tangent vector to use.\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\n\n\n\n\n","category":"type"},{"location":"solvers/convex_bundle_method/#Stopping-criteria","page":"Convex bundle method","title":"Stopping criteria","text":"","category":"section"},{"location":"solvers/convex_bundle_method/","page":"Convex 
bundle method","title":"Convex bundle method","text":"StopWhenLagrangeMultiplierLess","category":"page"},{"location":"solvers/convex_bundle_method/#Manopt.StopWhenLagrangeMultiplierLess","page":"Convex bundle method","title":"Manopt.StopWhenLagrangeMultiplierLess","text":"StopWhenLagrangeMultiplierLess <: StoppingCriterion\n\nStopping Criteria for Lagrange multipliers.\n\nCurrently these are meant for the convex_bundle_method and proximal_bundle_method, where based on the Lagrange multipliers an approximate (sub)gradient g and an error estimate ε is computed.\n\nThe mode=:both requires that both ε and lvert g rvert are smaller than their tolerances for the convex_bundle_method, and that c and lvert d rvert are smaller than their tolerances for the proximal_bundle_method.\n\nThe mode=:estimate requires that, for the convex_bundle_method -ξ = lvert g rvert^2 + ε is less than a given tolerance. For the proximal_bundle_method, the equation reads -ν = μ lvert d rvert^2 + c.\n\nConstructors\n\nStopWhenLagrangeMultiplierLess(tolerance=1e-6; mode::Symbol=:estimate, names=nothing)\n\nCreate the stopping criterion for one of the modes mentioned. Note that tolerance can be a single number for the :estimate case, but a vector of two values is required for the :both mode. 
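For example (an illustrative sketch), StopWhenLagrangeMultiplierLess(1e-8) stops as soon as the estimate falls below 1e-8, while StopWhenLagrangeMultiplierLess([1e-4, 1e-8]; mode=:both) checks both quantities against their own tolerances. 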
Here the first entry specifies the tolerance for ε (c), the second the tolerance for lvert g rvert (lvert d rvert), respectively.\n\n\n\n\n\n","category":"type"},{"location":"solvers/convex_bundle_method/#Debug-functions","page":"Convex bundle method","title":"Debug functions","text":"","category":"section"},{"location":"solvers/convex_bundle_method/","page":"Convex bundle method","title":"Convex bundle method","text":"DebugWarnIfLagrangeMultiplierIncreases","category":"page"},{"location":"solvers/convex_bundle_method/#Manopt.DebugWarnIfLagrangeMultiplierIncreases","page":"Convex bundle method","title":"Manopt.DebugWarnIfLagrangeMultiplierIncreases","text":"DebugWarnIfLagrangeMultiplierIncreases <: DebugAction\n\nprint a warning if the Lagrange parameter based value -ξ of the bundle method increases.\n\nConstructor\n\nDebugWarnIfLagrangeMultiplierIncreases(warn=:Once; tol=1e2)\n\nInitialize the warning to warning level (:Once) and introduce a tolerance for the test of 1e2.\n\nThe warn level can be set to :Once to only warn the first time the cost increases, to :Always to report an increase every time it happens, and it can be set to :No to deactivate the warning, then this DebugAction is inactive. 
All other symbols are handled as if they were :Always.\n\n\n\n\n\n","category":"type"},{"location":"solvers/convex_bundle_method/#Helpers-and-internal-functions","page":"Convex bundle method","title":"Helpers and internal functions","text":"","category":"section"},{"location":"solvers/convex_bundle_method/","page":"Convex bundle method","title":"Convex bundle method","text":"convex_bundle_method_subsolver\nDomainBackTrackingStepsize","category":"page"},{"location":"solvers/convex_bundle_method/#Manopt.convex_bundle_method_subsolver","page":"Convex bundle method","title":"Manopt.convex_bundle_method_subsolver","text":"λ = convex_bundle_method_subsolver(M, p_last_serious, linearization_errors, transported_subgradients)\nconvex_bundle_method_subsolver!(M, λ, p_last_serious, linearization_errors, transported_subgradients)\n\nsolver for the subproblem of the convex bundle method at the last serious iterate p_k given the current linearization errors c_j^k, and transported subgradients mathrmP_p_kq_j X_q_j.\n\nThe computation can also be done in-place of λ.\n\nThe subproblem for the convex bundle method is\n\nbeginalign*\n operatorname*argmin_λ ℝ^lvert J_krvert\n frac12 BigllVert sum_j J_k λ_j mathrmP_p_kq_j X_q_j BigrrVert^2\n + sum_j J_k λ_j c_j^k\n \n texts tquad \n sum_j J_k λ_j = 1\n quad λ_j 0\n quad textfor all \n j J_k\nendalign*\n\nwhere J_k = j J_k-1 λ_j 0 cup k. See [BHJ24] for more details.\n\ntip: Tip\nA default subsolver based on RipQP.jl and QuadraticModels is available if these two packages are loaded.\n\n\n\n\n\n","category":"function"},{"location":"solvers/convex_bundle_method/#Manopt.DomainBackTrackingStepsize","page":"Convex bundle method","title":"Manopt.DomainBackTrackingStepsize","text":"DomainBackTrackingStepsize <: Stepsize\n\nImplement a backtracking scheme that shortens the step as long as q = operatornameretr_p(X) yields a point closer to p than lVert X rVert_p or q is not within the domain. 
To determine the domain, this step size requires a ConvexBundleMethodState.\n\n\n\n\n\n","category":"type"},{"location":"solvers/convex_bundle_method/#Literature","page":"Convex bundle method","title":"Literature","text":"","category":"section"},{"location":"solvers/convex_bundle_method/","page":"Convex bundle method","title":"Convex bundle method","text":"R. Bergmann, R. Herzog and H. Jasa. The Riemannian Convex Bundle Method, preprint (2024), arXiv:2402.13670.\n\n\n\n","category":"page"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"EditURL = \"https://github.com/JuliaManifolds/Manopt.jl/blob/master/Changelog.md\"","category":"page"},{"location":"changelog/#Changelog","page":"Changelog","title":"Changelog","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"All notable changes to the Julia package Manopt.jl will be documented in this file. The file was started with Version 0.4.","category":"page"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.","category":"page"},{"location":"changelog/#[0.5.4]-unreleased","page":"Changelog","title":"[0.5.4] - unreleased","text":"","category":"section"},{"location":"changelog/#Added","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"An automated detection whether the tutorials are present; if they are not, no quarto run is done and an automated --exclude-tutorials option is added.","category":"page"},{"location":"changelog/#[0.5.3]-–-October-18,-2024","page":"Changelog","title":"[0.5.3] – October 18, 2024","text":"","category":"section"},{"location":"changelog/#Added-2","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"StopWhenChangeLess, StopWhenGradientChangeLess and 
StopWhenGradientLess can now use the new idea (ManifoldsBase.jl 0.15.18) of different outer norms on manifolds with components like power and product manifolds and all others that support this from the Manifolds.jl library, like Euclidean","category":"page"},{"location":"changelog/#Changed","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"stabilize max_stepsize to also work when injectivity_radius does not exist. It does, however, warn new users who activate tutorial mode.\nStart a ManoptTestSuite subpackage to store dummy types and common test helpers in.","category":"page"},{"location":"changelog/#[0.5.2]-–-October-5,-2024","page":"Changelog","title":"[0.5.2] – October 5, 2024","text":"","category":"section"},{"location":"changelog/#Added-3","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"three new symbols to more easily specify recording the :Gradient, the :GradientNorm, and the :Stepsize.","category":"page"},{"location":"changelog/#Changed-2","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"fix a few typos in the documentation\nimproved the documentation for the initial guess of ArmijoLinesearchStepsize.","category":"page"},{"location":"changelog/#[0.5.1]-–-September-4,-2024","page":"Changelog","title":"[0.5.1] – September 4, 2024","text":"","category":"section"},{"location":"changelog/#Changed-3","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"slightly improves the test for the ExponentialFamilyProjection text on the about 
page.","category":"page"},{"location":"changelog/#Added-4","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"the proximal_point method.","category":"page"},{"location":"changelog/#[0.5.0]-–-August-29,-2024","page":"Changelog","title":"[0.5.0] – August 29, 2024","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"This breaking update is mainly concerned with improving a unified experience through all solvers and some usability improvements, such that, for example, the different gradient update rules are easier to specify.","category":"page"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"In general, we introduce a few factories that avoid having to pass the manifold to keyword arguments","category":"page"},{"location":"changelog/#Added-5","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"A ManifoldDefaultsFactory that postpones the creation/allocation of manifold-specific fields in, for example, direction updates, step sizes, and stopping criteria. As a rule of thumb, internal structures, like a solver state, should store the final type. Any high-level interface, like the functions to start solvers, should accept such a factory in the appropriate places and call the internal _produce_type(factory, M), for example before passing something to the state.\na documentation_glossary.jl file containing a glossary of often used variables in fields, arguments, and keywords, to print them in a unified manner. 
The same for usual sections, tex, and math notation that is often used within the doc-strings.","category":"page"},{"location":"changelog/#Changed-4","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"Any Stepsize now has a Stepsize struct used internally as the original structs before. The newly exported terms aim to fit stepsize=... in naming and create a ManifoldDefaultsFactory instead, so that any stepsize can be created without explicitly specifying the manifold.\nConstantStepsize is no longer exported, use ConstantLength instead. The length parameter is now a positional argument following the (optional) manifold. Besides that, ConstantLength works as before, just that omitting the manifold fills the one specified in the solver now.\nDecreasingStepsize is no longer exported, use DecreasingLength instead. DecreasingLength works as before, just that omitting the manifold fills the one specified in the solver now.\nArmijoLinesearch is now called ArmijoLinesearchStepsize. ArmijoLinesearch works as before, just that omitting the manifold fills the one specified in the solver now.\nWolfePowellLinesearch is now called WolfePowellLinesearchStepsize, its constant c_1 is now unified with Armijo and called sufficient_decrease, c_2 was renamed to sufficient_curvature. Besides that, WolfePowellLinesearch works as before, just that omitting the manifold fills the one specified in the solver now.\nWolfePowellBinaryLinesearch is now called WolfePowellBinaryLinesearchStepsize, its constant c_1 is now unified with Armijo and called sufficient_decrease, c_2 was renamed to sufficient_curvature. Besides that, WolfePowellBinaryLinesearch works as before, just that omitting the manifold fills the one specified in the solver now.\nNonmonotoneLinesearch is now called NonmonotoneLinesearchStepsize. 
NonmonotoneLinesearch works as before, just that omitting the manifold fills the one specified in the solver now.\nAdaptiveWNGradient is now called AdaptiveWNGradientStepsize. Its second positional argument, the gradient function, was only evaluated once for the gradient_bound default, so it has been replaced by the keyword X= accepting a tangent vector. The last positional argument p has also been moved to a keyword argument. Besides that, AdaptiveWNGradient works as before, just that omitting the manifold fills the one specified in the solver now.\nAny DirectionUpdateRule now has the Rule in its name, since the original name is used to create the ManifoldDefaultsFactory instead. The original constructor now no longer requires the manifold as a parameter, that is later done in the factory. The Rule is, however, also no longer exported.\nAverageGradient is now called AverageGradientRule. AverageGradient works as before, but the manifold as its first parameter is no longer necessary and p is now a keyword argument.\nThe IdentityUpdateRule now accepts a manifold optionally for consistency, and you can use Gradient() for short as well as its factory. Hence direction=Gradient() is now available.\nMomentumGradient is now called MomentumGradientRule. MomentumGradient works as before, but the manifold as its first parameter is no longer necessary and p is now a keyword argument.\nNesterov is now called NesterovRule. Nesterov works as before, but the manifold as its first parameter is no longer necessary and p is now a keyword argument.\nConjugateDescentCoefficient is now called ConjugateDescentCoefficientRule. ConjugateDescentCoefficient works as before, but can now use the factory in between\nthe ConjugateGradientBealeRestart is now called ConjugateGradientBealeRestartRule. For the ConjugateGradientBealeRestart, the manifold is now an optional first parameter and no longer passed via the manifold= keyword.\nDaiYuanCoefficient is now called DaiYuanCoefficientRule. 
For the DaiYuanCoefficient the manifold as its first parameter is no longer necessary and the vector transport has been unified/moved to the vector_transport_method= keyword.\nFletcherReevesCoefficient is now called FletcherReevesCoefficientRule. FletcherReevesCoefficient works as before, but can now use the factory in between\nHagerZhangCoefficient is now called HagerZhangCoefficientRule. For the HagerZhangCoefficient the manifold as its first parameter is no longer necessary and the vector transport has been unified/moved to the vector_transport_method= keyword.\nHestenesStiefelCoefficient is now called HestenesStiefelCoefficientRule. For the HestenesStiefelCoefficient the manifold as its first parameter is no longer necessary and the vector transport has been unified/moved to the vector_transport_method= keyword.\nLiuStoreyCoefficient is now called LiuStoreyCoefficientRule. For the LiuStoreyCoefficient the manifold as its first parameter is no longer necessary and the vector transport has been unified/moved to the vector_transport_method= keyword.\nPolakRibiereCoefficient is now called PolakRibiereCoefficientRule. For the PolakRibiereCoefficient the manifold as its first parameter is no longer necessary and the vector transport has been unified/moved to the vector_transport_method= keyword.\nthe SteepestDirectionUpdateRule is now called SteepestDescentCoefficientRule. The SteepestDescentCoefficient is equivalent, but creates the new factory in the interim.\nAbstractGradientGroupProcessor is now called AbstractGradientGroupDirectionRule\nthe StochasticGradient is now called StochasticGradientRule. 
The StochasticGradient is equivalent, but creates the new factory in the interim, so that the manifold is no longer necessary.\nthe AlternatingGradient is now called AlternatingGradientRule.\nThe AlternatingGradient is equivalent, but creates the new factory in the interim, so that the manifold is no longer necessary.\nquasi_Newton had a keyword scale_initial_operator= that was inconsistently declared (sometimes bool, sometimes real) and was unused. It is now called initial_scale=1.0 and scales the initial (diagonal, unit) matrix within the approximation of the Hessian additionally to the frac1lVert g_krVert scaling with the norm of the oldest gradient for the limited memory variant. For the full matrix variant the initial identity matrix is now scaled with this parameter.\nUnify doc strings and presentation of keyword arguments\ngeneral indexing, for example in a vector, uses i\nindex for inequality constraints is unified to i running from 1,...,m\nindex for equality constraints is unified to j running from 1,...,n\niterations now use k\nget_manopt_parameter has been renamed to get_parameter since it is internal, so internally that is clear; accessing it from outside hence anyway reads Manopt.get_parameter\nset_manopt_parameter! has been renamed to set_parameter! since it is internal, so internally that is clear; accessing it from outside hence reads Manopt.set_parameter!\nchanged the stabilize::Bool= keyword in quasi_Newton to the more flexible project!= keyword; this is also more in line with the other solvers. Internally the same is done within the QuasiNewtonLimitedMemoryDirectionUpdate. To adapt,\nthe previous stabilize=true is now set with (project!)=embed_project! in general, and if the manifold is represented by points in the embedding, like the sphere, (project!)=project! 
suffices\nthe new default is (project!)=copyto!, so by default no projection/stabilization is performed.\nthe positional argument p (usually the last or the third to last if subsolvers existed) has been moved to a keyword argument p= in all State constructors\nin NelderMeadState the population moved from positional to keyword argument as well,\nthe way to initialise sub solvers in the solver states has been unified. In the new variant\nthe sub_problem is always a positional argument; namely the last one\nif the sub_state is given as an optional positional argument after the problem, it has to be a manopt solver state\nyou can provide the new ClosedFormSolverState(e::AbstractEvaluationType) for the state to indicate that the sub_problem is a closed form solution (function call) and how it has to be called\nif you do not provide the sub_state as positional, the keyword evaluation= is used to generate the state ClosedFormSolverState.\nwhen previously p and eventually X were positional arguments, they are now moved to keyword arguments of the same name for start point and tangent vector.\nin detail\nAdaptiveRegularizationState(M, sub_problem [, sub_state]; kwargs...) replaces the (anyway unused) variant to only provide the objective; both X and p moved to keyword arguments.\nAugmentedLagrangianMethodState(M, objective, sub_problem; evaluation=...) was added\nAugmentedLagrangianMethodState(M, objective, sub_problem, sub_state; evaluation=...) now has p=rand(M) as keyword argument instead of being the second positional one\nExactPenaltyMethodState(M, sub_problem; evaluation=...) was added and ExactPenaltyMethodState(M, sub_problem, sub_state; evaluation=...) now has p=rand(M) as keyword argument instead of being the second positional one\nDifferenceOfConvexState(M, sub_problem; evaluation=...) was added and DifferenceOfConvexState(M, sub_problem, sub_state; evaluation=...) 
now has p=rand(M) as keyword argument instead of being the second positional one\nDifferenceOfConvexProximalState(M, sub_problem; evaluation=...) was added and DifferenceOfConvexProximalState(M, sub_problem, sub_state; evaluation=...) now has p=rand(M) as keyword argument instead of being the second positional one\nbumped Manifolds.jl to version 0.10; this mainly means that any algorithm working on a product manifold and requiring ArrayPartition now has to explicitly do using RecursiveArrayTools.","category":"page"},{"location":"changelog/#Fixed","page":"Changelog","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"the AverageGradientRule filled its internal vector of gradients wrongly – or mixed it up in parallel transport. This is now fixed.","category":"page"},{"location":"changelog/#Removed","page":"Changelog","title":"Removed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"the convex_bundle_method and its ConvexBundleMethodState no longer accept the keywords k_size, p_estimate, nor ϱ; they are superseded by just providing k_max.\nthe truncated_conjugate_gradient_descent(M, f, grad_f, hess_f) now has the Hessian as a mandatory argument. To use the old variant, provide ApproxHessianFiniteDifference(M, copy(M, p), grad_f) to hess_f directly.\nall deprecated keyword arguments and a few function signatures were removed:\nget_equality_constraints, get_equality_constraints!, get_inequality_constraints, get_inequality_constraints! are removed. 
Use their singular forms and set the index to : instead.\nStopWhenChangeLess(ε) is removed, use StopWhenChangeLess(M, ε) instead, to properly fill, for example, the retraction used to determine the change\nIn the WolfePowellLinesearch and WolfePowellBinaryLinesearch the linesearch_stopsize= keyword is replaced by stop_when_stepsize_less=\nDebugChange and RecordChange had a manifold= and an invretr keyword that were replaced by the first positional argument M and inverse_retraction_method=, respectively\nin the NonlinearLeastSquaresObjective and LevenbergMarquardt the jacB= keyword is now called jacobian_tangent_basis=\nin particle_swarm the n= keyword is replaced by swarm_size=.\nupdate_stopping_criterion! has been removed and unified with set_parameter!. The code adaptations are\nto set a parameter of a stopping criterion, just replace update_stopping_criterion!(sc, :Val, v) with set_parameter!(sc, :Val, v)\nto update a stopping criterion in a solver state, replace the old update_stopping_criterion!(state, :Val, v) that passed down to the stopping criterion by the explicit pass down with set_parameter!(state, :StoppingCriterion, :Val, v)","category":"page"},{"location":"changelog/#[0.4.69]-–-August-3,-2024","page":"Changelog","title":"[0.4.69] – August 3, 2024","text":"","category":"section"},{"location":"changelog/#Changed-5","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"Improved performance of the Interior Point Newton Method.","category":"page"},{"location":"changelog/#[0.4.68]-–-August-2,-2024","page":"Changelog","title":"[0.4.68] – August 2, 2024","text":"","category":"section"},{"location":"changelog/#Added-6","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"an Interior Point Newton Method, the interior_point_newton\na conjugate_residual algorithm to solve a linear system on a tangent 
space.\nArmijoLinesearch now allows for the additional_decrease_condition and additional_increase_condition keywords to add further conditions on when to accept a decrease or increase of the stepsize.\nadd a DebugFeasibility to have a debug print about feasibility of points in constrained optimisation, employing the new is_feasible function\nadd an InteriorPointCentralityCondition check that can be added for step candidates within the line search of interior_point_newton\nAdd several new functors\nthe LagrangianCost, LagrangianGradient, LagrangianHessian, that, based on a constrained objective, allow constructing the Hessian objective of its Lagrangian\nthe CondensedKKTVectorField and its CondensedKKTVectorFieldJacobian, that are being used to solve a linear system within interior_point_newton\nthe KKTVectorField as well as its KKTVectorFieldJacobian and KKTVectorFieldAdjointJacobian\nthe KKTVectorFieldNormSq and its KKTVectorFieldNormSqGradient used within the Armijo line search of interior_point_newton\nNew stopping criteria\nA StopWhenRelativeResidualLess for the conjugate_residual\nA StopWhenKKTResidualLess for the interior_point_newton","category":"page"},{"location":"changelog/#[0.4.67]-–-July-25,-2024","page":"Changelog","title":"[0.4.67] – July 25, 2024","text":"","category":"section"},{"location":"changelog/#Added-7","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"max_stepsize methods for Hyperrectangle.","category":"page"},{"location":"changelog/#Fixed-2","page":"Changelog","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"a few typos in the documentation\nWolfePowellLinesearch no longer uses max_stepsize with an invalid point by default.","category":"page"},{"location":"changelog/#[0.4.66]-June-27,-2024","page":"Changelog","title":"[0.4.66] June 27, 
2024","text":"","category":"section"},{"location":"changelog/#Changed-6","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"Remove functions estimate_sectional_curvature, ζ_1, ζ_2, close_point from convex_bundle_method\nRemove some unused fields and arguments such as p_estimate, ϱ, and α from ConvexBundleMethodState in favor of just k_max\nChange parameter R placement in ProximalBundleMethodState to fifth position","category":"page"},{"location":"changelog/#[0.4.65]-June-13,-2024","page":"Changelog","title":"[0.4.65] June 13, 2024","text":"","category":"section"},{"location":"changelog/#Changed-7","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"refactor stopping criteria to not store a sc.reason internally, but instead only generate the reason (and hence allocate a string) when actually asked for a reason.","category":"page"},{"location":"changelog/#[0.4.64]-June-4,-2024","page":"Changelog","title":"[0.4.64] June 4, 2024","text":"","category":"section"},{"location":"changelog/#Added-8","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"Remodel the constraints and their gradients into separate VectorGradientFunctions to reduce code duplication and encapsulate the inner model of these functions and their gradients\nIntroduce a ConstrainedManoptProblem to model different ranges for the gradients in the new VectorGradientFunctions beyond the default NestedPowerRepresentation\nintroduce a VectorHessianFunction to also model that one can provide the vector of Hessians to constraints\nintroduce a more flexible indexing beyond single indexing, to also include arbitrary ranges when accessing vector functions and their gradients and hence also for constraints and their 
gradients.","category":"page"},{"location":"changelog/#Changed-8","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"Remodel ConstrainedManifoldObjective to store an AbstractManifoldObjective internally instead of directly f and grad_f, allowing also Hessian objectives therein and implementing access to this Hessian\nFixed a bug that Lanczos produced NaNs when started exactly in a minimizer, since we divide by the gradient norm.","category":"page"},{"location":"changelog/#Deprecated","page":"Changelog","title":"Deprecated","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"deprecate get_grad_equality_constraints(M, o, p), use get_grad_equality_constraint(M, o, p, :) from the more flexible indexing instead.","category":"page"},{"location":"changelog/#[0.4.63]-May-11,-2024","page":"Changelog","title":"[0.4.63] May 11, 2024","text":"","category":"section"},{"location":"changelog/#Added-9","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":":reinitialize_direction_update option for quasi-Newton behavior when the direction is not a descent one. 
It is now the new default for QuasiNewtonState.\nQuasi-Newton direction update rules are now initialized upon start of the solver with the new internal function initialize_update!.","category":"page"},{"location":"changelog/#Fixed-3","page":"Changelog","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"ALM and EPM no longer keep a part of the quasi-Newton subsolver state between runs.","category":"page"},{"location":"changelog/#Changed-9","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"Quasi-Newton solvers: :reinitialize_direction_update is the new default behavior in case of detection of non-descent direction instead of :step_towards_negative_gradient. :step_towards_negative_gradient is still available when explicitly set using the nondescent_direction_behavior keyword argument.","category":"page"},{"location":"changelog/#[0.4.62]-May-3,-2024","page":"Changelog","title":"[0.4.62] May 3, 2024","text":"","category":"section"},{"location":"changelog/#Changed-10","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"bumped dependency of ManifoldsBase.jl to 0.15.9 and imported their numerical verify functions. 
This changes the throw_error keyword used internally to an error= keyword with a symbol.","category":"page"},{"location":"changelog/#[0.4.61]-April-27,-2024","page":"Changelog","title":"[0.4.61] April 27, 2024","text":"","category":"section"},{"location":"changelog/#Added-10","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"Tests use Aqua.jl to spot problems in the code\nintroduce a feature-based list of solvers and reduce the details in the alphabetical list\nadds a PolyakStepsize\nadded a get_subgradient for AbstractManifoldGradientObjectives since their gradient is a special case of a subgradient.","category":"page"},{"location":"changelog/#Fixed-4","page":"Changelog","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"get_last_stepsize was defined in quite different ways that caused ambiguities. That is now internally a bit restructured and should work nicer. Internally this means that the interim dispatch on get_last_stepsize(problem, state, step, vars...) was removed. Now the only two left are get_last_stepsize(p, s, vars...) and the one directly checking get_last_stepsize(::Stepsize) for stored values.\nthe accidentally exported set_manopt_parameter! is no longer exported","category":"page"},{"location":"changelog/#Changed-11","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"get_manopt_parameter and set_manopt_parameter! have been revised and better documented; they now use more semantic symbols (with capital letters) instead of direct field access (lower letter symbols). 
Since these are not exported, this is considered an internal, hence non-breaking change.\nsemantic symbols are now all nouns in upper case letters\n:active is changed to :Activity","category":"page"},{"location":"changelog/#[0.4.60]-April-10,-2024","page":"Changelog","title":"[0.4.60] April 10, 2024","text":"","category":"section"},{"location":"changelog/#Added-11","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"RecordWhenActive to allow records to be deactivated during runtime, symbol :WhenActive\nRecordSubsolver to record the result of a subsolver recording in the main solver, symbol :Subsolver\nRecordStoppingReason to record the reason a solver stopped\nmade the RecordFactory more flexible and quite similar to DebugFactory, such that it is now also easy to specify recordings at the end of solver runs. This can especially be used to record final states of sub solvers.","category":"page"},{"location":"changelog/#Changed-12","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"being a bit more strict with internal tools and made the factories for record non-exported, so this is the same as for debug.","category":"page"},{"location":"changelog/#Fixed-5","page":"Changelog","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"The name :Subsolver to generate DebugWhenActive was misleading, it is now called :WhenActive referring to “print debug only when set active, that is by the parent (main) solver”.\nthe old version of specifying Symbol => RecordAction for later access was ambiguous, since","category":"page"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"it could also mean to store the action in the dictionary under that symbol. 
Hence the order for access was switched to RecordAction => Symbol to resolve that ambiguity.","category":"page"},{"location":"changelog/#[0.4.59]-April-7,-2024","page":"Changelog","title":"[0.4.59] April 7, 2024","text":"","category":"section"},{"location":"changelog/#Added-12","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"A Riemannian variant of the CMA-ES (Covariance Matrix Adaptation Evolutionary Strategy) algorithm, cma_es.","category":"page"},{"location":"changelog/#Fixed-6","page":"Changelog","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"The constructor dispatch for StopWhenAny with Vector had an incorrect element type assertion, which was fixed.","category":"page"},{"location":"changelog/#[0.4.58]-March-18,-2024","page":"Changelog","title":"[0.4.58] March 18, 2024","text":"","category":"section"},{"location":"changelog/#Added-13","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"more advanced methods to add debug to the beginning of an algorithm, a step, or the end of the algorithm with DebugAction entries at :Start, :BeforeIteration, :Iteration, and :Stop, respectively.\nIntroduce a Pair-based format to add elements to these hooks, while all others are now added to :Iteration (no longer to :All)\n(planned) add an easy possibility to also record the initial stage and not only after the first iteration.","category":"page"},{"location":"changelog/#Changed-13","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"Changed the symbol for the :Step dictionary to be :Iteration, to unify this with the symbols used in recording, and removed the :All symbol. On the fine granular scale, all but :Start debugs are now reset on init. 
Since these are merely internal entries in the debug dictionary, this is considered non-breaking.\nintroduce a StopWhenSwarmVelocityLess stopping criterion for particle_swarm replacing the current default of the swarm change, since this is a bit more efficient to compute","category":"page"},{"location":"changelog/#Fixed-7","page":"Changelog","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"fixed the outdated documentation of TruncatedConjugateGradientState, that now correctly states that p is no longer stored, but the algorithm runs on TpM.\nimplemented the missing get_iterate for TruncatedConjugateGradientState.","category":"page"},{"location":"changelog/#[0.4.57]-March-15,-2024","page":"Changelog","title":"[0.4.57] March 15, 2024","text":"","category":"section"},{"location":"changelog/#Changed-14","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"convex_bundle_method uses the sectional_curvature from ManifoldsBase.jl.\nconvex_bundle_method no longer has the unused k_min keyword argument.\nManifoldsBase.jl now runs on Documenter 1.3, Manopt.jl documentation now uses DocumenterInterLinks to refer to sections and functions from ManifoldsBase.jl","category":"page"},{"location":"changelog/#Fixed-8","page":"Changelog","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"fixes a typo that, when passing sub_kwargs to trust_regions, caused an error in the decoration of the sub objective.","category":"page"},{"location":"changelog/#[0.4.56]-March-4,-2024","page":"Changelog","title":"[0.4.56] March 4, 2024","text":"","category":"section"},{"location":"changelog/#Added-14","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"The option 
:step_towards_negative_gradient for nondescent_direction_behavior in quasi-Newton solvers no longer emits a warning by default. This has been moved to a message that can be accessed/displayed with DebugMessages\nDebugMessages now has a second positional argument, specifying whether all messages, or just the first (:Once) should be displayed.","category":"page"},{"location":"changelog/#[0.4.55]-March-3,-2024","page":"Changelog","title":"[0.4.55] March 3, 2024","text":"","category":"section"},{"location":"changelog/#Added-15","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"Option nondescent_direction_behavior for quasi-Newton solvers. By default it checks for a non-descent direction which may not be handled well by some stepsize selection algorithms.","category":"page"},{"location":"changelog/#Fixed-9","page":"Changelog","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"unified documentation, especially function signatures, further.\nfixed a few typos related to math formulae in the doc strings.","category":"page"},{"location":"changelog/#[0.4.54]-February-28,-2024","page":"Changelog","title":"[0.4.54] February 28, 2024","text":"","category":"section"},{"location":"changelog/#Added-16","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"convex_bundle_method optimization algorithm for non-smooth geodesically convex functions\nproximal_bundle_method optimization algorithm for non-smooth functions.\nStopWhenSubgradientNormLess and StopWhenLagrangeMultiplierLess stopping criteria.","category":"page"},{"location":"changelog/#Fixed-10","page":"Changelog","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"Doc strings now follow a vale.sh policy. 
Though this is not fully working, this PR improves a lot of the doc strings concerning wording and spelling.","category":"page"},{"location":"changelog/#[0.4.53]-February-13,-2024","page":"Changelog","title":"[0.4.53] February 13, 2024","text":"","category":"section"},{"location":"changelog/#Fixed-11","page":"Changelog","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"fixes two storage action defaults that accidentally still tried to initialize a :Population (as modified back to :Iterate in 0.4.49).\nfix a few typos in the documentation and add a reference for the subgradient method.","category":"page"},{"location":"changelog/#[0.4.52]-February-5,-2024","page":"Changelog","title":"[0.4.52] February 5, 2024","text":"","category":"section"},{"location":"changelog/#Added-17","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"introduce an environment-persistent way of setting global values with the set_manopt_parameter! 
function using Preferences.jl.\nintroduce such a value named :Mode to enable a \"Tutorial\" mode that shall often provide more warnings and information for people getting started with optimisation on manifolds","category":"page"},{"location":"changelog/#[0.4.51]-January-30,-2024","page":"Changelog","title":"[0.4.51] January 30, 2024","text":"","category":"section"},{"location":"changelog/#Added-18","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"A StopWhenSubgradientNormLess stopping criterion for subgradient-based optimization.\nAllow the message= of the DebugIfEntry debug action to contain a format element to print the field in the message as well.","category":"page"},{"location":"changelog/#[0.4.50]-January-26,-2024","page":"Changelog","title":"[0.4.50] January 26, 2024","text":"","category":"section"},{"location":"changelog/#Fixed-12","page":"Changelog","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"Fix Quasi Newton on complex manifolds.","category":"page"},{"location":"changelog/#[0.4.49]-January-18,-2024","page":"Changelog","title":"[0.4.49] January 18, 2024","text":"","category":"section"},{"location":"changelog/#Added-19","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"A StopWhenEntryChangeLess to be able to stop on arbitrarily small changes of specific fields\ngeneralises StopWhenGradientNormLess to accept arbitrary norm= functions\nrefactor the default in particle_swarm to no longer “misuse” the iteration change, but to actually use the new :swarm entry","category":"page"},{"location":"changelog/#[0.4.48]-January-16,-2024","page":"Changelog","title":"[0.4.48] January 16, 
2024","text":"","category":"section"},{"location":"changelog/#Fixed-13","page":"Changelog","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"fixes an imprecision in the interface of get_iterate that sometimes led to the swarm of particle_swarm being returned as the iterate.\nrefactor particle_swarm in naming and access functions to avoid this also in the future. To access the whole swarm, one now should use get_manopt_parameter(pss, :Population)","category":"page"},{"location":"changelog/#[0.4.47]-January-6,-2024","page":"Changelog","title":"[0.4.47] January 6, 2024","text":"","category":"section"},{"location":"changelog/#Fixed-14","page":"Changelog","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"fixed a bug, where the retraction set in check_Hessian was not passed on to the optional inner check_gradient call, which could lead to unwanted side effects, see #342.","category":"page"},{"location":"changelog/#[0.4.46]-January-1,-2024","page":"Changelog","title":"[0.4.46] January 1, 2024","text":"","category":"section"},{"location":"changelog/#Changed-15","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"An error is thrown when a line search from LineSearches.jl reports search failure.\nChanged default stopping criterion in ALM algorithm to mitigate an issue occurring when step size is very small.\nDefault memory length in default ALM subsolver is now capped at manifold dimension.\nReplaced CI testing on Julia 1.8 with testing on Julia 1.10.","category":"page"},{"location":"changelog/#Fixed-15","page":"Changelog","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"A bug in LineSearches.jl extension leading to slower convergence.\nFixed a bug in L-BFGS related to memory 
storage, which caused significantly slower convergence.","category":"page"},{"location":"changelog/#[0.4.45]-December-28,-2023","page":"Changelog","title":"[0.4.45] December 28, 2023","text":"","category":"section"},{"location":"changelog/#Added-20","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"Introduce sub_kwargs and sub_stopping_criterion for trust_regions as noticed in #336","category":"page"},{"location":"changelog/#Changed-16","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"WolfePowellLineSearch, ArmijoLineSearch step sizes now allocate less\nlinesearch_backtrack! is now available\nQuasi-Newton updates can work in-place on a direction vector as well.\nFaster safe_indices in L-BFGS.","category":"page"},{"location":"changelog/#[0.4.44]-December-12,-2023","page":"Changelog","title":"[0.4.44] December 12, 2023","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"Formally one could consider this version breaking, since a few functions have been moved that in earlier versions (0.3.x) were used in example scripts. These examples are now available again within ManoptExamples.jl, and with their “reappearance” the corresponding costs, gradients, differentials, adjoint differentials, and proximal maps have been moved there as well. This is not considered breaking, since the functions were only used in the old, removed examples. Each and every moved function is still documented. 
They have been partly renamed, and their documentation and testing have been extended.","category":"page"},{"location":"changelog/#Changed-17","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"Bumped and added dependencies on all 3 Project.toml files, the main one, the docs/, and the tutorials/ one.\nartificial_S2_lemniscate is available as ManoptExamples.Lemniscate and works on arbitrary manifolds now.\nartificial_S1_signal is available as ManoptExamples.artificial_S1_signal\nartificial_S1_slope_signal is available as ManoptExamples.artificial_S1_slope_signal\nartificial_S2_composite_bezier_curve is available as ManoptExamples.artificial_S2_composite_Bezier_curve\nartificial_S2_rotation_image is available as ManoptExamples.artificial_S2_rotation_image\nartificial_S2_whirl_image is available as ManoptExamples.artificial_S2_whirl_image\nartificial_S2_whirl_patch is available as ManoptExamples.artificial_S2_whirl_path\nartificial_SAR_image is available as ManoptExamples.artificial_SAR_image\nartificial_SPD_image is available as ManoptExamples.artificial_SPD_image\nartificial_SPD_image2 is available as ManoptExamples.artificial_SPD_image\nadjoint_differential_forward_logs is available as ManoptExamples.adjoint_differential_forward_logs\nadjoint_differential_bezier_control is available as ManoptExamples.adjoint_differential_Bezier_control_points\nBezierSegment is available as ManoptExamples.BeziérSegment\ncost_acceleration_bezier is available as ManoptExamples.acceleration_Bezier\ncost_L2_acceleration_bezier is available as ManoptExamples.L2_acceleration_Bezier\ncostIntrICTV12 is available as ManoptExamples.Intrinsic_infimal_convolution_TV12\ncostL2TV is available as ManoptExamples.L2_Total_Variation\ncostL2TV12 is available as ManoptExamples.L2_Total_Variation_1_2\ncostL2TV2 is available as ManoptExamples.L2_second_order_Total_Variation\ncostTV is available as 
ManoptExamples.Total_Variation\ncostTV2 is available as ManoptExamples.second_order_Total_Variation\nde_casteljau is available as ManoptExamples.de_Casteljau\ndifferential_forward_logs is available as ManoptExamples.differential_forward_logs\ndifferential_bezier_control is available as ManoptExamples.differential_Bezier_control_points\nforward_logs is available as ManoptExamples.forward_logs\nget_bezier_degree is available as ManoptExamples.get_Bezier_degree\nget_bezier_degrees is available as ManoptExamples.get_Bezier_degrees\nget_Bezier_inner_points is available as ManoptExamples.get_Bezier_inner_points\nget_bezier_junction_tangent_vectors is available as ManoptExamples.get_Bezier_junction_tangent_vectors\nget_bezier_junctions is available as ManoptExamples.get_Bezier_junctions\nget_bezier_points is available as ManoptExamples.get_Bezier_points\nget_bezier_segments is available as ManoptExamples.get_Bezier_segments\ngrad_acceleration_bezier is available as ManoptExamples.grad_acceleration_Bezier\ngrad_L2_acceleration_bezier is available as ManoptExamples.grad_L2_acceleration_Bezier\ngrad_Intrinsic_infimal_convolution_TV12 is available as ManoptExamples.Intrinsic_infimal_convolution_TV12\ngrad_TV is available as ManoptExamples.grad_Total_Variation\nproject_collaborative_TV is available as ManoptExamples.project_collaborative_TV\nprox_parallel_TV is available as ManoptExamples.prox_parallel_TV\ngrad_TV2 is available as ManoptExamples.prox_second_order_Total_Variation\nprox_TV is available as ManoptExamples.prox_Total_Variation\nprox_TV2 is available as ManoptExamples.prox_second_order_Total_Variation","category":"page"},{"location":"changelog/#[0.4.43]-November-19,-2023","page":"Changelog","title":"[0.4.43] November 19, 
2023","text":"","category":"section"},{"location":"changelog/#Added-21","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"vale.sh as a CI check to keep the documentation consistent","category":"page"},{"location":"changelog/#[0.4.42]-November-6,-2023","page":"Changelog","title":"[0.4.42] November 6, 2023","text":"","category":"section"},{"location":"changelog/#Added-22","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"add Manopt.JuMP_Optimizer implementing JuMP's solver interface","category":"page"},{"location":"changelog/#[0.4.41]-November-2,-2023","page":"Changelog","title":"[0.4.41] November 2, 2023","text":"","category":"section"},{"location":"changelog/#Changed-18","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"trust_regions is now more flexible and the sub solver (Steihaug-Toint tCG by default) can now be exchanged.\nadaptive_regularization_with_cubics is now more flexible as well; it was previously tied a bit too tightly to the Lanczos solver.\nUnified documentation notation and bumped dependencies to use DocumenterCitations 1.3","category":"page"},{"location":"changelog/#[0.4.40]-October-24,-2023","page":"Changelog","title":"[0.4.40] October 24, 2023","text":"","category":"section"},{"location":"changelog/#Added-23","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"add a --help argument to docs/make.jl to document all available command line arguments\nadd a --exclude-tutorials argument to docs/make.jl. 
This way, when quarto is not available on a computer, the docs can still be built with the tutorials not being added to the menu such that Documenter does not expect them to exist.","category":"page"},{"location":"changelog/#Changes","page":"Changelog","title":"Changes","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"Bump dependencies to ManifoldsBase.jl 0.15 and Manifolds.jl 0.9\nmove the ARC CG subsolver to the main package, since TangentSpace is now already available from ManifoldsBase.","category":"page"},{"location":"changelog/#[0.4.39]-October-9,-2023","page":"Changelog","title":"[0.4.39] October 9, 2023","text":"","category":"section"},{"location":"changelog/#Changes-2","page":"Changelog","title":"Changes","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"also use the pair of a retraction and the inverse retraction (see last update) to perform the relaxation within the Douglas-Rachford algorithm.","category":"page"},{"location":"changelog/#[0.4.38]-October-8,-2023","page":"Changelog","title":"[0.4.38] October 8, 2023","text":"","category":"section"},{"location":"changelog/#Changes-3","page":"Changelog","title":"Changes","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"avoid allocations when calling get_jacobian! 
within the Levenberg-Marquardt algorithm.","category":"page"},{"location":"changelog/#Fixed-16","page":"Changelog","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"Fix a lot of typos in the documentation","category":"page"},{"location":"changelog/#[0.4.37]-September-28,-2023","page":"Changelog","title":"[0.4.37] September 28, 2023","text":"","category":"section"},{"location":"changelog/#Changes-4","page":"Changelog","title":"Changes","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"add more of the Riemannian Levenberg-Marquardt algorithm's parameters as keywords, so they can be changed on call\ngeneralize the internal reflection of Douglas-Rachford, such that it also works with an arbitrary pair of a reflection and an inverse reflection.","category":"page"},{"location":"changelog/#[0.4.36]-September-20,-2023","page":"Changelog","title":"[0.4.36] September 20, 2023","text":"","category":"section"},{"location":"changelog/#Fixed-17","page":"Changelog","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"Fixed a bug that caused non-matrix points and vectors to fail when working with approximate","category":"page"},{"location":"changelog/#[0.4.35]-September-14,-2023","page":"Changelog","title":"[0.4.35] September 14, 2023","text":"","category":"section"},{"location":"changelog/#Added-24","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"The access to functions of the objective is now unified and encapsulated in proper get_ functions.","category":"page"},{"location":"changelog/#[0.4.34]-September-02,-2023","page":"Changelog","title":"[0.4.34] September 02, 
2023","text":"","category":"section"},{"location":"changelog/#Added-25","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"a ManifoldEuclideanGradientObjective to allow the cost, gradient, and Hessian and other first- or second-derivative-based elements to be Euclidean and converted when needed.\na keyword objective_type=:Euclidean for all solvers that specifies that an objective of the new type shall be created","category":"page"},{"location":"changelog/#[0.4.33]-August-24,-2023","page":"Changelog","title":"[0.4.33] August 24, 2023","text":"","category":"section"},{"location":"changelog/#Added-26","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"ConstantStepsize and DecreasingStepsize now have an additional field type::Symbol to assess whether the step-size should be relatively (to the gradient norm) or absolutely constant.","category":"page"},{"location":"changelog/#[0.4.32]-August-23,-2023","page":"Changelog","title":"[0.4.32] August 23, 2023","text":"","category":"section"},{"location":"changelog/#Added-27","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"The adaptive regularization with cubics (ARC) solver.","category":"page"},{"location":"changelog/#[0.4.31]-August-14,-2023","page":"Changelog","title":"[0.4.31] August 14, 2023","text":"","category":"section"},{"location":"changelog/#Added-28","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"A :Subsolver keyword in the debug= keyword argument, that activates the new DebugWhenActive to de/activate subsolver debug from the main solver's DebugEvery.","category":"page"},{"location":"changelog/#[0.4.30]-August-3,-2023","page":"Changelog","title":"[0.4.30] August 3, 
2023","text":"","category":"section"},{"location":"changelog/#Changed-19","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"References in the documentation are now rendered using DocumenterCitations.jl\nAsymptote export now also accepts a size in pixels instead of its default 4cm size, and rendering can be deactivated by setting it to nothing.","category":"page"},{"location":"changelog/#[0.4.29]-July-12,-2023","page":"Changelog","title":"[0.4.29] July 12, 2023","text":"","category":"section"},{"location":"changelog/#Fixed-18","page":"Changelog","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"fixed a bug where cyclic_proximal_point did not work with decorated objectives.","category":"page"},{"location":"changelog/#[0.4.28]-June-24,-2023","page":"Changelog","title":"[0.4.28] June 24, 2023","text":"","category":"section"},{"location":"changelog/#Changed-20","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"max_stepsize was specialized for FixedRankManifold to follow Matlab Manopt.","category":"page"},{"location":"changelog/#[0.4.27]-June-15,-2023","page":"Changelog","title":"[0.4.27] June 15, 2023","text":"","category":"section"},{"location":"changelog/#Added-29","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"The AdaptiveWNGrad stepsize is available as a new stepsize functor.","category":"page"},{"location":"changelog/#Fixed-19","page":"Changelog","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"Levenberg-Marquardt now possesses its parameters initial_residual_values and initial_jacobian_f also as keyword arguments, such that their default initialisations can be 
adapted, if necessary","category":"page"},{"location":"changelog/#[0.4.26]-June-11,-2023","page":"Changelog","title":"[0.4.26] June 11, 2023","text":"","category":"section"},{"location":"changelog/#Added-30","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"simplify usage of gradient descent as sub solver in the DoC solvers.\nadd a get_state function\ndocument indicates_convergence.","category":"page"},{"location":"changelog/#[0.4.25]-June-5,-2023","page":"Changelog","title":"[0.4.25] June 5, 2023","text":"","category":"section"},{"location":"changelog/#Fixed-20","page":"Changelog","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"Fixes an allocation bug in the difference of convex algorithm","category":"page"},{"location":"changelog/#[0.4.24]-June-4,-2023","page":"Changelog","title":"[0.4.24] June 4, 2023","text":"","category":"section"},{"location":"changelog/#Added-31","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"another workflow that deletes old PR renderings from the docs to keep them smaller in overall size.","category":"page"},{"location":"changelog/#Changes-5","page":"Changelog","title":"Changes","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"bump dependencies since the extension between Manifolds.jl and ManifoldsDiff.jl has been moved to Manifolds.jl","category":"page"},{"location":"changelog/#[0.4.23]-June-4,-2023","page":"Changelog","title":"[0.4.23] June 4, 2023","text":"","category":"section"},{"location":"changelog/#Added-32","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"More details on the Count and Cache 
tutorial","category":"page"},{"location":"changelog/#Changed-21","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"loosen constraints slightly","category":"page"},{"location":"changelog/#[0.4.22]-May-31,-2023","page":"Changelog","title":"[0.4.22] May 31, 2023","text":"","category":"section"},{"location":"changelog/#Added-33","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"A tutorial on how to implement a solver","category":"page"},{"location":"changelog/#[0.4.21]-May-22,-2023","page":"Changelog","title":"[0.4.21] May 22, 2023","text":"","category":"section"},{"location":"changelog/#Added-34","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"A ManifoldCacheObjective as a decorator for objectives to cache results of calls, using LRU Caches as a weak dependency. For now this works with cost and gradient evaluations\nA ManifoldCountObjective as a decorator for objectives to enable counting of calls to for example the cost and the gradient\nadds a return_objective keyword, that switches the return of a solver to a tuple (o, s), where o is the (possibly decorated) objective, and s is the “classical” solver return (state or point). 
This way the counted values can be accessed and the cache can be reused.\nchange solvers on the mid level (of the form solver(M, objective, p)) to also accept decorated objectives","category":"page"},{"location":"changelog/#Changed-22","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"Switch all Requires weak dependencies to actual weak dependencies starting in Julia 1.9","category":"page"},{"location":"changelog/#[0.4.20]-May-11,-2023","page":"Changelog","title":"[0.4.20] May 11, 2023","text":"","category":"section"},{"location":"changelog/#Changed-23","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"the default tolerances for the numerical check_ functions were loosened a bit, such that check_vector can also be changed in its tolerances.","category":"page"},{"location":"changelog/#[0.4.19]-May-7,-2023","page":"Changelog","title":"[0.4.19] May 7, 2023","text":"","category":"section"},{"location":"changelog/#Added-35","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"the sub solver for trust_regions is now customizable and can be exchanged.","category":"page"},{"location":"changelog/#Changed-24","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"slightly changed the definitions of the solver states for ALM and EPM to be type stable","category":"page"},{"location":"changelog/#[0.4.18]-May-4,-2023","page":"Changelog","title":"[0.4.18] May 4, 2023","text":"","category":"section"},{"location":"changelog/#Added-36","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"A function check_Hessian(M, f, grad_f, Hess_f) to numerically 
verify the (Riemannian) Hessian of a function f","category":"page"},{"location":"changelog/#[0.4.17]-April-28,-2023","page":"Changelog","title":"[0.4.17] April 28, 2023","text":"","category":"section"},{"location":"changelog/#Added-37","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"A new interface of the form alg(M, objective, p0) to allow reusing objectives without creating AbstractManoptSolverStates and calling solve!. This especially still allows for any decoration of the objective and/or the state using debug=, or record=.","category":"page"},{"location":"changelog/#Changed-25","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"All solvers now have the initial point p as an optional parameter making it more accessible to first-time users; gradient_descent(M, f, grad_f) is equivalent to gradient_descent(M, f, grad_f, rand(M))","category":"page"},{"location":"changelog/#Fixed-21","page":"Changelog","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"Unified the framework to work on manifolds where points are represented by numbers for several solvers","category":"page"},{"location":"changelog/#[0.4.16]-April-18,-2023","page":"Changelog","title":"[0.4.16] April 18, 2023","text":"","category":"section"},{"location":"changelog/#Fixed-22","page":"Changelog","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"the inner products used in truncated_gradient_descent now also work thoroughly on complex matrix manifolds","category":"page"},{"location":"changelog/#[0.4.15]-April-13,-2023","page":"Changelog","title":"[0.4.15] April 13, 
2023","text":"","category":"section"},{"location":"changelog/#Changed-26","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"trust_regions(M, f, grad_f, hess_f, p) now has the Hessian hess_f as well as the start point p0 as optional parameters and approximates the Hessian otherwise.\ntrust_regions!(M, f, grad_f, hess_f, p) has the Hessian as an optional parameter and approximates it otherwise.","category":"page"},{"location":"changelog/#Removed-2","page":"Changelog","title":"Removed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"support for ManifoldsBase.jl 0.13.x, since with the definition of copy(M,p::Number) in 0.14.4, that one is used instead of our own definition.","category":"page"},{"location":"changelog/#[0.4.14]-April-06,-2023","page":"Changelog","title":"[0.4.14] April 06, 2023","text":"","category":"section"},{"location":"changelog/#Changed-27","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"particle_swarm now uses much more in-place operations","category":"page"},{"location":"changelog/#Fixed-23","page":"Changelog","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"particle_swarm used quite a few deepcopy(p) commands still, which were replaced by copy(M, p)","category":"page"},{"location":"changelog/#[0.4.13]-April-09,-2023","page":"Changelog","title":"[0.4.13] April 09, 2023","text":"","category":"section"},{"location":"changelog/#Added-38","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"get_message to obtain messages from sub steps of a solver\nDebugMessages to display the new messages in debug\nsafeguards in Armijo line search and L-BFGS against numerical over- 
and underflow that report in messages","category":"page"},{"location":"changelog/#[0.4.12]-April-4,-2023","page":"Changelog","title":"[0.4.12] April 4, 2023","text":"","category":"section"},{"location":"changelog/#Added-39","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"Introduce the Difference of Convex Algorithm (DCA) difference_of_convex_algorithm(M, f, g, ∂h, p0)\nIntroduce the Difference of Convex Proximal Point Algorithm (DCPPA) difference_of_convex_proximal_point(M, prox_g, grad_h, p0)\nIntroduce a StopWhenGradientChangeLess stopping criterion","category":"page"},{"location":"changelog/#[0.4.11]-March-27,-2023","page":"Changelog","title":"[0.4.11] March 27, 2023","text":"","category":"section"},{"location":"changelog/#Changed-28","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"adapt tolerances in tests to the speed/accuracy optimized distance on the sphere in Manifolds.jl (part II)","category":"page"},{"location":"changelog/#[0.4.10]-March-26,-2023","page":"Changelog","title":"[0.4.10] March 26, 2023","text":"","category":"section"},{"location":"changelog/#Changed-29","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"adapt tolerances in tests to the speed/accuracy optimized distance on the sphere in Manifolds.jl","category":"page"},{"location":"changelog/#[0.4.9]-March-3,-2023","page":"Changelog","title":"[0.4.9] March 3, 2023","text":"","category":"section"},{"location":"changelog/#Added-40","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"introduce a wrapper that allows line searches from LineSearches.jl to be used within Manopt.jl, introduce the manoptjl.org/stable/extensions/ page to explain the 
details.","category":"page"},{"location":"changelog/#[0.4.8]-February-21,-2023","page":"Changelog","title":"[0.4.8] February 21, 2023","text":"","category":"section"},{"location":"changelog/#Added-41","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"a status_summary that displays the main parameters within several structures of Manopt, most prominently a solver state","category":"page"},{"location":"changelog/#Changed-30","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"Improved storage performance by introducing separate named tuples for points and vectors\nchanged the show methods of AbstractManoptSolverStates to display their status_summary\nMoved tutorials to be rendered with Quarto into the documentation.","category":"page"},{"location":"changelog/#[0.4.7]-February-14,-2023","page":"Changelog","title":"[0.4.7] February 14, 2023","text":"","category":"section"},{"location":"changelog/#Changed-31","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"Bump [compat] entry of ManifoldDiff to also include 0.3","category":"page"},{"location":"changelog/#[0.4.6]-February-3,-2023","page":"Changelog","title":"[0.4.6] February 3, 2023","text":"","category":"section"},{"location":"changelog/#Fixed-24","page":"Changelog","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"Fixed that a few stopping criteria indicated to stop even before the algorithm started.","category":"page"},{"location":"changelog/#[0.4.5]-January-24,-2023","page":"Changelog","title":"[0.4.5] January 24, 
2023","text":"","category":"section"},{"location":"changelog/#Changed-32","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"the new default functions that include p are used where possible\na first step towards faster storage handling","category":"page"},{"location":"changelog/#[0.4.4]-January-20,-2023","page":"Changelog","title":"[0.4.4] January 20, 2023","text":"","category":"section"},{"location":"changelog/#Added-42","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"Introduce ConjugateGradientBealeRestart to allow CG restarts using Beale's rule","category":"page"},{"location":"changelog/#Fixed-25","page":"Changelog","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"fix a typo in HestenesStiefelCoefficient","category":"page"},{"location":"changelog/#[0.4.3]-January-17,-2023","page":"Changelog","title":"[0.4.3] January 17, 2023","text":"","category":"section"},{"location":"changelog/#Fixed-26","page":"Changelog","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"the CG coefficient β can now be complex\nfix a bug in grad_distance","category":"page"},{"location":"changelog/#[0.4.2]-January-16,-2023","page":"Changelog","title":"[0.4.2] January 16, 2023","text":"","category":"section"},{"location":"changelog/#Changed-33","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"the usage of inner in line search methods, such that they work well with complex manifolds as well","category":"page"},{"location":"changelog/#[0.4.1]-January-15,-2023","page":"Changelog","title":"[0.4.1] January 15, 
2023","text":"","category":"section"},{"location":"changelog/#Fixed-27","page":"Changelog","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"a max_stepsize per manifold to avoid leaving the injectivity radius, which the step size also defaults to","category":"page"},{"location":"changelog/#[0.4.0]-January-10,-2023","page":"Changelog","title":"[0.4.0] January 10, 2023","text":"","category":"section"},{"location":"changelog/#Added-43","page":"Changelog","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"Dependency on ManifoldDiff.jl and a start of moving actual derivatives, differentials, and gradients there.\nAbstractManifoldObjective to store the objective within the AbstractManoptProblem\nIntroduce a CostGrad structure to store a function that computes the cost and gradient within one function.\nstarted a changelog.md to thoroughly keep track of changes","category":"page"},{"location":"changelog/#Changed-34","page":"Changelog","title":"Changed","text":"","category":"section"},{"location":"changelog/","page":"Changelog","title":"Changelog","text":"AbstractManoptProblem replaces Problem\nthe problem now contains an AbstractManifoldObjective\nAbstractManoptSolverState replaces Options\nrandom_point(M) is replaced by rand(M) from ManifoldsBase.jl\nrandom_tangent(M, p) is replaced by rand(M; vector_at=p)","category":"page"},{"location":"solvers/gradient_descent/#Gradient-descent","page":"Gradient Descent","title":"Gradient descent","text":"","category":"section"},{"location":"solvers/gradient_descent/","page":"Gradient Descent","title":"Gradient Descent","text":"CurrentModule = Manopt","category":"page"},{"location":"solvers/gradient_descent/","page":"Gradient Descent","title":"Gradient Descent","text":"gradient_descent\ngradient_descent!","category":"page"},{"location":"solvers/gradient_descent/#Manopt.gradient_descent","page":"Gradient 
Descent","title":"Manopt.gradient_descent","text":"gradient_descent(M, f, grad_f, p=rand(M); kwargs...)\ngradient_descent(M, gradient_objective, p=rand(M); kwargs...)\ngradient_descent!(M, f, grad_f, p; kwargs...)\ngradient_descent!(M, gradient_objective, p; kwargs...)\n\nperform the gradient descent algorithm\n\np_{k+1} = \\operatorname{retr}_{p_k}\\bigl(-s_k \\operatorname{grad} f(p_k)\\bigr),\n\\qquad k=0,1,…\n\nwhere s_k > 0 denotes a step size.\n\nThe algorithm can be performed in-place of p.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f: mathcal M → ℝ implemented as (M, p) -> v\ngrad_f: the (Riemannian) gradient operatornamegradf: \\mathcal M → T_{p}\\mathcal M of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place\np: a point on the manifold mathcal M\n\nAlternatively to f and grad_f you can provide the corresponding AbstractManifoldGradientObjective gradient_objective directly.\n\nKeyword arguments\n\ndirection=IdentityUpdateRule(): specify to perform a certain processing of the direction, for example Nesterov, MomentumGradient or AverageGradient.\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). 
Since usually the first argument is the manifold, the modified argument is the second. For example grad_f(M, p) allocates, but grad_f!(M, X, p) computes the result in-place of X.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstepsize=default_stepsize(M, GradientDescentState): a functor inheriting from Stepsize to determine a step size\nstopping_criterion=StopAfterIteration(200)|StopWhenGradientNormLess(1e-8): a functor indicating that the stopping criterion is fulfilled\nX=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal M storing the gradient at the current iterate\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nIf you provide the ManifoldGradientObjective directly, the evaluation= keyword is ignored. The decorations are still applied to the objective.\n\nIf you activate tutorial mode (cf. is_tutorial_mode), this solver provides additional debug warnings.\n\nOutput\n\nThe obtained approximate minimizer p^*. 
To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/gradient_descent/#Manopt.gradient_descent!","page":"Gradient Descent","title":"Manopt.gradient_descent!","text":"gradient_descent(M, f, grad_f, p=rand(M); kwargs...)\ngradient_descent(M, gradient_objective, p=rand(M); kwargs...)\ngradient_descent!(M, f, grad_f, p; kwargs...)\ngradient_descent!(M, gradient_objective, p; kwargs...)\n\nperform the gradient descent algorithm\n\np_{k+1} = \\operatorname{retr}_{p_k}\\bigl(-s_k \\operatorname{grad} f(p_k)\\bigr),\n\\qquad k=0,1,…\n\nwhere s_k > 0 denotes a step size.\n\nThe algorithm can be performed in-place of p.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f: mathcal M → ℝ implemented as (M, p) -> v\ngrad_f: the (Riemannian) gradient operatornamegradf: \\mathcal M → T_{p}\\mathcal M of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place\np: a point on the manifold mathcal M\n\nAlternatively to f and grad_f you can provide the corresponding AbstractManifoldGradientObjective gradient_objective directly.\n\nKeyword arguments\n\ndirection=IdentityUpdateRule(): specify to perform a certain processing of the direction, for example Nesterov, MomentumGradient or AverageGradient.\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). 
Since usually the first argument is the manifold, the modified argument is the second. For example grad_f(M, p) allocates, but grad_f!(M, X, p) computes the result in-place of X.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstepsize=default_stepsize(M, GradientDescentState): a functor inheriting from Stepsize to determine a step size\nstopping_criterion=StopAfterIteration(200)|StopWhenGradientNormLess(1e-8): a functor indicating that the stopping criterion is fulfilled\nX=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal M storing the gradient at the current iterate\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nIf you provide the ManifoldGradientObjective directly, the evaluation= keyword is ignored. The decorations are still applied to the objective.\n\nIf you activate tutorial mode (cf. is_tutorial_mode), this solver provides additional debug warnings.\n\nOutput\n\nThe obtained approximate minimizer p^*. 
To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/gradient_descent/#State","page":"Gradient Descent","title":"State","text":"","category":"section"},{"location":"solvers/gradient_descent/","page":"Gradient Descent","title":"Gradient Descent","text":"GradientDescentState","category":"page"},{"location":"solvers/gradient_descent/#Manopt.GradientDescentState","page":"Gradient Descent","title":"Manopt.GradientDescentState","text":"GradientDescentState{P,T} <: AbstractGradientSolverState\n\nDescribes the state of a gradient-based descent algorithm.\n\nFields\n\np::P: a point on the manifold mathcal M storing the current iterate\nX::T: a tangent vector at the point p on the manifold mathcal M storing the gradient at the current iterate\nstop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled\nstepsize::Stepsize: a functor inheriting from Stepsize to determine a step size\ndirection::DirectionUpdateRule: a processor to handle the obtained gradient and compute a direction to “walk into”.\nretraction_method::AbstractRetractionMethod: a retraction operatornameretr to use, see the section on retractions\n\nConstructor\n\nGradientDescentState(M::AbstractManifold; kwargs...)\n\nInitialize the gradient descent solver state, where\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\n\nKeyword arguments\n\ndirection=IdentityUpdateRule()\np=rand(M): a point on the manifold mathcal M to specify the initial value\nstopping_criterion=StopAfterIteration(100): a functor indicating that the stopping criterion is fulfilled\nstepsize=default_stepsize(M, GradientDescentState; retraction_method=retraction_method): a functor inheriting from Stepsize to determine a step size\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nX=zero_vector(M, p): a 
tangent vector at the point p on the manifold mathcal M to specify the representation of a tangent vector\n\nSee also\n\ngradient_descent\n\n\n\n\n\n","category":"type"},{"location":"solvers/gradient_descent/#Direction-update-rules","page":"Gradient Descent","title":"Direction update rules","text":"","category":"section"},{"location":"solvers/gradient_descent/","page":"Gradient Descent","title":"Gradient Descent","text":"A field of the options is the direction, a DirectionUpdateRule, which by default, IdentityUpdateRule, just evaluates the gradient, but can be enhanced for example to","category":"page"},{"location":"solvers/gradient_descent/","page":"Gradient Descent","title":"Gradient Descent","text":"AverageGradient\nDirectionUpdateRule\nIdentityUpdateRule\nMomentumGradient\nNesterov","category":"page"},{"location":"solvers/gradient_descent/#Manopt.AverageGradient","page":"Gradient Descent","title":"Manopt.AverageGradient","text":"AverageGradient(; kwargs...)\nAverageGradient(M::AbstractManifold; kwargs...)\n\nAdd an average of gradients to a gradient processor. A set of previous directions (from the inner processor) and the last iterate are stored; the average is taken after vector transporting them to the current iterate's tangent space.\n\nInput\n\nM (optional)\n\nKeyword arguments\n\np=rand(M): a point on the manifold mathcal M to specify the initial value\ndirection=IdentityUpdateRule preprocess the actual gradient before the average is taken\ngradients=[zero_vector(M, p) for _ in 1:n] how to initialise the internal storage\nn=10 number of gradient evaluations to take the mean over\nX=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal M\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\ninfo: Info\nThis function generates a ManifoldDefaultsFactory for AverageGradientRule. 
For default values that depend on the manifold, this factory postpones the construction until the manifold from, for example, a corresponding AbstractManoptSolverState is available.\n\n\n\n\n\n","category":"function"},{"location":"solvers/gradient_descent/#Manopt.DirectionUpdateRule","page":"Gradient Descent","title":"Manopt.DirectionUpdateRule","text":"DirectionUpdateRule\n\nA general functor that handles direction update rules. Its fields are usually only a StoreStateAction by default initialized to the fields required for the specific coefficient, but can also be replaced by a (common, global) individual one that provides these values.\n\n\n\n\n\n","category":"type"},{"location":"solvers/gradient_descent/#Manopt.IdentityUpdateRule","page":"Gradient Descent","title":"Manopt.IdentityUpdateRule","text":"IdentityUpdateRule <: DirectionUpdateRule\n\nThe default gradient direction update is the identity, usually it just evaluates the gradient.\n\nYou can also use Gradient() to create the corresponding factory, though this only delays this parameter-free instantiation to later.\n\n\n\n\n\n","category":"type"},{"location":"solvers/gradient_descent/#Manopt.MomentumGradient","page":"Gradient Descent","title":"Manopt.MomentumGradient","text":"MomentumGradient()\n\nAppend a momentum to a gradient processor, where the last direction and last iterate are stored and the new direction is composed as η_i = m η_{i-1} - s d_i, where s d_i is the current (inner) direction and η_{i-1} is the vector-transported last direction multiplied by the momentum m.\n\nInput\n\nM (optional)\n\nKeyword arguments\n\np=rand(M): a point on the manifold mathcal M\ndirection=IdentityUpdateRule preprocess the actual gradient before adding momentum\nX=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal M\nmomentum=0.2 amount of momentum to use\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\ninfo: 
Info\nThis function generates a ManifoldDefaultsFactory for MomentumGradientRule. For default values that depend on the manifold, this factory postpones the construction until the manifold from, for example, a corresponding AbstractManoptSolverState is available.\n\n\n\n\n\n","category":"function"},{"location":"solvers/gradient_descent/#Manopt.Nesterov","page":"Gradient Descent","title":"Manopt.Nesterov","text":"Nesterov(; kwargs...)\nNesterov(M::AbstractManifold; kwargs...)\n\nAssume f is L-Lipschitz and μ-strongly convex. Given\n\na step size h_k ≤ \\frac{1}{L} (from the GradientDescentState)\na shrinkage parameter β_k\nand a current iterate p_k\nas well as the interim values γ_k and v_k from the previous iterate.\n\nThis computes a Nesterov type update using the following steps, see [ZS18]\n\nCompute the positive root α_k ∈ (0,1) of α_k^2 = h_k\\bigl((1-α_k)γ_k + α_k μ\\bigr).\nSet \\bar γ_{k+1} = (1-α_k)γ_k + α_k μ\ny_k = \\operatorname{retr}_{p_k}\\Bigl(\\frac{α_k γ_k}{γ_k + α_k μ}\\operatorname{retr}^{-1}_{p_k}v_k \\Bigr)\nx_{k+1} = \\operatorname{retr}_{y_k}(-h_k \\operatorname{grad} f(y_k))\nv_{k+1} = \\operatorname{retr}_{y_k}\\Bigl(\\frac{(1-α_k)γ_k}{\\bar γ_{k+1}}\\operatorname{retr}_{y_k}^{-1}(v_k) - \\frac{α_k}{\\bar γ_{k+1}}\\operatorname{grad} f(y_k) \\Bigr)\nγ_{k+1} = \\frac{1}{1+β_k}\\bar γ_{k+1}\n\nThen the direction from p_k to p_{k+1}, given by d = \\operatorname{retr}^{-1}_{p_k}p_{k+1}, is returned.\n\nInput\n\nM (optional)\n\nKeyword arguments\n\np=rand(M): a point on the manifold mathcal M to specify the initial value\nγ=0.001\nμ=0.9\nshrinkage = k -> 0.8\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\n\ninfo: Info\nThis function generates a ManifoldDefaultsFactory for NesterovRule. 
For default values that depend on the manifold, this factory postpones the construction until the manifold from, for example, a corresponding AbstractManoptSolverState is available.\n\n\n\n\n\n","category":"function"},{"location":"solvers/gradient_descent/","page":"Gradient Descent","title":"Gradient Descent","text":"which internally use the ManifoldDefaultsFactory and produce the internal elements","category":"page"},{"location":"solvers/gradient_descent/","page":"Gradient Descent","title":"Gradient Descent","text":"Manopt.AverageGradientRule\nManopt.ConjugateDescentCoefficientRule\nManopt.MomentumGradientRule\nManopt.NesterovRule","category":"page"},{"location":"solvers/gradient_descent/#Manopt.AverageGradientRule","page":"Gradient Descent","title":"Manopt.AverageGradientRule","text":"AverageGradientRule <: DirectionUpdateRule\n\nAdd an average of gradients to a gradient processor. A set of previous directions (from the inner processor) and the last iterate are stored. The average is taken after vector transporting them to the current iterate's tangent space.\n\nFields\n\ngradients: the last n gradient/direction updates\nlast_iterate: last iterate (needed to transport the gradients)\ndirection: internal DirectionUpdateRule to determine directions to apply the averaging to\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\nConstructors\n\nAverageGradientRule(\n    M::AbstractManifold;\n    p::P=rand(M),\n    n::Int=10,\n    direction::Union{<:DirectionUpdateRule,ManifoldDefaultsFactory}=IdentityUpdateRule(),\n    gradients = fill(zero_vector(M, p), n),\n    last_iterate = copy(M, p),\n    vector_transport_method = default_vector_transport_method(M, typeof(p))\n)\n\nAdd average to a gradient problem, where\n\nn: determines the size of averaging\ndirection: is the internal DirectionUpdateRule to determine the gradients to store\ngradients: can be pre-filled with some 
history\nlast_iterate: stores the last iterate\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\n\n\n\n\n","category":"type"},{"location":"solvers/gradient_descent/#Manopt.ConjugateDescentCoefficientRule","page":"Gradient Descent","title":"Manopt.ConjugateDescentCoefficientRule","text":"ConjugateDescentCoefficientRule <: DirectionUpdateRule\n\nA functor (problem, state, k) -> β_k to compute the conjugate gradient update coefficient adapted to manifolds\n\nSee also conjugate_gradient_descent\n\nConstructor\n\nConjugateDescentCoefficientRule()\n\nConstruct the conjugate descent coefficient update rule, a new storage is created by default.\n\nSee also\n\nConjugateDescentCoefficient, conjugate_gradient_descent\n\n\n\n\n\n","category":"type"},{"location":"solvers/gradient_descent/#Manopt.MomentumGradientRule","page":"Gradient Descent","title":"Manopt.MomentumGradientRule","text":"MomentumGradientRule <: DirectionUpdateRule\n\nStore the necessary information to compute the MomentumGradient direction update.\n\nFields\n\np_old::P: a point on the manifold mathcal M\nmomentum::Real: factor for the momentum\ndirection: internal DirectionUpdateRule to determine directions to add the momentum to.\nvector_transport_method::AbstractVectorTransportMethodP: a vector transport mathcal T_ to use, see the section on vector transports\nX_old::T: a tangent vector at the point p on the manifold mathcal M\n\nConstructors\n\nMomentumGradientRule(M::AbstractManifold; kwargs...)\n\nInitialize a momentum gradient rule to s, where p and X are memory for interim values.\n\nKeyword arguments\n\np=rand(M): a point on the manifold mathcal M\ns=IdentityUpdateRule()\nmomentum=0.2\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\nX=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal 
M\n\nSee also\n\nMomentumGradient\n\n\n\n\n\n","category":"type"},{"location":"solvers/gradient_descent/#Manopt.NesterovRule","page":"Gradient Descent","title":"Manopt.NesterovRule","text":"NesterovRule <: DirectionUpdateRule\n\nCompute a Nesterov inspired direction update rule. See Nesterov for details.\n\nFields\n\nγ::Real, μ::Real: coefficients from the last iterate\nv::P: an interim point to compute the next gradient evaluation point y_k\nshrinkage: a function k -> ... to compute the shrinkage β_k per iterate k.\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\n\nConstructor\n\nNesterovRule(M::AbstractManifold; kwargs...)\n\nKeyword arguments\n\np=rand(M): a point on the manifold mathcal M to specify the initial value\nγ=0.001\nμ=0.9\nshrinkage = k -> 0.8\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\n\nSee also\n\nNesterov\n\n\n\n\n\n","category":"type"},{"location":"solvers/gradient_descent/#Debug-actions","page":"Gradient Descent","title":"Debug actions","text":"","category":"section"},{"location":"solvers/gradient_descent/","page":"Gradient Descent","title":"Gradient Descent","text":"DebugGradient\nDebugGradientNorm\nDebugStepsize","category":"page"},{"location":"solvers/gradient_descent/#Manopt.DebugGradient","page":"Gradient Descent","title":"Manopt.DebugGradient","text":"DebugGradient <: DebugAction\n\ndebug for the gradient evaluated at the current iterate\n\nConstructors\n\nDebugGradient(; long=false, prefix= , format= \"$prefix%s\", io=stdout)\n\ndisplay the short (false) or long (true) default text for the gradient, or set the prefix manually. 
Alternatively the complete format can be set.\n\n\n\n\n\n","category":"type"},{"location":"solvers/gradient_descent/#Manopt.DebugGradientNorm","page":"Gradient Descent","title":"Manopt.DebugGradientNorm","text":"DebugGradientNorm <: DebugAction\n\ndebug for the gradient norm evaluated at the current iterate.\n\nConstructors\n\nDebugGradientNorm([long=false,p=print])\n\ndisplay the short (false) or long (true) default text for the gradient norm.\n\nDebugGradientNorm(prefix[, p=print])\n\ndisplay a prefix in front of the gradient norm.\n\n\n\n\n\n","category":"type"},{"location":"solvers/gradient_descent/#Manopt.DebugStepsize","page":"Gradient Descent","title":"Manopt.DebugStepsize","text":"DebugStepsize <: DebugAction\n\ndebug for the current step size.\n\nConstructors\n\nDebugStepsize(;long=false,prefix=\"step size:\", format=\"$prefix%s\", io=stdout)\n\ndisplay a prefix in front of the step size.\n\n\n\n\n\n","category":"type"},{"location":"solvers/gradient_descent/#Record-actions","page":"Gradient Descent","title":"Record actions","text":"","category":"section"},{"location":"solvers/gradient_descent/","page":"Gradient Descent","title":"Gradient Descent","text":"RecordGradient\nRecordGradientNorm\nRecordStepsize","category":"page"},{"location":"solvers/gradient_descent/#Manopt.RecordGradient","page":"Gradient Descent","title":"Manopt.RecordGradient","text":"RecordGradient <: RecordAction\n\nrecord the gradient evaluated at the current iterate\n\nConstructors\n\nRecordGradient(ξ)\n\ninitialize the RecordAction to the corresponding type of the tangent vector.\n\n\n\n\n\n","category":"type"},{"location":"solvers/gradient_descent/#Manopt.RecordGradientNorm","page":"Gradient Descent","title":"Manopt.RecordGradientNorm","text":"RecordGradientNorm <: RecordAction\n\nrecord the norm of the current gradient\n\n\n\n\n\n","category":"type"},{"location":"solvers/gradient_descent/#Manopt.RecordStepsize","page":"Gradient 
Descent","title":"Manopt.RecordStepsize","text":"RecordStepsize <: RecordAction\n\nrecord the step size\n\n\n\n\n\n","category":"type"},{"location":"solvers/gradient_descent/#sec-gradient-descent-technical-details","page":"Gradient Descent","title":"Technical details","text":"","category":"section"},{"location":"solvers/gradient_descent/","page":"Gradient Descent","title":"Gradient Descent","text":"The gradient_descent solver requires the following functions of a manifold to be available","category":"page"},{"location":"solvers/gradient_descent/","page":"Gradient Descent","title":"Gradient Descent","text":"A retract!(M, q, p, X); it is recommended to set the default_retraction_method to a favourite retraction. If this default is set, a retraction_method= does not have to be specified.\nBy default gradient descent uses ArmijoLinesearch which requires max_stepsize(M) to be set and an implementation of inner(M, p, X).\nBy default the stopping criterion uses the norm as well, to stop when the norm of the gradient is small, but if you implemented inner, the norm is provided already.\nBy default the tangent vector storing the gradient is initialized calling zero_vector(M,p).","category":"page"},{"location":"solvers/gradient_descent/#Literature","page":"Gradient Descent","title":"Literature","text":"","category":"section"},{"location":"solvers/gradient_descent/","page":"Gradient Descent","title":"Gradient Descent","text":"D. G. Luenberger. The gradient projection method along geodesics. Management Science 18, 620–631 (1972).\n\n\n\nH. Zhang and S. Sra. 
Towards Riemannian accelerated gradient methods, arXiv Preprint, 1806.02812 (2018).\n\n\n\n","category":"page"},{"location":"solvers/#Available-solvers-in-Manopt.jl","page":"List of Solvers","title":"Available solvers in Manopt.jl","text":"","category":"section"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"CurrentModule = Manopt","category":"page"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"Optimisation problems can be classified with respect to several criteria. The following list of algorithms is grouped with respect to the “information” available about an optimisation problem","category":"page"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"\\operatorname*{argmin}_{p ∈ \\mathbb M} f(p)","category":"page"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"Within each group short notes on advantages of the individual solvers, and required properties the cost f should have, are provided. 
In that list a 🏅 is used to indicate state-of-the-art solvers that usually perform best in their corresponding group, and 🫏 for a maybe not so fast, maybe not so state-of-the-art method that nevertheless gets the job done most reliably.","category":"page"},{"location":"solvers/#Derivative-free","page":"List of Solvers","title":"Derivative free","text":"","category":"section"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"For derivative-free methods, only function evaluations of f are used.","category":"page"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"Nelder-Mead, a simplex-based variant using d+1 points, where d is the dimension of the manifold.\nParticle Swarm 🫏 uses the evolution of a set of points, called swarm, to explore the domain of the cost and find a minimizer.\nCMA-ES uses a stochastic evolutionary strategy to perform minimization robust to local minima of the objective.","category":"page"},{"location":"solvers/#First-order","page":"List of Solvers","title":"First order","text":"","category":"section"},{"location":"solvers/#Gradient","page":"List of Solvers","title":"Gradient","text":"","category":"section"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"Gradient Descent uses the gradient of f to determine a descent direction. 
Here, the direction can also be changed to be Averaged, Momentum-based, or based on Nesterov's rule.\nConjugate Gradient Descent uses information from the previous descent direction to improve the current (gradient-based) one, including several such update rules.\nThe Quasi-Newton Method 🏅 uses gradient evaluations to approximate the Hessian, which is then used in a Newton-like scheme, where both a limited-memory and a full Hessian approximation are available with several different update rules.\nSteihaug-Toint Truncated Conjugate-Gradient Method a solver for a constrained problem defined on a tangent space.","category":"page"},{"location":"solvers/#Subgradient","page":"List of Solvers","title":"Subgradient","text":"","category":"section"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"The following methods require a Riemannian subgradient of f to be available. While the subgradient might be set-valued, the function should provide one of these subgradients.","category":"page"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"The Subgradient Method takes the negative subgradient as a step direction and can be combined with a step size.\nThe Convex Bundle Method (CBM) uses a collection of subgradients from previous iterates and iterate candidates to build a local approximation to f, solving a quadratic problem in the tangent space in every iteration.\nThe Proximal Bundle Method works similarly to CBM, but solves a proximal map-based problem in every iteration.","category":"page"},{"location":"solvers/#Second-order","page":"List of Solvers","title":"Second order","text":"","category":"section"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"Adaptive Regularisation with Cubics 🏅 locally builds a cubic model to determine the next descent direction.\nThe Riemannian Trust-Regions Solver builds a quadratic model within a trust region to determine the next descent 
direction.","category":"page"},{"location":"solvers/#Splitting-based","page":"List of Solvers","title":"Splitting based","text":"","category":"section"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"For splitting methods, the algorithms are based on splitting the cost into different parts, usually into a sum of two or more summands. This is well tailored to non-smooth objectives.","category":"page"},{"location":"solvers/#Smooth","page":"List of Solvers","title":"Smooth","text":"","category":"section"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"The following methods require that the splitting, for example into several summands, is smooth in the sense that for every summand of the cost, the gradient should still exist everywhere.","category":"page"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"Levenberg-Marquardt minimizes the squared norm of f: mathcal M → ℝ^d, provided the gradients of the component functions, or in other words the Jacobian of f.\nStochastic Gradient Descent is based on a splitting of f into a sum of several components f_i whose gradients are provided. Steps are performed according to gradients of randomly selected components.\nThe Alternating Gradient Descent alternates gradient descent steps on the components of the product manifold. 
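The splitting-based call for Stochastic Gradient Descent can be sketched as follows; this is a hedged illustration where the data and the cost components are made up, and only stochastic_gradient_descent, Sphere, and project are actual names from Manopt.jl and Manifolds.jl:

```julia
using Manopt, Manifolds

M = Sphere(2)
data = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]  # illustrative data points
# Riemannian gradients of the (illustrative) summands f_i(p) = -⟨p, d_i⟩,
# obtained by projecting the Euclidean gradient -d_i onto the tangent space at p
grad_fi = [(M, p) -> project(M, p, -d) for d in data]
p0 = [1.0, 0.0, 0.0]
# steps follow the gradient of a randomly selected component f_i
q = stochastic_gradient_descent(M, grad_fi, p0)
```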
All these components should be smooth, since it is required that their gradients exist and that each is (locally) convex.","category":"page"},{"location":"solvers/#Nonsmooth","page":"List of Solvers","title":"Nonsmooth","text":"","category":"section"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"If the gradient does not exist everywhere, that is, if the splitting yields summands that are nonsmooth, usually methods based on proximal maps are used.","category":"page"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"The Chambolle-Pock algorithm uses a splitting f(p) = F(p) + G(Λ(p)), where G is defined on a manifold mathcal N and the proximal map of its Fenchel dual is required. Both these functions can be non-smooth.\nThe Cyclic Proximal Point 🫏 uses proximal maps of the functions from the splitting of f into summands f_i.\nDifference of Convex Algorithm (DCA) uses a splitting of the (non-convex) function f = g - h into a difference of two functions; for each of these, access to the gradient of g and a subgradient of h is required to state a subproblem to be solved in every iteration.\nDifference of Convex Proximal Point uses a splitting of the (non-convex) function f = g - h into a difference of two functions; provided the proximal map of g and the subgradient of h, the next iterate is computed. 
Compared to DCA, the corresponding subproblem is here written in a form that yields the proximal map.\nDouglas—Rachford uses a splitting f(p) = F(p) + G(p) and their proximal maps to compute a minimizer of f, which can be non-smooth.\nPrimal-dual Riemannian semismooth Newton Algorithm extends Chambolle-Pock and additionally requires the differentials of the proximal maps.\nThe Proximal Point uses the proximal map of f iteratively.","category":"page"},{"location":"solvers/#Constrained","page":"List of Solvers","title":"Constrained","text":"","category":"section"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"Constrained problems of the form","category":"page"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"beginalign*\noperatorname*argmin_pmathbb M f(p)\ntextsuch that g(p) leq 0h(p) = 0\nendalign*","category":"page"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"For these you can use","category":"page"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"The Augmented Lagrangian Method (ALM), where both g and grad_g as well as h and grad_h are keyword arguments, and one of these pairs is mandatory.\nThe Exact Penalty Method (EPM) uses a penalty term instead of augmentation, but has the same interface as ALM.\nThe Interior Point Newton Method (IPM) rephrases the KKT system of a constrained problem as a Newton iteration performed in every iteration.\nFrank-Wolfe algorithm, where besides the gradient of f either a closed-form solution or a (maybe even automatically generated) subproblem solver for operatorname*argmin_q C operatornamegrad f(p_k) log_p_kq is required, where p_k is a fixed point on the manifold (changed in every iteration).","category":"page"},{"location":"solvers/#On-the-tangent-space","page":"List of Solvers","title":"On the tangent space","text":"","category":"section"},{"location":"solvers/","page":"List 
of Solvers","title":"List of Solvers","text":"Conjugate Residual a solver for a linear system mathcal AX + b = 0 on a tangent space.\nSteihaug-Toint Truncated Conjugate-Gradient Method a solver for a constrained problem defined on a tangent space.","category":"page"},{"location":"solvers/#Alphabetical-list-List-of-algorithms","page":"List of Solvers","title":"Alphabetical list List of algorithms","text":"","category":"section"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"Solver Function State\nAdaptive Regularisation with Cubics adaptive_regularization_with_cubics AdaptiveRegularizationState\nAugmented Lagrangian Method augmented_Lagrangian_method AugmentedLagrangianMethodState\nChambolle-Pock ChambollePock ChambollePockState\nConjugate Gradient Descent conjugate_gradient_descent ConjugateGradientDescentState\nConjugate Residual conjugate_residual ConjugateResidualState\nConvex Bundle Method convex_bundle_method ConvexBundleMethodState\nCyclic Proximal Point cyclic_proximal_point CyclicProximalPointState\nDifference of Convex Algorithm difference_of_convex_algorithm DifferenceOfConvexState\nDifference of Convex Proximal Point difference_of_convex_proximal_point DifferenceOfConvexProximalState\nDouglas—Rachford DouglasRachford DouglasRachfordState\nExact Penalty Method exact_penalty_method ExactPenaltyMethodState\nFrank-Wolfe algorithm Frank_Wolfe_method FrankWolfeState\nGradient Descent gradient_descent GradientDescentState\nInterior Point Newton interior_point_Newton \nLevenberg-Marquardt LevenbergMarquardt LevenbergMarquardtState\nNelder-Mead NelderMead NelderMeadState\nParticle Swarm particle_swarm ParticleSwarmState\nPrimal-dual Riemannian semismooth Newton Algorithm primal_dual_semismooth_Newton PrimalDualSemismoothNewtonState\nProximal Bundle Method proximal_bundle_method ProximalBundleMethodState\nProximal Point proximal_point ProximalPointState\nQuasi-Newton Method quasi_Newton QuasiNewtonState\nSteihaug-Toint Truncated 
Conjugate-Gradient Method truncated_conjugate_gradient_descent TruncatedConjugateGradientState\nSubgradient Method subgradient_method SubGradientMethodState\nStochastic Gradient Descent stochastic_gradient_descent StochasticGradientDescentState\nRiemannian Trust-Regions trust_regions TrustRegionsState","category":"page"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"Note that the solvers (their AbstractManoptSolverState, to be precise) can also be decorated to enhance your algorithm with general additional properties; see debug output and recording values. This is done using the debug= and record= keywords in the function calls. Similarly, a cache= keyword is available in any of the function calls that wraps the AbstractManoptProblem in a cache for certain parts of the objective.","category":"page"},{"location":"solvers/#Technical-details","page":"List of Solvers","title":"Technical details","text":"","category":"section"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"The main function a solver calls is","category":"page"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"solve!(p::AbstractManoptProblem, s::AbstractManoptSolverState)","category":"page"},{"location":"solvers/#Manopt.solve!-Tuple{AbstractManoptProblem, AbstractManoptSolverState}","page":"List of Solvers","title":"Manopt.solve!","text":"solve!(p::AbstractManoptProblem, s::AbstractManoptSolverState)\n\nrun the solver implemented for the AbstractManoptProblem p and the AbstractManoptSolverState s, employing initialize_solver!, step_solver!, as well as the stop_solver! of the solver.\n\n\n\n\n\n","category":"method"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"which is a framework that you in general should not change or redefine. 
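Schematically, this framework runs the usual initialize–iterate–stop loop; the following is only an illustrative sketch of that behaviour, not the literal implementation in Manopt.jl:

```julia
# Sketch of what solve!(problem, state) does conceptually;
# the actual Manopt.jl implementation may differ in details.
function sketch_solve!(problem, state)
    initialize_solver!(problem, state)
    k = 0
    # stop_solver! decides, via the stopping criterion, whether to continue
    while !stop_solver!(problem, state, k)
        k += 1
        step_solver!(problem, state, k)
    end
    return get_solver_return(state)
end
```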
It uses the following methods, which also need to be implemented for your own algorithm, if you want to provide one.","category":"page"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"initialize_solver!\nstep_solver!\nget_solver_result\nget_solver_return\nstop_solver!(p::AbstractManoptProblem, s::AbstractManoptSolverState, Any)","category":"page"},{"location":"solvers/#Manopt.initialize_solver!","page":"List of Solvers","title":"Manopt.initialize_solver!","text":"initialize_solver!(amp::AbstractManoptProblem, ams::AbstractManoptSolverState)\n\nInitialize the solver for the optimization AbstractManoptProblem amp by initializing the necessary values in the AbstractManoptSolverState ams.\n\n\n\n\n\ninitialize_solver!(amp::AbstractManoptProblem, dss::DebugSolverState)\n\nExtend the initialization of the solver by a hook to run the DebugAction that was added to the :Start entry of the debug lists. All others are triggered (with iteration number 0) to trigger possible resets.\n\n\n\n\n\ninitialize_solver!(ams::AbstractManoptProblem, rss::RecordSolverState)\n\nExtend the initialization of the solver by a hook to run records that were added to the :Start entry.\n\n\n\n\n\n","category":"function"},{"location":"solvers/#Manopt.step_solver!","page":"List of Solvers","title":"Manopt.step_solver!","text":"step_solver!(amp::AbstractManoptProblem, ams::AbstractManoptSolverState, k)\n\nDo one iteration step (the kth) for an AbstractManoptProblem amp by modifying the values in the AbstractManoptSolverState ams.\n\n\n\n\n\nstep_solver!(amp::AbstractManoptProblem, dss::DebugSolverState, k)\n\nExtend the kth step of the solver by a hook to run debug prints that were added to the :BeforeIteration and :Iteration entries of the debug lists.\n\n\n\n\n\nstep_solver!(amp::AbstractManoptProblem, rss::RecordSolverState, k)\n\nExtend the kth step of the solver by a hook to run records that were added to the :Iteration 
entry.\n\n\n\n\n\n","category":"function"},{"location":"solvers/#Manopt.get_solver_result","page":"List of Solvers","title":"Manopt.get_solver_result","text":"get_solver_result(ams::AbstractManoptSolverState)\nget_solver_result(tos::Tuple{AbstractManifoldObjective,AbstractManoptSolverState})\nget_solver_result(o::AbstractManifoldObjective, s::AbstractManoptSolverState)\n\nReturn the final result after all iterations, which is stored within the AbstractManoptSolverState ams that was modified during the iterations.\n\nIn case the objective is passed as well, by default the objective is ignored and the solver result for the state is returned.\n\n\n\n\n\n","category":"function"},{"location":"solvers/#Manopt.get_solver_return","page":"List of Solvers","title":"Manopt.get_solver_return","text":"get_solver_return(s::AbstractManoptSolverState)\nget_solver_return(o::AbstractManifoldObjective, s::AbstractManoptSolverState)\n\ndetermine the result value of a call to a solver. By default this returns the same as get_solver_result.\n\nget_solver_return(s::ReturnSolverState)\nget_solver_return(o::AbstractManifoldObjective, s::ReturnSolverState)\n\nreturn the internally stored state of the ReturnSolverState instead of the minimizer. 
This means that when the state is decorated like this, the user still has to call get_solver_result on the internal state separately.\n\nget_solver_return(o::ReturnManifoldObjective, s::AbstractManoptSolverState)\n\nreturn both the objective and the state as a tuple.\n\n\n\n\n\n","category":"function"},{"location":"solvers/#Manopt.stop_solver!-Tuple{AbstractManoptProblem, AbstractManoptSolverState, Any}","page":"List of Solvers","title":"Manopt.stop_solver!","text":"stop_solver!(amp::AbstractManoptProblem, ams::AbstractManoptSolverState, k)\n\ndepending on the current AbstractManoptProblem amp, the current state of the solver stored in the AbstractManoptSolverState ams, and the current iteration number k, this function determines whether to stop the solver, which by default means to call the internal StoppingCriterion ams.stop.\n\n\n\n\n\n","category":"method"},{"location":"solvers/#API-for-solvers","page":"List of Solvers","title":"API for solvers","text":"","category":"section"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"This is a short overview of the different types of high-level functions that are usually available for a solver. Assume the solver is called new_solver and requires a cost f and some first-order information df as well as a starting point p on M. f and df together form the objective, called obj.","category":"page"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"Then there are basically two different variants to call","category":"page"},{"location":"solvers/#The-easy-to-access-call","page":"List of Solvers","title":"The easy to access call","text":"","category":"section"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"new_solver(M, f, df, p=rand(M); kwargs...)\nnew_solver!(M, f, df, p; kwargs...)","category":"page"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"Here the starting point should be optional. 
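As a concrete sketch of this easy-to-access pattern with the existing gradient_descent solver (the cost and gradient below are illustrative, not part of the documentation):

```julia
using Manopt, Manifolds

M = Sphere(2)
f(M, p) = sum(p .^ 2 .* [1.0, 2.0, 3.0])                 # illustrative cost
grad_f(M, p) = project(M, p, 2 .* p .* [1.0, 2.0, 3.0])  # Riemannian gradient via projection
p0 = [1.0, 0.0, 0.0]

q = gradient_descent(M, f, grad_f, p0)   # easy-to-access call, p0 untouched
gradient_descent!(M, f, grad_f, p0)      # in-place variant: p0 is modified and mandatory
```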
Keyword arguments include the type of evaluation, decorators like debug= or record=, as well as algorithm-specific ones. If you provide an immutable point p, or the rand(M) point is immutable (like on the Circle()), this method should turn the point into a mutable one as well.","category":"page"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"The second variant works in place of p, so there it is mandatory.","category":"page"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"This first interface sets up the objective and passes all keywords on to the objective-based call.","category":"page"},{"location":"solvers/#Objective-based-calls-to-solvers","page":"List of Solvers","title":"Objective based calls to solvers","text":"","category":"section"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"new_solver(M, obj, p=rand(M); kwargs...)\nnew_solver!(M, obj, p; kwargs...)","category":"page"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"Here the objective is created beforehand, for example to compare different solvers on the same objective, and for the first variant the starting point is optional. Keyword arguments include decorators like debug= or record= as well as algorithm-specific ones.","category":"page"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"This variant would generate the problem and the state and verify the validity of all provided keyword arguments that affect the state. 
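A hedged sketch of the objective-based pattern, reusing one objective across two solvers; ManifoldGradientObjective is the Manopt.jl objective type for a cost/gradient pair, while the cost and gradient themselves are illustrative:

```julia
using Manopt, Manifolds

M = Sphere(2)
f(M, p) = p[1]^2                                 # illustrative cost
grad_f(M, p) = project(M, p, [2p[1], 0.0, 0.0])  # illustrative Riemannian gradient
obj = ManifoldGradientObjective(f, grad_f)

# the same objective can be passed to different solvers for comparison
q1 = gradient_descent(M, obj, [0.0, 0.0, 1.0])
q2 = quasi_Newton(M, obj, [0.0, 0.0, 1.0])
```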
Then it would call the iterate process.","category":"page"},{"location":"solvers/#Manual-calls","page":"List of Solvers","title":"Manual calls","text":"","category":"section"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"If you generate the corresponding problem and state as the previous step does, you can also use the third (lowest level) and just call","category":"page"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"solve!(problem, state)","category":"page"},{"location":"solvers/#Closed-form-subsolvers","page":"List of Solvers","title":"Closed-form subsolvers","text":"","category":"section"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"If a subsolver solution is available in closed form, ClosedFormSubSolverState is used to indicate that.","category":"page"},{"location":"solvers/","page":"List of Solvers","title":"List of Solvers","text":"Manopt.ClosedFormSubSolverState","category":"page"},{"location":"solvers/#Manopt.ClosedFormSubSolverState","page":"List of Solvers","title":"Manopt.ClosedFormSubSolverState","text":"ClosedFormSubSolverState{E<:AbstractEvaluationType} <: AbstractManoptSolverState\n\nSubsolver state indicating that a closed-form solution is available with AbstractEvaluationType E.\n\nConstructor\n\nClosedFormSubSolverState(; evaluation=AllocatingEvaluation())\n\n\n\n\n\n","category":"type"},{"location":"extensions/#Extensions","page":"Extensions","title":"Extensions","text":"","category":"section"},{"location":"extensions/#LineSearches.jl","page":"Extensions","title":"LineSearches.jl","text":"","category":"section"},{"location":"extensions/","page":"Extensions","title":"Extensions","text":"Manopt can be used with line search algorithms implemented in LineSearches.jl. 
This can be illustrated by the following example of optimizing Rosenbrock function constrained to the unit sphere.","category":"page"},{"location":"extensions/","page":"Extensions","title":"Extensions","text":"using Manopt, Manifolds, LineSearches\n\n# define objective function and its gradient\np = [1.0, 100.0]\nfunction rosenbrock(::AbstractManifold, x)\n val = zero(eltype(x))\n for i in 1:(length(x) - 1)\n val += (p[1] - x[i])^2 + p[2] * (x[i + 1] - x[i]^2)^2\n end\n return val\nend\nfunction rosenbrock_grad!(M::AbstractManifold, storage, x)\n storage .= 0.0\n for i in 1:(length(x) - 1)\n storage[i] += -2.0 * (p[1] - x[i]) - 4.0 * p[2] * (x[i + 1] - x[i]^2) * x[i]\n storage[i + 1] += 2.0 * p[2] * (x[i + 1] - x[i]^2)\n end\n project!(M, storage, x, storage)\n return storage\nend\n# define constraint\nn_dims = 5\nM = Manifolds.Sphere(n_dims)\n# set initial point\nx0 = vcat(zeros(n_dims - 1), 1.0)\n# use LineSearches.jl HagerZhang method with Manopt.jl quasiNewton solver\nls_hz = Manopt.LineSearchesStepsize(M, LineSearches.HagerZhang())\nx_opt = quasi_Newton(\n M,\n rosenbrock,\n rosenbrock_grad!,\n x0;\n stepsize=ls_hz,\n evaluation=InplaceEvaluation(),\n stopping_criterion=StopAfterIteration(1000) | StopWhenGradientNormLess(1e-6),\n return_state=true,\n)","category":"page"},{"location":"extensions/","page":"Extensions","title":"Extensions","text":"In general this defines the following new stepsize","category":"page"},{"location":"extensions/","page":"Extensions","title":"Extensions","text":"Manopt.LineSearchesStepsize","category":"page"},{"location":"extensions/#Manopt.LineSearchesStepsize","page":"Extensions","title":"Manopt.LineSearchesStepsize","text":"LineSearchesStepsize <: Stepsize\n\nWrapper for line searches available in the LineSearches.jl library.\n\nConstructors\n\nLineSearchesStepsize(M::AbstractManifold, linesearch; kwargs...\nLineSearchesStepsize(\n linesearch;\n retraction_method=ExponentialRetraction(),\n 
vector_transport_method=ParallelTransport(),\n)\n\nWrap linesearch (for example HagerZhang or MoreThuente). The initial step selection from Linesearches.jl is not yet supported and the value 1.0 is used.\n\nKeyword Arguments\n\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\n\n\n\n\n","category":"type"},{"location":"extensions/#Manifolds.jl","page":"Extensions","title":"Manifolds.jl","text":"","category":"section"},{"location":"extensions/","page":"Extensions","title":"Extensions","text":"Loading Manifolds.jl introduces the following additional functions","category":"page"},{"location":"extensions/","page":"Extensions","title":"Extensions","text":"Manopt.max_stepsize(::FixedRankMatrices, ::Any)\nManopt.max_stepsize(::Hyperrectangle, ::Any)\nManopt.max_stepsize(::TangentBundle, ::Any)\nmid_point","category":"page"},{"location":"extensions/#Manopt.max_stepsize-Tuple{FixedRankMatrices, Any}","page":"Extensions","title":"Manopt.max_stepsize","text":"max_stepsize(M::FixedRankMatrices, p)\n\nReturn a reasonable guess of maximum step size on FixedRankMatrices following the choice of typical distance in Matlab Manopt, the dimension of M. 
See this note\n\n\n\n\n\n","category":"method"},{"location":"extensions/#Manopt.max_stepsize-Tuple{Hyperrectangle, Any}","page":"Extensions","title":"Manopt.max_stepsize","text":"max_stepsize(M::Hyperrectangle, p)\n\nThe default maximum stepsize for Hyperrectangle manifold with corners is maximum of distances from p to each boundary.\n\n\n\n\n\n","category":"method"},{"location":"extensions/#Manopt.max_stepsize-Tuple{FiberBundle{𝔽, ManifoldsBase.TangentSpaceType, M} where {𝔽, M<:AbstractManifold{𝔽}}, Any}","page":"Extensions","title":"Manopt.max_stepsize","text":"max_stepsize(M::TangentBundle, p)\n\nTangent bundle has injectivity radius of either infinity (for flat manifolds) or 0 (for non-flat manifolds). This makes a guess of what a reasonable maximum stepsize on a tangent bundle might be.\n\n\n\n\n\n","category":"method"},{"location":"extensions/#ManifoldsBase.mid_point","page":"Extensions","title":"ManifoldsBase.mid_point","text":"mid_point(M, p, q, x)\nmid_point!(M, y, p, q, x)\n\nCompute the mid point between p and q. If there is more than one mid point of (not necessarily minimizing) geodesics (for example on the sphere), the one nearest to x is returned (in place of y).\n\n\n\n\n\n","category":"function"},{"location":"extensions/","page":"Extensions","title":"Extensions","text":"Internally, Manopt.jl provides the two additional functions to choose some Euclidean space when needed as","category":"page"},{"location":"extensions/","page":"Extensions","title":"Extensions","text":"Manopt.Rn\nManopt.Rn_default","category":"page"},{"location":"extensions/#Manopt.Rn","page":"Extensions","title":"Manopt.Rn","text":"Rn(args; kwargs...)\nRn(s::Symbol=:Manifolds, args; kwargs...)\n\nA small internal helper function to choose a Euclidean space. 
By default, this uses the DefaultManifold unless you load a more advanced Euclidean space like Euclidean from Manifolds.jl.\n\n\n\n\n\n","category":"function"},{"location":"extensions/#Manopt.Rn_default","page":"Extensions","title":"Manopt.Rn_default","text":"Rn_default()\n\nSpecify a default value to dispatch Rn on. This default is set to Manifolds, indicating that, when this package is loaded, it is the preferred package to ask for a vector space.\n\nThe default within Manopt.jl is to use the DefaultManifold from ManifoldsBase.jl. If you load Manifolds.jl, this switches to using Euclidean.\n\n\n\n\n\n","category":"function"},{"location":"extensions/#JuMP.jl","page":"Extensions","title":"JuMP.jl","text":"","category":"section"},{"location":"extensions/","page":"Extensions","title":"Extensions","text":"Manopt can be used via the JuMP.jl interface. The manifold is provided in the @variable macro. Note that, for now, only variables (points on manifolds) that are arrays are supported; structs do not yet work. The algebraic expression of the objective function is specified in the @objective macro. 
The descent_state_type attribute specifies the solver.","category":"page"},{"location":"extensions/","page":"Extensions","title":"Extensions","text":"using JuMP, Manopt, Manifolds\nmodel = Model(Manopt.Optimizer)\n# Change the solver with this option, `GradientDescentState` is the default\nset_attribute(\"descent_state_type\", GradientDescentState)\n@variable(model, U[1:2, 1:2] in Stiefel(2, 2), start = 1.0)\n@objective(model, Min, sum((A - U) .^ 2))\noptimize!(model)\nsolution_summary(model)","category":"page"},{"location":"extensions/#Interface-functions","page":"Extensions","title":"Interface functions","text":"","category":"section"},{"location":"extensions/","page":"Extensions","title":"Extensions","text":"Manopt.JuMP_ArrayShape\nManopt.JuMP_VectorizedManifold\nMOI.dimension(::Manopt.JuMP_VectorizedManifold)\nManopt.JuMP_Optimizer\nMOI.empty!(::Manopt.JuMP_Optimizer)\nMOI.supports(::Manopt.JuMP_Optimizer, ::MOI.RawOptimizerAttribute)\nMOI.get(::Manopt.JuMP_Optimizer, ::MOI.RawOptimizerAttribute)\nMOI.set(::Manopt.JuMP_Optimizer, ::MOI.RawOptimizerAttribute, ::Any)\nMOI.supports_incremental_interface(::Manopt.JuMP_Optimizer)\nMOI.copy_to(::Manopt.JuMP_Optimizer, ::MOI.ModelLike)\nMOI.supports_add_constrained_variables(::Manopt.JuMP_Optimizer, ::Type{<:Manopt.JuMP_VectorizedManifold})\nMOI.add_constrained_variables(::Manopt.JuMP_Optimizer, ::Manopt.JuMP_VectorizedManifold)\nMOI.is_valid(model::Manopt.JuMP_Optimizer, ::MOI.VariableIndex)\nMOI.get(model::Manopt.JuMP_Optimizer, ::MOI.NumberOfVariables)\nMOI.supports(::Manopt.JuMP_Optimizer, ::MOI.VariablePrimalStart, ::Type{MOI.VariableIndex})\nMOI.set(::Manopt.JuMP_Optimizer, ::MOI.VariablePrimalStart, ::MOI.VariableIndex, ::Union{Real,Nothing})\nMOI.set(::Manopt.JuMP_Optimizer, ::MOI.ObjectiveSense, ::MOI.OptimizationSense)\nMOI.set(::Manopt.JuMP_Optimizer, ::MOI.ObjectiveFunction{F}, ::F) where {F}\nMOI.supports(::Manopt.JuMP_Optimizer, 
::Union{MOI.ObjectiveSense,MOI.ObjectiveFunction})\nJuMP.build_variable(::Function, ::Any, ::Manopt.AbstractManifold)\nMOI.get(::Manopt.JuMP_Optimizer, ::MOI.ResultCount)\nMOI.get(::Manopt.JuMP_Optimizer, ::MOI.SolverName)\nMOI.get(::Manopt.JuMP_Optimizer, ::MOI.ObjectiveValue)\nMOI.get(::Manopt.JuMP_Optimizer, ::MOI.PrimalStatus)\nMOI.get(::Manopt.JuMP_Optimizer, ::MOI.DualStatus)\nMOI.get(::Manopt.JuMP_Optimizer, ::MOI.TerminationStatus)\nMOI.get(::Manopt.JuMP_Optimizer, ::MOI.SolverVersion)\nMOI.get(::Manopt.JuMP_Optimizer, ::MOI.ObjectiveSense)\nMOI.get(::Manopt.JuMP_Optimizer, ::MOI.VariablePrimal, ::MOI.VariableIndex)\nMOI.get(::Manopt.JuMP_Optimizer, ::MOI.RawStatusString)","category":"page"},{"location":"extensions/#Manopt.JuMP_ArrayShape","page":"Extensions","title":"Manopt.JuMP_ArrayShape","text":"struct ArrayShape{N} <: JuMP.AbstractShape\n\nShape of an Array{T,N} of size size.\n\n\n\n\n\n","category":"type"},{"location":"extensions/#Manopt.JuMP_VectorizedManifold","page":"Extensions","title":"Manopt.JuMP_VectorizedManifold","text":"struct VectorizedManifold{M} <: MOI.AbstractVectorSet\n manifold::M\nend\n\nRepresentation of points of manifold as a vector of R^n where n is MOI.dimension(VectorizedManifold(manifold)).\n\n\n\n\n\n","category":"type"},{"location":"extensions/#MathOptInterface.dimension-Tuple{ManoptJuMPExt.VectorizedManifold}","page":"Extensions","title":"MathOptInterface.dimension","text":"MOI.dimension(set::VectorizedManifold)\n\nReturn the representation side of points on the (vectorized in representation) manifold. As the MOI variables are real, this means if the representation_size yields (in product) n, this refers to the vectorized point / tangent vector from (a subset of ℝ^n).\n\n\n\n\n\n","category":"method"},{"location":"extensions/#Manopt.JuMP_Optimizer","page":"Extensions","title":"Manopt.JuMP_Optimizer","text":"Manopt.JuMP_Optimizer()\n\nCreates a new optimizer object for the MathOptInterface (MOI). 
An alias Manopt.JuMP_Optimizer is defined for convenience.\n\nThe minimization of a function f(X) of an array X[1:n1,1:n2,...] over a manifold M starting at X0, can be modeled as follows:\n\nusing JuMP\nmodel = Model(Manopt.JuMP_Optimizer)\n@variable(model, X[i1=1:n1,i2=1:n2,...] in M, start = X0[i1,i2,...])\n@objective(model, Min, f(X))\n\nThe optimizer assumes that M has a Array shape described by ManifoldsBase.representation_size.\n\n\n\n\n\n","category":"type"},{"location":"extensions/#MathOptInterface.empty!-Tuple{ManoptJuMPExt.Optimizer}","page":"Extensions","title":"MathOptInterface.empty!","text":"MOI.empty!(model::ManoptJuMPExt.Optimizer)\n\nClear all model data from model but keep the options set.\n\n\n\n\n\n","category":"method"},{"location":"extensions/#MathOptInterface.supports-Tuple{ManoptJuMPExt.Optimizer, MathOptInterface.RawOptimizerAttribute}","page":"Extensions","title":"MathOptInterface.supports","text":"MOI.supports(::Optimizer, attr::MOI.RawOptimizerAttribute)\n\nReturn a Bool indicating whether attr.name is a valid option name for Manopt.\n\n\n\n\n\n","category":"method"},{"location":"extensions/#MathOptInterface.get-Tuple{ManoptJuMPExt.Optimizer, MathOptInterface.RawOptimizerAttribute}","page":"Extensions","title":"MathOptInterface.get","text":"MOI.get(model::Optimizer, attr::MOI.RawOptimizerAttribute)\n\nReturn last value set by MOI.set(model, attr, value).\n\n\n\n\n\n","category":"method"},{"location":"extensions/#MathOptInterface.set-Tuple{ManoptJuMPExt.Optimizer, MathOptInterface.RawOptimizerAttribute, Any}","page":"Extensions","title":"MathOptInterface.set","text":"MOI.get(model::Optimizer, attr::MOI.RawOptimizerAttribute)\n\nSet the value for the keyword argument attr.name to give for the constructor 
model.options[DESCENT_STATE_TYPE].\n\n\n\n\n\n","category":"method"},{"location":"extensions/#MathOptInterface.supports_incremental_interface-Tuple{ManoptJuMPExt.Optimizer}","page":"Extensions","title":"MathOptInterface.supports_incremental_interface","text":"MOI.supports_incremental_interface(::JuMP_Optimizer)\n\nReturn true indicating that Manopt.JuMP_Optimizer implements MOI.add_constrained_variables and MOI.set for MOI.ObjectiveFunction so it can be used with JuMP.direct_model and does not require a MOI.Utilities.CachingOptimizer. See MOI.supports_incremental_interface.\n\n\n\n\n\n","category":"method"},{"location":"extensions/#MathOptInterface.copy_to-Tuple{ManoptJuMPExt.Optimizer, MathOptInterface.ModelLike}","page":"Extensions","title":"MathOptInterface.copy_to","text":"MOI.copy_to(dest::Optimizer, src::MOI.ModelLike)\n\nBecause supports_incremental_interface(dest) is true, this simply uses MOI.Utilities.default_copy_to and copies the variables with MOI.add_constrained_variables and the objective sense with MOI.set.\n\n\n\n\n\n","category":"method"},{"location":"extensions/#MathOptInterface.supports_add_constrained_variables-Tuple{ManoptJuMPExt.Optimizer, Type{<:ManoptJuMPExt.VectorizedManifold}}","page":"Extensions","title":"MathOptInterface.supports_add_constrained_variables","text":"MOI.supports_add_constrained_variables(::JuMP_Optimizer, ::Type{<:VectorizedManifold})\n\nReturn true indicating that Manopt.JuMP_Optimizer support optimization on variables constrained to belong in a vectorized manifold Manopt.JuMP_VectorizedManifold.\n\n\n\n\n\n","category":"method"},{"location":"extensions/#MathOptInterface.add_constrained_variables-Tuple{ManoptJuMPExt.Optimizer, ManoptJuMPExt.VectorizedManifold}","page":"Extensions","title":"MathOptInterface.add_constrained_variables","text":"MOI.add_constrained_variables(model::Optimizer, set::VectorizedManifold)\n\nAdd MOI.dimension(set) variables constrained in set and return the list of variable indices that can be 
used to reference them, as well as a constraint index for the constraint enforcing the membership of the variables in the Manopt.JuMP_VectorizedManifold set.\n\n\n\n\n\n","category":"method"},{"location":"extensions/#MathOptInterface.is_valid-Tuple{ManoptJuMPExt.Optimizer, MathOptInterface.VariableIndex}","page":"Extensions","title":"MathOptInterface.is_valid","text":"MOI.is_valid(model::Optimizer, vi::MOI.VariableIndex)\n\nReturn whether vi is a valid variable index.\n\n\n\n\n\n","category":"method"},{"location":"extensions/#MathOptInterface.get-Tuple{ManoptJuMPExt.Optimizer, MathOptInterface.NumberOfVariables}","page":"Extensions","title":"MathOptInterface.get","text":"MOI.get(model::Optimizer, ::MOI.NumberOfVariables)\n\nReturn the number of variables added in the model; this corresponds to the MOI.dimension of the Manopt.JuMP_VectorizedManifold.\n\n\n\n\n\n","category":"method"},{"location":"extensions/#MathOptInterface.supports-Tuple{ManoptJuMPExt.Optimizer, MathOptInterface.VariablePrimalStart, Type{MathOptInterface.VariableIndex}}","page":"Extensions","title":"MathOptInterface.supports","text":"MOI.supports(::Manopt.JuMP_Optimizer, ::MOI.VariablePrimalStart, ::Type{MOI.VariableIndex})\n\nReturn true indicating that Manopt.JuMP_Optimizer supports starting values for the variables.\n\n\n\n\n\n","category":"method"},{"location":"extensions/#MathOptInterface.set-Tuple{ManoptJuMPExt.Optimizer, MathOptInterface.VariablePrimalStart, MathOptInterface.VariableIndex, Union{Nothing, Real}}","page":"Extensions","title":"MathOptInterface.set","text":"function MOI.set(\n model::Optimizer,\n ::MOI.VariablePrimalStart,\n vi::MOI.VariableIndex,\n value::Union{Real,Nothing},\n)\n\nSet the starting value of the variable of index vi to value. Note that if value is nothing then it essentially unsets any previously set starting value, and hence a default is used by MOI.optimize! 
unless another starting value is set.\n\n\n\n\n\n","category":"method"},{"location":"extensions/#MathOptInterface.set-Tuple{ManoptJuMPExt.Optimizer, MathOptInterface.ObjectiveSense, MathOptInterface.OptimizationSense}","page":"Extensions","title":"MathOptInterface.set","text":"MOI.set(model::Optimizer, ::MOI.ObjectiveSense, sense::MOI.OptimizationSense)\n\nModify the objective sense to either MOI.MAX_SENSE, MOI.MIN_SENSE or MOI.FEASIBILITY_SENSE.\n\n\n\n\n\n","category":"method"},{"location":"extensions/#MathOptInterface.set-Union{Tuple{F}, Tuple{ManoptJuMPExt.Optimizer, MathOptInterface.ObjectiveFunction{F}, F}} where F","page":"Extensions","title":"MathOptInterface.set","text":"MOI.set(model::Optimizer, ::MOI.ObjectiveFunction{F}, func::F) where {F}\n\nSet the objective function as func for model.\n\n\n\n\n\n","category":"method"},{"location":"extensions/#MathOptInterface.supports-Tuple{ManoptJuMPExt.Optimizer, Union{MathOptInterface.ObjectiveSense, MathOptInterface.ObjectiveFunction}}","page":"Extensions","title":"MathOptInterface.supports","text":"MOI.supports(::Optimizer, ::Union{MOI.ObjectiveSense,MOI.ObjectiveFunction})\n\nReturn true indicating that Optimizer supports setting the objective sense (that is, min, max or feasibility) and the objective function.\n\n\n\n\n\n","category":"method"},{"location":"extensions/#JuMP.build_variable-Tuple{Function, Any, AbstractManifold}","page":"Extensions","title":"JuMP.build_variable","text":"JuMP.build_variable(::Function, func, m::ManifoldsBase.AbstractManifold)\n\nBuild a JuMP.VariablesConstrainedOnCreation object containing variables and the Manopt.JuMP_VectorizedManifold to which they should belong, as well as the shape that can be used to go from the vectorized MOI representation to the shape of the manifold, that is, Manopt.JuMP_ArrayShape.\n\n\n\n\n\n","category":"method"},{"location":"extensions/#MathOptInterface.get-Tuple{ManoptJuMPExt.Optimizer, 
MathOptInterface.ResultCount}","page":"Extensions","title":"MathOptInterface.get","text":"MOI.get(model::Optimizer, ::MOI.ResultCount)\n\nReturn 0 if optimize! hasn't been called yet and 1 otherwise, indicating that one solution is available.\n\n\n\n\n\n","category":"method"},{"location":"extensions/#MathOptInterface.get-Tuple{ManoptJuMPExt.Optimizer, MathOptInterface.SolverName}","page":"Extensions","title":"MathOptInterface.get","text":"MOI.get(::Optimizer, ::MOI.SolverName)\n\nReturn the name of the Optimizer with the value of the descent_state_type option.\n\n\n\n\n\n","category":"method"},{"location":"extensions/#MathOptInterface.get-Tuple{ManoptJuMPExt.Optimizer, MathOptInterface.ObjectiveValue}","page":"Extensions","title":"MathOptInterface.get","text":"MOI.get(model::Optimizer, attr::MOI.ObjectiveValue)\n\nReturn the value of the objective function evaluated at the solution.\n\n\n\n\n\n","category":"method"},{"location":"extensions/#MathOptInterface.get-Tuple{ManoptJuMPExt.Optimizer, MathOptInterface.PrimalStatus}","page":"Extensions","title":"MathOptInterface.get","text":"MOI.get(model::Optimizer, ::MOI.PrimalStatus)\n\nReturn MOI.NO_SOLUTION if optimize! hasn't been called yet and MOI.FEASIBLE_POINT otherwise, indicating that a solution is available to query with MOI.VariablePrimalStart.\n\n\n\n\n\n","category":"method"},{"location":"extensions/#MathOptInterface.get-Tuple{ManoptJuMPExt.Optimizer, MathOptInterface.DualStatus}","page":"Extensions","title":"MathOptInterface.get","text":"MOI.get(::Optimizer, ::MOI.DualStatus)\n\nReturn MOI.NO_SOLUTION, indicating that there is no dual solution available.\n\n\n\n\n\n","category":"method"},{"location":"extensions/#MathOptInterface.get-Tuple{ManoptJuMPExt.Optimizer, MathOptInterface.TerminationStatus}","page":"Extensions","title":"MathOptInterface.get","text":"MOI.get(model::Optimizer, ::MOI.TerminationStatus)\n\nReturn MOI.OPTIMIZE_NOT_CALLED if optimize! 
hasn't been called yet and MOI.LOCALLY_SOLVED otherwise, indicating that the solver has solved the problem to local optimality. See the value of MOI.RawStatusString for more details on why the solver stopped.\n\n\n\n\n\n","category":"method"},{"location":"extensions/#MathOptInterface.get-Tuple{ManoptJuMPExt.Optimizer, MathOptInterface.SolverVersion}","page":"Extensions","title":"MathOptInterface.get","text":"MOI.get(::Optimizer, ::MOI.SolverVersion)\n\nReturn the version of the Manopt solver; it corresponds to the version of Manopt.jl.\n\n\n\n\n\n","category":"method"},{"location":"extensions/#MathOptInterface.get-Tuple{ManoptJuMPExt.Optimizer, MathOptInterface.ObjectiveSense}","page":"Extensions","title":"MathOptInterface.get","text":"MOI.get(model::Optimizer, ::MOI.ObjectiveSense)\n\nReturn the objective sense, which defaults to MOI.FEASIBILITY_SENSE if no sense has been set.\n\n\n\n\n\n","category":"method"},{"location":"extensions/#MathOptInterface.get-Tuple{ManoptJuMPExt.Optimizer, MathOptInterface.VariablePrimal, MathOptInterface.VariableIndex}","page":"Extensions","title":"MathOptInterface.get","text":"MOI.get(model::Optimizer, attr::MOI.VariablePrimal, vi::MOI.VariableIndex)\n\nReturn the value of the solution for the variable of index vi.\n\n\n\n\n\n","category":"method"},{"location":"extensions/#MathOptInterface.get-Tuple{ManoptJuMPExt.Optimizer, MathOptInterface.RawStatusString}","page":"Extensions","title":"MathOptInterface.get","text":"MOI.get(model::Optimizer, ::MOI.RawStatusString)\n\nReturn a String containing Manopt.get_reason without the ending newline character.\n\n\n\n\n\n","category":"method"},{"location":"tutorials/ImplementOwnManifold/#Optimize-on-your-own-manifold","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"","category":"section"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"Ronny 
Bergmann","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"CurrentModule = Manopt","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"When you have used a few solvers from Manopt.jl, for example as in the opening tutorial 🏔️ Get started: optimize!, and have also familiarized yourself with how to work with manifolds in general at 🚀 Get Started with Manifolds.jl, you might reach the point where you want to implement a manifold yourself and use it within Manopt.jl. A challenge might be determining which functions are necessary, since you might not need the complete interface of ManifoldsBase.jl.","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"This tutorial aims to help you through these steps to implement the necessary parts of a manifold to get started with the solver you have in mind.","category":"page"},{"location":"tutorials/ImplementOwnManifold/#An-example-problem","page":"Optimize on your own manifold","title":"An example problem","text":"","category":"section"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"We get started by loading the packages we need.","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"using LinearAlgebra, Manifolds, ManifoldsBase, Random\nusing Manopt\nRandom.seed!(42)","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"We also define the same manifold as in the implementing a manifold 
tutorial.","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"\"\"\"\n ScaledSphere <: AbstractManifold{ℝ}\n\nDefine a sphere of fixed radius\n\n# Fields\n\n* `dimension` dimension of the sphere\n* `radius` the radius of the sphere\n\n# Constructor\n\n ScaledSphere(dimension,radius)\n\nInitialize the manifold to a certain `dimension` and `radius`,\nwhich by default is set to `1.0`\n\"\"\"\nstruct ScaledSphere <: AbstractManifold{ℝ}\n dimension::Int\n radius::Float64\nend","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"We would like to compute a mean and/or median similar to 🏔️ Get started: optimize!. For a given set of points q_1ldotsq_n we want to compute [Kar77]","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":" operatorname*argmin_pmathcal M\n frac12n sum_i=1^n d_mathcal M^2(p q_i)","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"on the ScaledSphere we just defined. 
We define a few parameters first","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"d = 5 # dimension of the sphere - embedded in R^{d+1}\nr = 2.0 # radius of the sphere\nN = 100 # data set size\n\nM = ScaledSphere(d,r)","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"ScaledSphere(5, 2.0)","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"If we generate a few points","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"# generate 100 points around the north pole\npts = [ [zeros(d)..., M.radius] .+ 0.5.*([rand(d)...,0.5] .- 0.5) for _=1:N]\n# project them onto the r-sphere\npts = [ r/norm(p) .* p for p in pts]","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"Then, before starting with optimization, we need the distance on the manifold to define the cost function, as well as the logarithmic map to define the gradient. For both, we here use the “lazy” approach of using the Sphere as a fallback. Finally, we have to provide information about how points and tangent vectors are stored on the manifold by implementing their representation_size function, which is often required when allocating memory. 
While we could","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"import ManifoldsBase: distance, log, representation_size\nfunction distance(M::ScaledSphere, p, q)\n return M.radius * distance(Sphere(M.dimension), p ./ M.radius, q ./ M.radius)\nend\nfunction log(M::ScaledSphere, p, q)\n return M.radius * log(Sphere(M.dimension), p ./ M.radius, q ./ M.radius)\nend\nrepresentation_size(M::ScaledSphere) = (M.dimension+1,)","category":"page"},{"location":"tutorials/ImplementOwnManifold/#Define-the-cost-and-gradient","page":"Optimize on your own manifold","title":"Define the cost and gradient","text":"","category":"section"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"f(M, q) = sum(distance(M, q, p)^2 for p in pts)\ngrad_f(M,q) = sum( - log(M, q, p) for p in pts)","category":"page"},{"location":"tutorials/ImplementOwnManifold/#Defining-the-necessary-functions-to-run-a-solver","page":"Optimize on your own manifold","title":"Defining the necessary functions to run a solver","text":"","category":"section"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"The documentation usually lists the necessary functions in a section “Technical Details” close to the end of the documentation of a solver; for our case that is The gradient descent’s Technical Details.","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"They list all details, but we can even start step by step here if we are a bit careful.","category":"page"},{"location":"tutorials/ImplementOwnManifold/#A-retraction","page":"Optimize on your own manifold","title":"A 
retraction","text":"","category":"section"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"We first implement a retraction. Informally, given a current point and a direction to “walk into” we need a function that performs that walk. Since we take an easy one that just projects onto the sphere, we use the ProjectionRetraction type. To be precise, we have to implement the in-place variant retract_project!","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"import ManifoldsBase: retract_project!\nfunction retract_project!(M::ScaledSphere, q, p, X, t::Number)\n q .= p .+ t .* X\n q .*= M.radius / norm(q)\n return q\nend","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"retract_project! (generic function with 19 methods)","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"The other two technical remarks refer to the step size and the stopping criterion, so if we set these to something simpler, we should already be able to do a first run.","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"We have to specify","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"that we want to use the new retraction,\na simple step size and stopping criterion","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"We start with a certain point of 
cost","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"p0 = [zeros(d)...,1.0]\nf(M,p0)","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"444.60374551157634","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"Then we can run our first solver, where we have to overwrite a few defaults, which would use functions we do not (yet) have. Let’s discuss these in the next steps.","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"q1 = gradient_descent(M, f, grad_f, p0;\n retraction_method = ProjectionRetraction(), # state that we use the retraction from above\n stepsize = DecreasingLength(M; length=1.0), # A simple step size\n stopping_criterion = StopAfterIteration(10), # A simple stopping criterion\n X = zeros(d+1), # how we define/represent tangent vectors\n)\nf(M,q1)","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"162.4000287847332","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"We at least see that the function value decreased.","category":"page"},{"location":"tutorials/ImplementOwnManifold/#Norm-and-maximal-step-size","page":"Optimize on your own manifold","title":"Norm and maximal step size","text":"","category":"section"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"To use more advanced stopping criteria and step sizes we first need an inner product inner(M, p, X, Y). 
We also need a max_stepsize(M) to avoid taking too large steps on positively curved manifolds like our scaled sphere in this example.","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"import ManifoldsBase: inner\nimport Manopt: max_stepsize\ninner(M::ScaledSphere, p, X,Y) = dot(X,Y) # inherited from the embedding\n # set the maximal allowed stepsize to the injectivity radius.\nManopt.max_stepsize(M::ScaledSphere) = M.radius*π","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"Then we can use the default step size (ArmijoLinesearch) and the default stopping criterion, which checks for a small gradient norm.","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"q2 = gradient_descent(M, f, grad_f, p0;\n retraction_method = ProjectionRetraction(), # as before\n X = zeros(d+1), # as before\n)\nf(M, q2)","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"9.772830131357034","category":"page"},{"location":"tutorials/ImplementOwnManifold/#Making-life-easier:-default-retraction-and-zero-vector","page":"Optimize on your own manifold","title":"Making life easier: default retraction and zero vector","text":"","category":"section"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"To initialize tangent vector memory, the function zero_vector(M,p) is called. 
Similarly, the most-used retraction is returned by default_retraction_method","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"We can use both here, to make subsequent calls to the solver less verbose. We define","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"import ManifoldsBase: zero_vector, default_retraction_method\nzero_vector(M::ScaledSphere, p) = zeros(M.dimension+1)\ndefault_retraction_method(M::ScaledSphere) = ProjectionRetraction()","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"default_retraction_method (generic function with 19 methods)","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"and now we can even just call","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"q3 = gradient_descent(M, f, grad_f, p0)\nf(M, q3)","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"9.772830131357034","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"But we for example automatically also get the possibility to obtain debug information like","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"gradient_descent(M, f, grad_f, p0; debug = [:Iteration, :Cost, :Stepsize, 25, :GradientNorm, :Stop, 
\"\\n\"]);","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"Initial f(x): 444.603746\n# 25 f(x): 9.772833s:0.018299583806109226|grad f(p)|:0.020516914880881486\n# 50 f(x): 9.772830s:0.018299583806109226|grad f(p)|:0.00013449321419330018\nThe algorithm reached approximately critical point after 72 iterations; the gradient norm (9.20733514568335e-9) is less than 1.0e-8.","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"see How to Print Debug Output for more details.","category":"page"},{"location":"tutorials/ImplementOwnManifold/#Technical-details","page":"Optimize on your own manifold","title":"Technical details","text":"","category":"section"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"This tutorial is cached. It was last run on the following package versions.","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"using Pkg\nPkg.status()","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"Status `~/work/Manopt.jl/Manopt.jl/tutorials/Project.toml`\n [6e4b80f9] BenchmarkTools v1.5.0\n⌅ [5ae59095] Colors v0.12.11\n [31c24e10] Distributions v0.25.113\n [26cc04aa] FiniteDifferences v0.12.32\n [7073ff75] IJulia v1.26.0\n [8ac3fa9e] LRUCache v1.6.1\n [af67fdf4] ManifoldDiff v0.3.13\n [1cead3c2] Manifolds v0.10.7\n [3362f125] ManifoldsBase v0.15.22\n [0fc0a36d] Manopt v0.5.3 `..`\n [91a5bcdd] Plots v1.40.9\n [731186ca] RecursiveArrayTools v3.27.4\nInfo Packages marked with ⌅ have new versions available but compatibility constraints restrict them from upgrading. 
To see why use `status --outdated`","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"using Dates\nnow()","category":"page"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"2024-11-21T20:39:21.777","category":"page"},{"location":"tutorials/ImplementOwnManifold/#Literature","page":"Optimize on your own manifold","title":"Literature","text":"","category":"section"},{"location":"tutorials/ImplementOwnManifold/","page":"Optimize on your own manifold","title":"Optimize on your own manifold","text":"H. Karcher. Riemannian center of mass and mollifier smoothing. Communications on Pure and Applied Mathematics 30, 509–541 (1977).\n\n\n\n","category":"page"},{"location":"solvers/subgradient/#sec-subgradient-method","page":"Subgradient method","title":"Subgradient method","text":"","category":"section"},{"location":"solvers/subgradient/","page":"Subgradient method","title":"Subgradient method","text":"subgradient_method\nsubgradient_method!","category":"page"},{"location":"solvers/subgradient/#Manopt.subgradient_method","page":"Subgradient method","title":"Manopt.subgradient_method","text":"subgradient_method(M, f, ∂f, p=rand(M); kwargs...)\nsubgradient_method(M, sgo, p=rand(M); kwargs...)\nsubgradient_method!(M, f, ∂f, p; kwargs...)\nsubgradient_method!(M, sgo, p; kwargs...)\n\nperform a subgradient method p^(k+1) = operatornameretrbigl(p^(k) s^(k)f(p^(k))bigr), where operatornameretr is a retraction, s^(k) is a step size.\n\nThough the subgradient might be set valued, the argument ∂f should always return one element from the subgradient, but not necessarily deterministic. 
For more details see [FO98].\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\n∂f: the (sub)gradient f mathcal M Tmathcal M of f\np: a point on the manifold mathcal M\n\nalternatively to f and ∂f a ManifoldSubgradientObjective sgo can be provided.\n\nKeyword arguments\n\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstepsize=default_stepsize(M, SubGradientMethodState): a functor inheriting from Stepsize to determine a step size\nstopping_criterion=StopAfterIteration(5000): a functor indicating that the stopping criterion is fulfilled\nX=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal Mto specify the representation of a tangent vector\n\nand the ones that are passed to decorate_state! 
for decorators.\n\nOutput\n\nthe obtained (approximate) minimizer p^*, see get_solver_return for details\n\n\n\n\n\n","category":"function"},{"location":"solvers/subgradient/#Manopt.subgradient_method!","page":"Subgradient method","title":"Manopt.subgradient_method!","text":"subgradient_method(M, f, ∂f, p=rand(M); kwargs...)\nsubgradient_method(M, sgo, p=rand(M); kwargs...)\nsubgradient_method!(M, f, ∂f, p; kwargs...)\nsubgradient_method!(M, sgo, p; kwargs...)\n\nperform a subgradient method p^(k+1) = operatornameretrbigl(p^(k) s^(k)f(p^(k))bigr), where operatornameretr is a retraction, s^(k) is a step size.\n\nThough the subgradient might be set valued, the argument ∂f should always return one element from the subgradient, but not necessarily deterministic. For more details see [FO98].\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\n∂f: the (sub)gradient f mathcal M Tmathcal M of f\np: a point on the manifold mathcal M\n\nalternatively to f and ∂f a ManifoldSubgradientObjective sgo can be provided.\n\nKeyword arguments\n\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). 
Since usually the first argument is the manifold, the modified argument is the second.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstepsize=default_stepsize(M, SubGradientMethodState): a functor inheriting from Stepsize to determine a step size\nstopping_criterion=StopAfterIteration(5000): a functor indicating that the stopping criterion is fulfilled\nX=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal Mto specify the representation of a tangent vector\n\nand the ones that are passed to decorate_state! for decorators.\n\nOutput\n\nthe obtained (approximate) minimizer p^*, see get_solver_return for details\n\n\n\n\n\n","category":"function"},{"location":"solvers/subgradient/#State","page":"Subgradient method","title":"State","text":"","category":"section"},{"location":"solvers/subgradient/","page":"Subgradient method","title":"Subgradient method","text":"SubGradientMethodState","category":"page"},{"location":"solvers/subgradient/#Manopt.SubGradientMethodState","page":"Subgradient method","title":"Manopt.SubGradientMethodState","text":"SubGradientMethodState <: AbstractManoptSolverState\n\nstores option values for a subgradient_method solver\n\nFields\n\np::P: a point on the manifold mathcal Mstoring the current iterate\np_star: optimal value\nretraction_method::AbstractRetractionMethod: a retraction operatornameretr to use, see the section on retractions\nstepsize::Stepsize: a functor inheriting from Stepsize to determine a step size\nstop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled\nX: the current element from the possible subgradients at p that was last evaluated.\n\nConstructor\n\nSubGradientMethodState(M::AbstractManifold; kwargs...)\n\nInitialise the Subgradient method state\n\nKeyword arguments\n\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on 
retractions\np=rand(M): a point on the manifold mathcal Mto specify the initial value\nstepsize=default_stepsize(M, SubGradientMethodState): a functor inheriting from Stepsize to determine a step size\nstopping_criterion=StopAfterIteration(5000): a functor indicating that the stopping criterion is fulfilled\nX=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal Mto specify the representation of a tangent vector\n\n\n\n\n\n","category":"type"},{"location":"solvers/subgradient/","page":"Subgradient method","title":"Subgradient method","text":"For DebugActions and RecordActions to record (sub)gradient, its norm and the step sizes, see the gradient descent actions.","category":"page"},{"location":"solvers/subgradient/#sec-sgm-technical-details","page":"Subgradient method","title":"Technical details","text":"","category":"section"},{"location":"solvers/subgradient/","page":"Subgradient method","title":"Subgradient method","text":"The subgradient_method solver requires the following functions of a manifold to be available","category":"page"},{"location":"solvers/subgradient/","page":"Subgradient method","title":"Subgradient method","text":"A retract!(M, q, p, X); it is recommended to set the default_retraction_method to a favourite retraction. If this default is set, a retraction_method= does not have to be specified.","category":"page"},{"location":"solvers/subgradient/#Literature","page":"Subgradient method","title":"Literature","text":"","category":"section"},{"location":"solvers/subgradient/","page":"Subgradient method","title":"Subgradient method","text":"O. Ferreira and P. R. Oliveira. Subgradient algorithm on Riemannian manifolds. 
Journal of Optimization Theory and Applications 97, 93–104 (1998).\n\n\n\n","category":"page"},{"location":"solvers/augmented_Lagrangian_method/#Augmented-Lagrangian-method","page":"Augmented Lagrangian Method","title":"Augmented Lagrangian method","text":"","category":"section"},{"location":"solvers/augmented_Lagrangian_method/","page":"Augmented Lagrangian Method","title":"Augmented Lagrangian Method","text":"CurrentModule = Manopt","category":"page"},{"location":"solvers/augmented_Lagrangian_method/","page":"Augmented Lagrangian Method","title":"Augmented Lagrangian Method","text":" augmented_Lagrangian_method\n augmented_Lagrangian_method!","category":"page"},{"location":"solvers/augmented_Lagrangian_method/#Manopt.augmented_Lagrangian_method","page":"Augmented Lagrangian Method","title":"Manopt.augmented_Lagrangian_method","text":"augmented_Lagrangian_method(M, f, grad_f, p=rand(M); kwargs...)\naugmented_Lagrangian_method(M, cmo::ConstrainedManifoldObjective, p=rand(M); kwargs...)\naugmented_Lagrangian_method!(M, f, grad_f, p; kwargs...)\naugmented_Lagrangian_method!(M, cmo::ConstrainedManifoldObjective, p; kwargs...)\n\nperform the augmented Lagrangian method (ALM) [LB19]. This method can work in-place of p.\n\nThe aim of the ALM is to find the solution of the constrained optimisation task\n\nbeginaligned\nmin_p mathcal M f(p)\ntextsubject toquadg_i(p) 0 quad text for i= 1 m\nquad h_j(p)=0 quad text for j=1n\nendaligned\n\nwhere M is a Riemannian manifold, and f, g_i_i=1^m and h_j_j=1^n are twice continuously differentiable functions from M to ℝ. 
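The multiplier, tolerance, and penalty bookkeeping of ALM can be illustrated with a small Euclidean Python sketch. The helper names are illustrative and the default bounds mirror the keyword arguments documented below; this is a hedged sketch, not the Manopt implementation:

```python
def clip(v, lo, hi):
    """Project v onto the interval [lo, hi] (the clip operator in the ALM updates)."""
    return min(max(v, lo), hi)

def update_multipliers(lam, mu, rho, h_vals, g_vals,
                       lam_min=-20.0, lam_max=20.0, mu_max=20.0):
    """One ALM multiplier update: clip(lam_j + rho*h_j(p)) onto [lam_min, lam_max]
    for the equality constraints, clip(mu_i + rho*g_i(p)) onto [0, mu_max]
    for the inequality constraints."""
    lam_new = [clip(l + rho * hv, lam_min, lam_max) for l, hv in zip(lam, h_vals)]
    mu_new = [clip(m + rho * gv, 0.0, mu_max) for m, gv in zip(mu, g_vals)]
    return lam_new, mu_new

def update_accuracy(eps, eps_min=1e-6, theta_eps=0.9):
    """Tighten the subsolver accuracy: eps_new = max(eps_min, theta_eps * eps)."""
    return max(eps_min, theta_eps * eps)
```

A constraint value of h=0.3 with rho=1 moves the corresponding multiplier by 0.3 before clipping; the accuracy tolerance shrinks geometrically until it hits its lower bound.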
In every step k of the algorithm, the AugmentedLagrangianCost mathcal L_ρ^(k)(p μ^(k) λ^(k)) is minimized on \mathcal M, where μ^(k) ℝ^m and λ^(k) ℝ^n are the current iterates of the Lagrange multipliers and ρ^(k) is the current penalty parameter.\n\nThe Lagrange multipliers are then updated by\n\nλ_j^(k+1) =operatornameclip_λ_minλ_max (λ_j^(k) + ρ^(k) h_j(p^(k+1))) textfor all j=1n\n\nand\n\nμ_i^(k+1) =operatornameclip_0μ_max (μ_i^(k) + ρ^(k) g_i(p^(k+1))) text for all i=1m\n\nwhere λ_textmin λ_textmax and μ_textmax are the multiplier boundaries.\n\nNext, the accuracy tolerance ϵ is updated as\n\nϵ^(k)=maxϵ_min θ_ϵ ϵ^(k-1)\n\nwhere ϵ_textmin is the lowest value ϵ is allowed to become and θ_ϵ (01) is a constant scaling factor.\n\nLast, the penalty parameter ρ is updated as follows: with\n\nσ^(k)=max_j=1n i=1m h_j(p^(k)) max_i=1mg_i(p^(k)) -fracμ_i^(k-1)ρ^(k-1) \n\nρ is updated as\n\nρ^(k) = begincases\nρ^(k-1)θ_ρ textif σ^(k)leq θ_ρ σ^(k-1) \nρ^(k-1) textelse\nendcases\n\nwhere θ_ρ (01) is a constant scaling factor.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\ngrad_f: the (Riemannian) gradient operatornamegradf: \mathcal M → T_{p}\mathcal M of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place\n\nOptional (if not called with the ConstrainedManifoldObjective cmo)\n\ng=nothing: the inequality constraints\nh=nothing: the equality constraints\ngrad_g=nothing: the gradient of the inequality constraints\ngrad_h=nothing: the gradient of the equality constraints\n\nNote that one of the pairs (g, grad_g) or (h, grad_h) has to be provided. 
Otherwise the problem is not constrained and a better solver would be for example quasi_Newton.\n\nKeyword Arguments\n\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\nϵ=1e-3: the accuracy tolerance\nϵ_min=1e-6: the lower bound for the accuracy tolerance\nϵ_exponent=1/100: exponent of the ϵ update factor; also 1/number of iterations until maximal accuracy is needed to end algorithm naturally\nequality_constraints=nothing: the number n of equality constraints.\nIf not provided, a call to the gradient of h is performed to estimate these.\ngradient_range=nothing: specify how both gradients of the constraints are represented\ngradient_equality_range=gradient_range: specify how gradients of the equality constraints are represented, see VectorGradientFunction.\ngradient_inequality_range=gradient_range: specify how gradients of the inequality constraints are represented, see VectorGradientFunction.\ninequality_constraints=nothing: the number m of inequality constraints. 
If not provided, a call to the gradient of g is performed to estimate these.\nλ=ones(size(h(M,x),1)): the Lagrange multiplier with respect to the equality constraints\nλ_max=20.0: an upper bound for the Lagrange multiplier belonging to the equality constraints\nλ_min=- λ_max: a lower bound for the Lagrange multiplier belonging to the equality constraints\nμ=ones(size(g(M,x),1)): the Lagrange multiplier with respect to the inequality constraints\nμ_max=20.0: an upper bound for the Lagrange multiplier belonging to the inequality constraints\nρ=1.0: the penalty parameter\nτ=0.8: factor for the improvement of the evaluation of the penalty parameter\nθ_ρ=0.3: the scaling factor of the penalty parameter\nθ_ϵ=(ϵ_min / ϵ)^(ϵ_exponent): the scaling factor of the exactness\nsub_cost=AugmentedLagrangianCost(cmo, ρ, μ, λ): use the augmented Lagrangian cost, based on the ConstrainedManifoldObjective built from the functions provided. This is used to define the sub_problem= keyword and hence has no effect if you set sub_problem directly.\nsub_grad=AugmentedLagrangianGrad(cmo, ρ, μ, λ): use the augmented Lagrangian gradient, based on the ConstrainedManifoldObjective built from the functions provided. This is used to define the sub_problem= keyword and hence has no effect if you set sub_problem directly.\nsub_kwargs=(;): a named tuple of keyword arguments that are passed to decorate_objective! of the sub solver's objective, the decorate_state! of the sub solver's state, and the sub state constructor itself.\nstopping_criterion=StopAfterIteration(300) | ( StopWhenSmallerOrEqual(:ϵ, ϵ_min) & StopWhenChangeLess(1e-10) ): a functor indicating that the stopping criterion is fulfilled\nsub_problem=DefaultManoptProblem(M, sub_objective): specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state=QuasiNewtonState: a state to specify the sub solver to use. 
For a closed form solution, this indicates the type of function. By default, the quasi-Newton method with the QuasiNewtonLimitedMemoryDirectionUpdate and InverseBFGS is used.\nsub_stopping_criterion::StoppingCriterion=StopAfterIteration(300) | StopWhenGradientNormLess(ϵ) | StopWhenStepsizeLess(1e-8): a stopping criterion for the sub solver.\n\nFor the ranges of the constraints' gradient, other power manifold tangent space representations, mainly the ArrayPowerRepresentation can be used if the gradients can be computed more efficiently in that representation.\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/augmented_Lagrangian_method/#Manopt.augmented_Lagrangian_method!","page":"Augmented Lagrangian Method","title":"Manopt.augmented_Lagrangian_method!","text":"augmented_Lagrangian_method(M, f, grad_f, p=rand(M); kwargs...)\naugmented_Lagrangian_method(M, cmo::ConstrainedManifoldObjective, p=rand(M); kwargs...)\naugmented_Lagrangian_method!(M, f, grad_f, p; kwargs...)\naugmented_Lagrangian_method!(M, cmo::ConstrainedManifoldObjective, p; kwargs...)\n\nperform the augmented Lagrangian method (ALM) [LB19]. This method can work in-place of p.\n\nThe aim of the ALM is to find the solution of the constrained optimisation task\n\nbeginaligned\nmin_p mathcal M f(p)\ntextsubject toquadg_i(p) 0 quad text for i= 1 m\nquad h_j(p)=0 quad text for j=1n\nendaligned\n\nwhere M is a Riemannian manifold, and f, g_i_i=1^m and h_j_j=1^n are twice continuously differentiable functions from M to ℝ. 
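The sub problem in each outer step minimizes a penalized cost. As a plain Euclidean stand-in for such an augmented Lagrangian — a hypothetical helper mirroring the formula documented for AugmentedLagrangianCost, not Manopt code:

```python
def make_augmented_lagrangian(f, g, h, rho, mu, lam):
    """Return the cost x -> f(x) + rho/2 * ( sum_j (h_j(x) + lam_j/rho)^2
                                           + sum_i max(0, mu_i/rho + g_i(x))^2 ),
    where g is a list of inequality constraints and h a list of equality ones."""
    def cost(x):
        eq = sum((hj(x) + lj / rho) ** 2 for hj, lj in zip(h, lam))
        ineq = sum(max(0.0, mi / rho + gi(x)) ** 2 for gi, mi in zip(g, mu))
        return f(x) + rho / 2.0 * (eq + ineq)
    return cost
```

For f(x)=x², the single inequality constraint g(x)=1-x (so x ≥ 1), ρ=2 and μ=1, the cost at x=0 is 0 + (2/2)·(0.5+1)² = 2.25, while at the feasible point x=2 only f contributes.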
In every step k of the algorithm, the AugmentedLagrangianCost mathcal L_ρ^(k)(p μ^(k) λ^(k)) is minimized on \mathcal M, where μ^(k) ℝ^m and λ^(k) ℝ^n are the current iterates of the Lagrange multipliers and ρ^(k) is the current penalty parameter.\n\nThe Lagrange multipliers are then updated by\n\nλ_j^(k+1) =operatornameclip_λ_minλ_max (λ_j^(k) + ρ^(k) h_j(p^(k+1))) textfor all j=1n\n\nand\n\nμ_i^(k+1) =operatornameclip_0μ_max (μ_i^(k) + ρ^(k) g_i(p^(k+1))) text for all i=1m\n\nwhere λ_textmin λ_textmax and μ_textmax are the multiplier boundaries.\n\nNext, the accuracy tolerance ϵ is updated as\n\nϵ^(k)=maxϵ_min θ_ϵ ϵ^(k-1)\n\nwhere ϵ_textmin is the lowest value ϵ is allowed to become and θ_ϵ (01) is a constant scaling factor.\n\nLast, the penalty parameter ρ is updated as follows: with\n\nσ^(k)=max_j=1n i=1m h_j(p^(k)) max_i=1mg_i(p^(k)) -fracμ_i^(k-1)ρ^(k-1) \n\nρ is updated as\n\nρ^(k) = begincases\nρ^(k-1)θ_ρ textif σ^(k)leq θ_ρ σ^(k-1) \nρ^(k-1) textelse\nendcases\n\nwhere θ_ρ (01) is a constant scaling factor.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\ngrad_f: the (Riemannian) gradient operatornamegradf: \mathcal M → T_{p}\mathcal M of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place\n\nOptional (if not called with the ConstrainedManifoldObjective cmo)\n\ng=nothing: the inequality constraints\nh=nothing: the equality constraints\ngrad_g=nothing: the gradient of the inequality constraints\ngrad_h=nothing: the gradient of the equality constraints\n\nNote that one of the pairs (g, grad_g) or (h, grad_h) has to be provided. 
Otherwise the problem is not constrained and a better solver would be for example quasi_Newton.\n\nKeyword Arguments\n\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\nϵ=1e-3: the accuracy tolerance\nϵ_min=1e-6: the lower bound for the accuracy tolerance\nϵ_exponent=1/100: exponent of the ϵ update factor; also 1/number of iterations until maximal accuracy is needed to end algorithm naturally\nequality_constraints=nothing: the number n of equality constraints.\nIf not provided, a call to the gradient of h is performed to estimate these.\ngradient_range=nothing: specify how both gradients of the constraints are represented\ngradient_equality_range=gradient_range: specify how gradients of the equality constraints are represented, see VectorGradientFunction.\ngradient_inequality_range=gradient_range: specify how gradients of the inequality constraints are represented, see VectorGradientFunction.\ninequality_constraints=nothing: the number m of inequality constraints. 
If not provided, a call to the gradient of g is performed to estimate these.\nλ=ones(size(h(M,x),1)): the Lagrange multiplier with respect to the equality constraints\nλ_max=20.0: an upper bound for the Lagrange multiplier belonging to the equality constraints\nλ_min=- λ_max: a lower bound for the Lagrange multiplier belonging to the equality constraints\nμ=ones(size(g(M,x),1)): the Lagrange multiplier with respect to the inequality constraints\nμ_max=20.0: an upper bound for the Lagrange multiplier belonging to the inequality constraints\nρ=1.0: the penalty parameter\nτ=0.8: factor for the improvement of the evaluation of the penalty parameter\nθ_ρ=0.3: the scaling factor of the penalty parameter\nθ_ϵ=(ϵ_min / ϵ)^(ϵ_exponent): the scaling factor of the exactness\nsub_cost=AugmentedLagrangianCost(cmo, ρ, μ, λ): use the augmented Lagrangian cost, based on the ConstrainedManifoldObjective built from the functions provided. This is used to define the sub_problem= keyword and hence has no effect if you set sub_problem directly.\nsub_grad=AugmentedLagrangianGrad(cmo, ρ, μ, λ): use the augmented Lagrangian gradient, based on the ConstrainedManifoldObjective built from the functions provided. This is used to define the sub_problem= keyword and hence has no effect if you set sub_problem directly.\nsub_kwargs=(;): a named tuple of keyword arguments that are passed to decorate_objective! of the sub solver's objective, the decorate_state! of the sub solver's state, and the sub state constructor itself.\nstopping_criterion=StopAfterIteration(300) | ( StopWhenSmallerOrEqual(:ϵ, ϵ_min) & StopWhenChangeLess(1e-10) ): a functor indicating that the stopping criterion is fulfilled\nsub_problem=DefaultManoptProblem(M, sub_objective): specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state=QuasiNewtonState: a state to specify the sub solver to use. 
For a closed form solution, this indicates the type of function. By default, the quasi-Newton method with the QuasiNewtonLimitedMemoryDirectionUpdate and InverseBFGS is used.\nsub_stopping_criterion::StoppingCriterion=StopAfterIteration(300) | StopWhenGradientNormLess(ϵ) | StopWhenStepsizeLess(1e-8): a stopping criterion for the sub solver.\n\nFor the ranges of the constraints' gradient, other power manifold tangent space representations, mainly the ArrayPowerRepresentation can be used if the gradients can be computed more efficiently in that representation.\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/augmented_Lagrangian_method/#State","page":"Augmented Lagrangian Method","title":"State","text":"","category":"section"},{"location":"solvers/augmented_Lagrangian_method/","page":"Augmented Lagrangian Method","title":"Augmented Lagrangian Method","text":"AugmentedLagrangianMethodState","category":"page"},{"location":"solvers/augmented_Lagrangian_method/#Manopt.AugmentedLagrangianMethodState","page":"Augmented Lagrangian Method","title":"Manopt.AugmentedLagrangianMethodState","text":"AugmentedLagrangianMethodState{P,T} <: AbstractManoptSolverState\n\nDescribes the augmented Lagrangian method, with\n\nFields\n\na default value is given in brackets if a parameter can be left out in initialization.\n\nϵ: the accuracy tolerance\nϵ_min: the lower bound for the accuracy tolerance\nλ: the Lagrange multiplier with respect to the equality constraints\nλ_max: an upper bound for the Lagrange multiplier belonging to the equality constraints\nλ_min: a lower bound for the Lagrange multiplier belonging to the equality constraints\np::P: a point on the manifold mathcal M storing the current iterate\npenalty: 
evaluation of the current penalty term, initialized to Inf.\nμ: the Lagrange multiplier with respect to the inequality constraints\nμ_max: an upper bound for the Lagrange multiplier belonging to the inequality constraints\nρ: the penalty parameter\nsub_problem::Union{AbstractManoptProblem, F}: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state::Union{AbstractManoptSolverState, F}: a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.\nτ: factor for the improvement of the evaluation of the penalty parameter\nθ_ρ: the scaling factor of the penalty parameter\nθ_ϵ: the scaling factor of the accuracy tolerance\nstop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled\n\nConstructor\n\nAugmentedLagrangianMethodState(M::AbstractManifold, co::ConstrainedManifoldObjective,\n sub_problem, sub_state; kwargs...\n)\n\nconstruct an augmented Lagrangian method state, where the manifold M and the ConstrainedManifoldObjective co are used for manifold- or objective specific defaults.\n\nAugmentedLagrangianMethodState(M::AbstractManifold, co::ConstrainedManifoldObjective,\n sub_problem; evaluation=AllocatingEvaluation(), kwargs...\n)\n\nconstruct an augmented Lagrangian method state, where the manifold M and the ConstrainedManifoldObjective co are used for manifold- or objective specific defaults, and sub_problem is a closed form solution with evaluation as type of evaluation.\n\nKeyword arguments\n\nthe following keyword arguments are available to initialise the corresponding fields\n\nϵ=1e-3\nϵ_min=1e-6\nλ=ones(n): n is the number of equality constraints in the ConstrainedManifoldObjective co.\nλ_max=20.0\nλ_min=- λ_max\nμ=ones(m): m is the number of inequality constraints in the ConstrainedManifoldObjective co.\nμ_max=20.0\np=rand(M): a point on the manifold mathcal M to specify the initial 
value\nρ=1.0\nτ=0.8\nθ_ρ=0.3\nθ_ϵ=(ϵ_min/ϵ)^(ϵ_exponent)\nstopping_criterion=StopAfterIteration(300) | ( StopWhenSmallerOrEqual(:ϵ, ϵ_min) & StopWhenChangeLess(1e-10) ).\n\nSee also\n\naugmented_Lagrangian_method\n\n\n\n\n\n","category":"type"},{"location":"solvers/augmented_Lagrangian_method/#Helping-functions","page":"Augmented Lagrangian Method","title":"Helping functions","text":"","category":"section"},{"location":"solvers/augmented_Lagrangian_method/","page":"Augmented Lagrangian Method","title":"Augmented Lagrangian Method","text":"AugmentedLagrangianCost\nAugmentedLagrangianGrad","category":"page"},{"location":"solvers/augmented_Lagrangian_method/#Manopt.AugmentedLagrangianCost","page":"Augmented Lagrangian Method","title":"Manopt.AugmentedLagrangianCost","text":"AugmentedLagrangianCost{CO,R,T}\n\nStores the parameters ρ ℝ, μ ℝ^m, λ ℝ^n of the augmented Lagrangian associated to the ConstrainedManifoldObjective co.\n\nThis struct is also a functor (M,p) -> v that can be used as a cost function within a solver, based on the internal ConstrainedManifoldObjective it computes\n\nmathcal L_rho(p μ λ)\n= f(p) + fracρ2 biggl(\n sum_j=1^n Bigl( h_j(p) + fracλ_jρ Bigr)^2\n +\n sum_i=1^m maxBigl 0 fracμ_iρ + g_i(p) Bigr^2\nBigr)\n\nFields\n\nco::CO, ρ::R, μ::T, λ::T as mentioned in the formula, where R should be the number type used and T the vector type.\n\nConstructor\n\nAugmentedLagrangianCost(co, ρ, μ, λ)\n\n\n\n\n\n","category":"type"},{"location":"solvers/augmented_Lagrangian_method/#Manopt.AugmentedLagrangianGrad","page":"Augmented Lagrangian Method","title":"Manopt.AugmentedLagrangianGrad","text":"AugmentedLagrangianGrad{CO,R,T} <: AbstractConstrainedFunctor{T}\n\nStores the parameters ρ ℝ, μ ℝ^m, λ ℝ^n of the augmented Lagrangian associated to the ConstrainedManifoldObjective co.\n\nThis struct is also a functor in both formats\n\n(M, p) -> X to compute the gradient in allocating fashion.\n(M, X, p) to compute 
the gradient in in-place fashion.\n\nAdditionally, this gradient accepts a positional last argument to specify the range for the internal gradient call of the constrained objective.\n\nIt is based on the internal ConstrainedManifoldObjective and computes the gradient operatornamegrad mathcal L_ρ(p μ λ), see also AugmentedLagrangianCost.\n\nFields\n\nco::CO, ρ::R, μ::T, λ::T as mentioned in the formula, where R should be the number type used and T the vector type.\n\nConstructor\n\nAugmentedLagrangianGrad(co, ρ, μ, λ)\n\n\n\n\n\n","category":"type"},{"location":"solvers/augmented_Lagrangian_method/#sec-agd-technical-details","page":"Augmented Lagrangian Method","title":"Technical details","text":"","category":"section"},{"location":"solvers/augmented_Lagrangian_method/","page":"Augmented Lagrangian Method","title":"Augmented Lagrangian Method","text":"The augmented_Lagrangian_method solver requires the following functions of a manifold to be available","category":"page"},{"location":"solvers/augmented_Lagrangian_method/","page":"Augmented Lagrangian Method","title":"Augmented Lagrangian Method","text":"A copyto!(M, q, p) and copy(M, p) for points.\nEverything the subsolver requires, which by default is the quasi_Newton method\nA zero_vector(M,p).","category":"page"},{"location":"solvers/augmented_Lagrangian_method/#Literature","page":"Augmented Lagrangian Method","title":"Literature","text":"","category":"section"},{"location":"solvers/augmented_Lagrangian_method/","page":"Augmented Lagrangian Method","title":"Augmented Lagrangian Method","text":"C. Liu and N. Boumal. Simple algorithms for optimization on Riemannian manifolds with constraints. 
Applied Mathematics & Optimization (2019), arXiv:1901.10000.\n\n\n\n","category":"page"},{"location":"solvers/cma_es/#Covariance-matrix-adaptation-evolutionary-strategy","page":"CMA-ES","title":"Covariance matrix adaptation evolutionary strategy","text":"","category":"section"},{"location":"solvers/cma_es/","page":"CMA-ES","title":"CMA-ES","text":"CurrentModule = Manopt","category":"page"},{"location":"solvers/cma_es/","page":"CMA-ES","title":"CMA-ES","text":"The CMA-ES algorithm has been implemented based on [Han23] with basic Riemannian adaptations, related to the transport of the covariance matrix and its update vectors. Other attempts at adapting CMA-ES to Riemannian optimization include [CFFS10]. The algorithm is suitable for global optimization.","category":"page"},{"location":"solvers/cma_es/","page":"CMA-ES","title":"CMA-ES","text":"Covariance matrix transport between consecutive mean points is handled by the eigenvector_transport! function, which is based on the idea of transport of matrix eigenvectors.","category":"page"},{"location":"solvers/cma_es/","page":"CMA-ES","title":"CMA-ES","text":"cma_es","category":"page"},{"location":"solvers/cma_es/#Manopt.cma_es","page":"CMA-ES","title":"Manopt.cma_es","text":"cma_es(M, f, p_m=rand(M); σ::Real=1.0, kwargs...)\n\nPerform covariance matrix adaptation evolutionary strategy search for global gradient-free randomized optimization. It is suitable for complicated non-convex functions. 
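At its core, every generation of such an evolution strategy samples candidates around a mean point and recombines the best ones. A hedged Euclidean sketch of that sampling and selection step — without the covariance adaptation that gives CMA-ES its name, and with illustrative function names rather than the Manopt implementation:

```python
import random

def es_generation(f, mean, sigma, lam, mu, rng):
    """One generation of a toy (mu/lam) evolution strategy in R^n:
    sample lam candidates around `mean` with step size `sigma`,
    rank them by cost, and average the best mu into the new mean."""
    n = len(mean)
    candidates = [[m + sigma * rng.gauss(0.0, 1.0) for m in mean]
                  for _ in range(lam)]
    candidates.sort(key=f)  # ascending cost: best candidates first
    best = candidates[:mu]
    return [sum(c[i] for c in best) / mu for i in range(n)]
```

Iterating this on a simple quadratic cost drives the mean toward the minimizer; the full algorithm additionally adapts the step size and the covariance matrix of the sampling distribution.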
It can be reasonably expected to find the global minimum within 3σ distance from p_m.\n\nThe implementation is based on [Han23] with basic adaptations to the Riemannian setting.\n\nInput\n\nM: a manifold mathcal M\nf: a cost function f mathcal Mℝ to find a minimizer p^* for\n\nKeyword arguments\n\np_m=rand(M): an initial point p\nσ=1.0: initial standard deviation\nλ=4 + Int(floor(3 * log(manifold_dimension(M)))): population size (can be increased for a more thorough global search but decreasing is not recommended)\ntol_fun=1e-12: tolerance for the StopWhenPopulationCostConcentrated, similar to absolute difference between function values at subsequent points\ntol_x=1e-12: tolerance for the StopWhenPopulationStronglyConcentrated, similar to absolute difference between subsequent points, but it is actually computed from distribution parameters.\nstopping_criterion=default_cma_es_stopping_criterion(M, λ; tol_fun=tol_fun, tol_x=tol_x): a functor indicating that the stopping criterion is fulfilled\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\nbasis=DefaultOrthonormalBasis(): the basis used to represent the covariance matrix\nrng=default_rng(): random number generator for generating new points on M\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. 
To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/cma_es/#State","page":"CMA-ES","title":"State","text":"","category":"section"},{"location":"solvers/cma_es/","page":"CMA-ES","title":"CMA-ES","text":"CMAESState","category":"page"},{"location":"solvers/cma_es/#Manopt.CMAESState","page":"CMA-ES","title":"Manopt.CMAESState","text":"CMAESState{P,T} <: AbstractManoptSolverState\n\nState of covariance matrix adaptation evolution strategy.\n\nFields\n\np::P: a point on the manifold mathcal M storing the best point found so far\np_obj objective value at p\nμ parent number\nλ population size\nμ_eff variance effective selection mass for the mean\nc_1 learning rate for the rank-one update\nc_c decay rate for cumulation path for the rank-one update\nc_μ learning rate for the rank-μ update\nc_σ decay rate for the cumulation path for the step-size control\nc_m learning rate for the mean\nd_σ damping parameter for step-size update\npopulation population of the current generation\nys_c coordinates of random vectors for the current generation\ncovariance_matrix coordinates of the covariance matrix\ncovariance_matrix_eigen eigen decomposition of covariance_matrix\ncovariance_matrix_cond condition number of covariance_matrix, updated after eigen decomposition\nbest_fitness_current_gen best fitness value of individuals in the current generation\nmedian_fitness_current_gen median fitness value of individuals in the current generation\nworst_fitness_current_gen worst fitness value of individuals in the current generation\np_m point around which the search for new candidates is done\nσ step size\np_σ coordinates of a vector in T_p_mmathcal M\np_c coordinates of a vector in T_p_mmathcal M\ndeviations standard deviations of coordinate RNG\nbuffer buffer for random number generation and wmean_y_c of length n_coords\ne_mv_norm expected value of norm of the 
n_coords-variable standard normal distribution\nrecombination_weights recombination weights used for updating covariance matrix\nretraction_method::AbstractRetractionMethod: a retraction operatornameretr to use, see the section on retractions\nstop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled\nvector_transport_method::AbstractVectorTransportMethodP: a vector transport mathcal T_ to use, see the section on vector transports\nbasis a real coefficient basis for covariance matrix\nrng RNG for generating new points\n\nConstructor\n\nCMAESState(\n M::AbstractManifold,\n p_m::P,\n μ::Int,\n λ::Int,\n μ_eff::TParams,\n c_1::TParams,\n c_c::TParams,\n c_μ::TParams,\n c_σ::TParams,\n c_m::TParams,\n d_σ::TParams,\n stop::TStopping,\n covariance_matrix::Matrix{TParams},\n σ::TParams,\n recombination_weights::Vector{TParams};\n retraction_method::TRetraction=default_retraction_method(M, typeof(p_m)),\n vector_transport_method::TVTM=default_vector_transport_method(M, typeof(p_m)),\n basis::TB=DefaultOrthonormalBasis(),\n rng::TRng=default_rng(),\n) where {\n P,\n TParams<:Real,\n TStopping<:StoppingCriterion,\n TRetraction<:AbstractRetractionMethod,\n TVTM<:AbstractVectorTransportMethod,\n TB<:AbstractBasis,\n TRng<:AbstractRNG,\n}\n\nSee also\n\ncma_es\n\n\n\n\n\n","category":"type"},{"location":"solvers/cma_es/#Stopping-criteria","page":"CMA-ES","title":"Stopping criteria","text":"","category":"section"},{"location":"solvers/cma_es/","page":"CMA-ES","title":"CMA-ES","text":"StopWhenBestCostInGenerationConstant\nStopWhenCovarianceIllConditioned\nStopWhenEvolutionStagnates\nStopWhenPopulationCostConcentrated\nStopWhenPopulationDiverges\nStopWhenPopulationStronglyConcentrated","category":"page"},{"location":"solvers/cma_es/#Manopt.StopWhenBestCostInGenerationConstant","page":"CMA-ES","title":"Manopt.StopWhenBestCostInGenerationConstant","text":"StopWhenBestCostInGenerationConstant <: StoppingCriterion\n\nStop if the range of the best 
objective function values of the last iteration_range generations is zero. This corresponds to the EqualFunValues condition from [Han23].\n\nSee also StopWhenPopulationCostConcentrated.\n\n\n\n\n\n","category":"type"},{"location":"solvers/cma_es/#Manopt.StopWhenCovarianceIllConditioned","page":"CMA-ES","title":"Manopt.StopWhenCovarianceIllConditioned","text":"StopWhenCovarianceIllConditioned <: StoppingCriterion\n\nStop CMA-ES if the condition number of the covariance matrix exceeds a threshold. This corresponds to ConditionCov condition from [Han23].\n\n\n\n\n\n","category":"type"},{"location":"solvers/cma_es/#Manopt.StopWhenEvolutionStagnates","page":"CMA-ES","title":"Manopt.StopWhenEvolutionStagnates","text":"StopWhenEvolutionStagnates{TParam<:Real} <: StoppingCriterion\n\nThe best and median fitness in each iteration is tracked over the last 20% but at least min_size and no more than max_size iterations. The solver is stopped if in both histories the median of the most recent fraction of values is not better than the median of the oldest fraction.\n\n\n\n\n\n","category":"type"},{"location":"solvers/cma_es/#Manopt.StopWhenPopulationCostConcentrated","page":"CMA-ES","title":"Manopt.StopWhenPopulationCostConcentrated","text":"StopWhenPopulationCostConcentrated{TParam<:Real} <: StoppingCriterion\n\nStop if the range of the best objective function value in the last max_size generations and all function values in the current generation is below tol. This corresponds to TolFun condition from [Han23].\n\nConstructor\n\nStopWhenPopulationCostConcentrated(tol::Real, max_size::Int)\n\n\n\n\n\n","category":"type"},{"location":"solvers/cma_es/#Manopt.StopWhenPopulationDiverges","page":"CMA-ES","title":"Manopt.StopWhenPopulationDiverges","text":"StopWhenPopulationDiverges{TParam<:Real} <: StoppingCriterion\n\nStop if σ times the maximum deviation increased by more than tol. This usually indicates a far too small σ, or divergent behavior. 
This corresponds to TolXUp condition from [Han23].\n\n\n\n\n\n","category":"type"},{"location":"solvers/cma_es/#Manopt.StopWhenPopulationStronglyConcentrated","page":"CMA-ES","title":"Manopt.StopWhenPopulationStronglyConcentrated","text":"StopWhenPopulationStronglyConcentrated{TParam<:Real} <: StoppingCriterion\n\nStop if the standard deviation in all coordinates is smaller than tol and the norm of σ * p_c is smaller than tol. This corresponds to TolX condition from [Han23].\n\nFields\n\ntol the tolerance to verify against\nat_iteration an internal field to indicate at which iteration i geq 0 the tolerance was met.\n\nConstructor\n\nStopWhenPopulationStronglyConcentrated(tol::Real)\n\n\n\n\n\n","category":"type"},{"location":"solvers/cma_es/#sec-cma-es-technical-details","page":"CMA-ES","title":"Technical details","text":"","category":"section"},{"location":"solvers/cma_es/","page":"CMA-ES","title":"CMA-ES","text":"The cma_es solver requires the following functions of a manifold to be available","category":"page"},{"location":"solvers/cma_es/","page":"CMA-ES","title":"CMA-ES","text":"A retract!(M, q, p, X); it is recommended to set the default_retraction_method to a favourite retraction. If this default is set, a retraction_method= does not have to be specified.\nA vector_transport_to!(M, Y, p, X, q); it is recommended to set the default_vector_transport_method to a favourite vector transport. 
If this default is set, a vector_transport_method= does not have to be specified.\nA copyto!(M, q, p) and copy(M,p) for points and similarly copy(M, p, X) for tangent vectors.\nget_coordinates!(M, Y, p, X, b) and get_vector!(M, X, p, c, b) with respect to the AbstractBasis b provided, which is DefaultOrthonormalBasis by default from the basis= keyword.\nAn is_flat(M).","category":"page"},{"location":"solvers/cma_es/#Internal-helpers","page":"CMA-ES","title":"Internal helpers","text":"","category":"section"},{"location":"solvers/cma_es/","page":"CMA-ES","title":"CMA-ES","text":"You may add new methods to eigenvector_transport! if you know a more optimized implementation for your manifold.","category":"page"},{"location":"solvers/cma_es/","page":"CMA-ES","title":"CMA-ES","text":"Manopt.eigenvector_transport!","category":"page"},{"location":"solvers/cma_es/#Manopt.eigenvector_transport!","page":"CMA-ES","title":"Manopt.eigenvector_transport!","text":"eigenvector_transport!(\n M::AbstractManifold,\n matrix_eigen::Eigen,\n p,\n q,\n basis::AbstractBasis,\n vtm::AbstractVectorTransportMethod,\n)\n\nTransport the matrix with matrix_eig eigen decomposition when expanded in basis from point p to point q on M. Update matrix_eigen in-place.\n\n(p, matrix_eig) belongs to the fiber bundle of B = mathcal M SPD(n), where n is the (real) dimension of M. The function corresponds to the Ehresmann connection defined by vector transport vtm of eigenvectors of matrix_eigen.\n\n\n\n\n\n","category":"function"},{"location":"solvers/cma_es/#Literature","page":"CMA-ES","title":"Literature","text":"","category":"section"},{"location":"solvers/cma_es/","page":"CMA-ES","title":"CMA-ES","text":"S. Colutto, F. Fruhauf, M. Fuchs and O. Scherzer. The CMA-ES on Riemannian Manifolds to Reconstruct Shapes in 3-D Voxel Images. IEEE Transactions on Evolutionary Computation 14, 227–245 (2010).\n\n\n\nN. Hansen. The CMA Evolution Strategy: A Tutorial. 
ArXiv Preprint (2023).\n\n\n\n","category":"page"},{"location":"plans/record/#sec-record","page":"Recording values","title":"Record values","text":"","category":"section"},{"location":"plans/record/","page":"Recording values","title":"Recording values","text":"CurrentModule = Manopt","category":"page"},{"location":"plans/record/","page":"Recording values","title":"Recording values","text":"To record values during the iterations of a solver run, there are in general two possibilities. On the one hand, the high-level interfaces provide a record= keyword that accepts several different inputs. For more details see How to record.","category":"page"},{"location":"plans/record/#subsec-record-states","page":"Recording values","title":"Record Actions & the solver state decorator","text":"","category":"section"},{"location":"plans/record/","page":"Recording values","title":"Recording values","text":"Modules = [Manopt]\nPages = [\"plans/record.jl\"]\nOrder = [:type]","category":"page"},{"location":"plans/record/#Manopt.RecordAction","page":"Recording values","title":"Manopt.RecordAction","text":"RecordAction\n\nA RecordAction is a small functor to record values. The usual call is given by\n\n(amp::AbstractManoptProblem, ams::AbstractManoptSolverState, k) -> s\n\nthat performs the record for the current problem and solver combination, and where k is the current iteration.\n\nBy convention k=0 is interpreted as \"For Initialization only,\" so it should only initialize internal values but not trigger any record; note that the record is also called from within stop_solver!, which returns true afterwards.\n\nAny negative value is interpreted as a “reset”, and should hence delete all stored recordings, for example when reusing a RecordAction. The start of a solver calls the :Iteration and :Stop dictionary entries with -1, to reset those recordings.\n\nBy default any RecordAction is assumed to record its values in a field recorded_values, a Vector of recorded values. 
See get_record(ra).\n\n\n\n\n\n","category":"type"},{"location":"plans/record/#Manopt.RecordChange","page":"Recording values","title":"Manopt.RecordChange","text":"RecordChange <: RecordAction\n\nRecord the amount of change of the iterate (see get_iterate(s) of the AbstractManoptSolverState) during the last iteration.\n\nFields\n\nstorage : a StoreStateAction to store (at least) the last iterate to use this as the last value (to compute the change) serving as a potential cache shared with other components of the solver.\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nrecorded_values : to store the recorded values\n\nConstructor\n\nRecordChange(M=DefaultManifold();\n inverse_retraction_method = default_inverse_retraction_method(M),\n storage = StoreStateAction(M; store_points=Tuple{:Iterate})\n)\n\nwith the previous fields as keywords. For the DefaultManifold only the field storage is used. 
Providing the actual manifold moves the default storage to the efficient point storage.\n\n\n\n\n\n","category":"type"},{"location":"plans/record/#Manopt.RecordCost","page":"Recording values","title":"Manopt.RecordCost","text":"RecordCost <: RecordAction\n\nRecord the current cost function value, see get_cost.\n\nFields\n\nrecorded_values : to store the recorded values\n\nConstructor\n\nRecordCost()\n\n\n\n\n\n","category":"type"},{"location":"plans/record/#Manopt.RecordEntry","page":"Recording values","title":"Manopt.RecordEntry","text":"RecordEntry{T} <: RecordAction\n\nrecord a certain field's entry of type T during the iterations\n\nFields\n\nrecorded_values : the recorded values\nfield : the Symbol the entry can be accessed with within the AbstractManoptSolverState\n\nConstructor\n\nRecordEntry(::T, f::Symbol)\nRecordEntry(T::DataType, f::Symbol)\n\nInitialize the record action to record the state field f, and initialize the recorded_values to be a vector of element type T.\n\nExamples\n\nRecordEntry(rand(M), :q) to record the points from M stored in some states s.q\nRecordEntry(SVDMPoint, :p) to record the field s.p which takes values of type SVDMPoint.\n\n\n\n\n\n","category":"type"},{"location":"plans/record/#Manopt.RecordEntryChange","page":"Recording values","title":"Manopt.RecordEntryChange","text":"RecordEntryChange{T} <: RecordAction\n\nrecord the change of a certain field's entry during the iterations\n\nAdditional fields\n\nrecorded_values : the recorded values\nfield : the Symbol the field can be accessed with within the AbstractManoptSolverState\ndistance : a function (p,o,x1,x2) to compute the change/distance between two values of the entry\nstorage : a StoreStateAction to store (at least) getproperty(o, d.field)\n\nConstructor\n\nRecordEntryChange(f::Symbol, d, a::StoreStateAction=StoreStateAction([f]))\n\n\n\n\n\n","category":"type"},{"location":"plans/record/#Manopt.RecordEvery","page":"Recording values","title":"Manopt.RecordEvery","text":"RecordEvery <: RecordAction\n\nrecord 
only every kth iteration. Otherwise (optionally, but activated by default) just update internal tracking values.\n\nThis method does not perform any record itself but relies on its children's methods\n\n\n\n\n\n","category":"type"},{"location":"plans/record/#Manopt.RecordGroup","page":"Recording values","title":"Manopt.RecordGroup","text":"RecordGroup <: RecordAction\n\ngroup a set of RecordActions into one action, where the internal RecordActions act independently, but the results can be collected in a grouped fashion, one tuple per call of this group. The entries can later be addressed either by index or by semantic Symbols\n\nConstructors\n\nRecordGroup(g::Array{<:RecordAction, 1})\n\nconstruct a group consisting of an Array of RecordActions g,\n\nRecordGroup(g, symbols)\n\nExamples\n\ng1 = RecordGroup([RecordIteration(), RecordCost()])\n\nA RecordGroup to record the current iteration and the cost. The cost can then be accessed using get_record(r,2) or r[2].\n\ng2 = RecordGroup([RecordIteration(), RecordCost()], Dict(:Cost => 2))\n\nA RecordGroup to record the current iteration and the cost, which can then be accessed using get_record(:Cost) or r[:Cost].\n\ng3 = RecordGroup([RecordIteration(), RecordCost() => :Cost])\n\nA RecordGroup identical to the previous constructor, just a little easier to use. 
To access all recordings of the second entry of g3 you can use either g3[2] or g3[:Cost]; the first entry can only be accessed by g3[1], since no symbol was given for it.\n\n\n\n\n\n","category":"type"},{"location":"plans/record/#Manopt.RecordIterate","page":"Recording values","title":"Manopt.RecordIterate","text":"RecordIterate <: RecordAction\n\nrecord the iterate\n\nConstructors\n\nRecordIterate(x0)\n\ninitialize the iterate record array to the type of x0, which indicates the kind of iterate\n\nRecordIterate(P)\n\ninitialize the iterate record array to the data type P.\n\n\n\n\n\n","category":"type"},{"location":"plans/record/#Manopt.RecordIteration","page":"Recording values","title":"Manopt.RecordIteration","text":"RecordIteration <: RecordAction\n\nrecord the current iteration\n\n\n\n\n\n","category":"type"},{"location":"plans/record/#Manopt.RecordSolverState","page":"Recording values","title":"Manopt.RecordSolverState","text":"RecordSolverState <: AbstractManoptSolverState\n\nappend to any AbstractManoptSolverState the decorator with record capability. Internally a dictionary is kept that stores a RecordAction for several concurrent modes using a Symbol as reference. The default mode is :Iteration, which is used to store information that is recorded during the iterations. 
RecordActions might be added to :Start or :Stop to record values at the beginning or for the stopping time point, respectively.\n\nThe original options can still be accessed using the get_state function.\n\nFields\n\noptions the options that are extended by record information\nrecordDictionary a Dict{Symbol,RecordAction} to keep track of all different recorded values\n\nConstructors\n\nRecordSolverState(o,dR)\n\nconstruct a record-decorated AbstractManoptSolverState, where dR can be\n\na RecordAction, then it is stored within the dictionary at :Iteration\nan Array of RecordActions, then they are stored within the recordDictionary at :Iteration.\na Dict{Symbol,RecordAction}.\n\n\n\n\n\n","category":"type"},{"location":"plans/record/#Manopt.RecordStoppingReason","page":"Recording values","title":"Manopt.RecordStoppingReason","text":"RecordStoppingReason <: RecordAction\n\nRecord the reason the solver stopped, see get_reason.\n\n\n\n\n\n","category":"type"},{"location":"plans/record/#Manopt.RecordSubsolver","page":"Recording values","title":"Manopt.RecordSubsolver","text":"RecordSubsolver <: RecordAction\n\nRecord the current subsolver's recording by calling get_record on the sub state with the given symbols.\n\nFields\n\nrecords: an array to store the recorded values\nsymbols: arguments for get_record. 
Defaults to just one symbol :Iteration, but could be set to also record the :Stop action.\n\nConstructor\n\nRecordSubsolver(; record=[:Iteration,], record_type=eltype([]))\n\n\n\n\n\n","category":"type"},{"location":"plans/record/#Manopt.RecordTime","page":"Recording values","title":"Manopt.RecordTime","text":"RecordTime <: RecordAction\n\nrecord the time elapsed during the current iteration.\n\nThe three possible modes are\n\n:cumulative record times without resetting the timer\n:iterative record times with resetting the timer\n:total record a time only at the end of an algorithm (see stop_solver!)\n\nThe default is :cumulative, and any non-listed symbol defaults to using this mode.\n\nConstructor\n\nRecordTime(; mode::Symbol=:cumulative)\n\n\n\n\n\n","category":"type"},{"location":"plans/record/#Manopt.RecordWhenActive","page":"Recording values","title":"Manopt.RecordWhenActive","text":"RecordWhenActive <: RecordAction\n\nrecord action that only records if the active boolean is set to true. This can be set from outside and is for example triggered by RecordEvery on recordings of the subsolver. 
While this may not be strictly necessary for subsolvers, recording values that are never accessible is not that useful.\n\nFields\n\nactive: a boolean that can be (de)activated from outside to turn recording on or off\nalways_update: whether or not to call the inner records with nonpositive iterates (init/reset)\n\nConstructor\n\nRecordWhenActive(r::RecordAction, active=true, always_update=true)\n\n\n\n\n\n","category":"type"},{"location":"plans/record/#Access-functions","page":"Recording values","title":"Access functions","text":"","category":"section"},{"location":"plans/record/","page":"Recording values","title":"Recording values","text":"Modules = [Manopt]\nPages = [\"plans/record.jl\"]\nOrder = [:function]\nPublic = true\nPrivate = false","category":"page"},{"location":"plans/record/#Base.getindex-Tuple{RecordGroup, Vararg{Any}}","page":"Recording values","title":"Base.getindex","text":"getindex(r::RecordGroup, s::Symbol)\nr[s]\ngetindex(r::RecordGroup, sT::NTuple{N,Symbol})\nr[sT]\ngetindex(r::RecordGroup, i)\nr[i]\n\nreturn an array of recorded values with respect to the symbol s, the symbols from the tuple sT, or the index i. See get_record for details.\n\n\n\n\n\n","category":"method"},{"location":"plans/record/#Base.getindex-Tuple{RecordSolverState, Symbol}","page":"Recording values","title":"Base.getindex","text":"getindex(rs::RecordSolverState, s::Symbol)\nrs[s]\n\nGet the recorded values for the recorded type s, see get_record for details.\n\ngetindex(rs::RecordSolverState, s::Symbol, i...)\nrs[s, i...]\n\nAccess the recording type of type s and call its RecordAction with [i...].\n\n\n\n\n\n","category":"method"},{"location":"plans/record/#Manopt.get_record","page":"Recording values","title":"Manopt.get_record","text":"get_record(s::AbstractManoptSolverState, [,symbol=:Iteration])\nget_record(s::RecordSolverState, [,symbol=:Iteration])\n\nreturn the recorded values from within the RecordSolverState s that were recorded with respect to the Symbol symbol as an Array. 
The default refers to any recordings during an :Iteration.\n\nWhen called with arbitrary AbstractManoptSolverState, this method looks for the RecordSolverState decorator and calls get_record on the decorator.\n\n\n\n\n\n","category":"function"},{"location":"plans/record/#Manopt.get_record-Tuple{RecordAction}","page":"Recording values","title":"Manopt.get_record","text":"get_record(r::RecordAction)\n\nreturn the recorded values stored within a RecordAction r.\n\n\n\n\n\n","category":"method"},{"location":"plans/record/#Manopt.get_record-Tuple{RecordGroup}","page":"Recording values","title":"Manopt.get_record","text":"get_record(r::RecordGroup)\n\nreturn an array of tuples, where each tuple is a recorded set per iteration or record call.\n\nget_record(r::RecordGroup, k::Int)\n\nreturn an array of values corresponding to the kth entry in this record group\n\nget_record(r::RecordGroup, s::Symbol)\n\nreturn an array of recorded values with respect to the symbol s, see RecordGroup.\n\nget_record(r::RecordGroup, s1::Symbol, s2::Symbol,...)\n\nreturn an array of tuples, where each tuple is a recorded set corresponding to the symbols s1, s2,... 
per iteration / record call.\n\n\n\n\n\n","category":"method"},{"location":"plans/record/#Manopt.get_record_action","page":"Recording values","title":"Manopt.get_record_action","text":"get_record_action(s::AbstractManoptSolverState, symbol::Symbol)\n\nreturn the action contained in the (first) RecordSolverState decorator within the AbstractManoptSolverState s.\n\n\n\n\n\n","category":"function"},{"location":"plans/record/#Manopt.get_record_state-Tuple{AbstractManoptSolverState}","page":"Recording values","title":"Manopt.get_record_state","text":"get_record_state(s::AbstractManoptSolverState)\n\nreturn the RecordSolverState among the decorators from the AbstractManoptSolverState s\n\n\n\n\n\n","category":"method"},{"location":"plans/record/#Manopt.has_record-Tuple{RecordSolverState}","page":"Recording values","title":"Manopt.has_record","text":"has_record(s::AbstractManoptSolverState)\n\nIndicate whether the AbstractManoptSolverState s is decorated with RecordSolverState\n\n\n\n\n\n","category":"method"},{"location":"plans/record/#Internal-factory-functions","page":"Recording values","title":"Internal factory functions","text":"","category":"section"},{"location":"plans/record/","page":"Recording values","title":"Recording values","text":"Modules = [Manopt]\nPages = [\"plans/record.jl\"]\nOrder = [:function]\nPublic = false\nPrivate = true","category":"page"},{"location":"plans/record/#Manopt.RecordActionFactory-Tuple{AbstractManoptSolverState, RecordAction}","page":"Recording values","title":"Manopt.RecordActionFactory","text":"RecordActionFactory(s::AbstractManoptSolverState, a)\n\ncreate a RecordAction where\n\na RecordAction is passed through\na Symbol creates\n:Change to record the change of the iterates, see RecordChange\n:Gradient to record the gradient, see RecordGradient\n:GradientNorm to record the norm of the gradient, see RecordGradientNorm\n:Iterate to record the iterate\n:Iteration to record the current iteration number\n:IterativeTime to record 
the time iteratively\n:Cost to record the current cost function value\n:Stepsize to record the current step size\n:Time to record the total time taken after every iteration.\n\nand every other symbol is passed to RecordEntry, which results in recording the field of the state with the symbol indicating the field of the solver to record.\n\n\n\n\n\n","category":"method"},{"location":"plans/record/#Manopt.RecordActionFactory-Union{Tuple{T}, Tuple{AbstractManoptSolverState, Tuple{Symbol, T}}} where T","page":"Recording values","title":"Manopt.RecordActionFactory","text":"RecordActionFactory(s::AbstractManoptSolverState, t::Tuple{Symbol, T}) where {T}\n\ncreate a RecordAction where\n\n(:Subsolver, s) creates a RecordSubsolver with record= set to the second tuple entry\n\nFor any other symbol the second entry is ignored and the symbol is used to generate a RecordEntry recording the field with the name symbol of s.\n\n\n\n\n\n","category":"method"},{"location":"plans/record/#Manopt.RecordFactory-Tuple{AbstractManoptSolverState, Vector}","page":"Recording values","title":"Manopt.RecordFactory","text":"RecordFactory(s::AbstractManoptSolverState, a)\n\nGenerate a dictionary of RecordActions.\n\nFirst, all Symbols, Strings, RecordActions and numbers are collected, excluding :Stop and :WhenActive. This collected vector is added to the :Iteration => [...] pair. :Stop is added as :StoppingCriterion to the :Stop => [...] pair. If either of these two pairs does not exist, it is created when the corresponding symbols are added.\n\nFor each Pair of a Symbol and a Vector, the RecordGroupFactory is called for the Vector and the result is added to the record dictionary's entry with said symbol. 
This is wrapped into a RecordWhenActive when the :WhenActive symbol is present.\n\nReturn value\n\nA dictionary for the different entry points where recording can happen, each containing a RecordAction to call.\n\nNote that upon the initialisation all dictionaries but the :StartAlgorithm one are called with an i=0 for reset.\n\n\n\n\n\n","category":"method"},{"location":"plans/record/#Manopt.RecordGroupFactory-Tuple{AbstractManoptSolverState, Vector}","page":"Recording values","title":"Manopt.RecordGroupFactory","text":"RecordGroupFactory(s::AbstractManoptSolverState, a)\n\nGenerate a RecordGroup of RecordActions. The following rules are used\n\nAny Symbol contained in a is passed to RecordActionFactory\nAny RecordAction is included as is.\n\nAny Pair of a RecordAction and a Symbol, for example RecordCost() => :A, is handled such that the corresponding record action can later be accessed as g[:A], where g is the record group generated here.\n\nIf this results in more than one RecordAction, a RecordGroup of these is built.\n\nIf any integers are present, the last of these is used to wrap the group in a RecordEvery(k).\n\nIf :WhenActive is present, the resulting Action is wrapped in RecordWhenActive, making it deactivatable by its parent solver.\n\n\n\n\n\n","category":"method"},{"location":"plans/record/#Manopt.record_or_reset!-Tuple{RecordAction, Any, Int64}","page":"Recording values","title":"Manopt.record_or_reset!","text":"record_or_reset!(r, v, k)\n\neither record (k>0 and not Inf) the value v within the RecordAction r or reset (k<0) the internal storage, where v has to match the internal value type of the corresponding RecordAction.\n\n\n\n\n\n","category":"method"},{"location":"plans/record/#Manopt.set_parameter!-Tuple{RecordSolverState, Val{:Record}, Vararg{Any}}","page":"Recording values","title":"Manopt.set_parameter!","text":"set_parameter!(ams::RecordSolverState, ::Val{:Record}, args...)\n\nSet certain values specified by args... 
into the elements of the recordDictionary\n\n\n\n\n\n","category":"method"},{"location":"plans/record/","page":"Recording values","title":"Recording values","text":"Further specific RecordActions can be found when specific types of AbstractManoptSolverState define them on their corresponding site.","category":"page"},{"location":"plans/record/#Technical-details","page":"Recording values","title":"Technical details","text":"","category":"section"},{"location":"plans/record/","page":"Recording values","title":"Recording values","text":"initialize_solver!(amp::AbstractManoptProblem, rss::RecordSolverState)\nstep_solver!(p::AbstractManoptProblem, s::RecordSolverState, k)\nstop_solver!(p::AbstractManoptProblem, s::RecordSolverState, k)","category":"page"},{"location":"plans/record/#Manopt.initialize_solver!-Tuple{AbstractManoptProblem, RecordSolverState}","page":"Recording values","title":"Manopt.initialize_solver!","text":"initialize_solver!(ams::AbstractManoptProblem, rss::RecordSolverState)\n\nExtend the initialization of the solver by a hook to run records that were added to the :Start entry.\n\n\n\n\n\n","category":"method"},{"location":"plans/record/#Manopt.step_solver!-Tuple{AbstractManoptProblem, RecordSolverState, Any}","page":"Recording values","title":"Manopt.step_solver!","text":"step_solver!(amp::AbstractManoptProblem, rss::RecordSolverState, k)\n\nExtend the kth step of the solver by a hook to run records that were added to the :Iteration entry.\n\n\n\n\n\n","category":"method"},{"location":"plans/record/#Manopt.stop_solver!-Tuple{AbstractManoptProblem, RecordSolverState, Any}","page":"Recording values","title":"Manopt.stop_solver!","text":"stop_solver!(amp::AbstractManoptProblem, rss::RecordSolverState, k)\n\nExtend the call to the stopping criterion by a hook to run records that were added to the :Stop entry.\n\n\n\n\n\n","category":"method"},{"location":"tutorials/Optimize/#Get-started:-optimize.","page":"🏔️ Get started: optimize.","title":"🏔️ Get 
started: optimize.","text":"","category":"section"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"Ronny Bergmann","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"This tutorial introduces both the basics of optimisation on manifolds and how to use Manopt.jl to perform optimisation on manifolds in Julia.","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"For more theoretical background, see for example [Car92] for an introduction to Riemannian manifolds and [AMS08] or [Bou23] to read more about optimisation thereon.","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"Let mathcal M denote a (Riemannian) manifold and let f mathcal M ℝ be a cost function. The aim is to determine or obtain a point p^* where f is minimal or, in other words, p^* is a minimizer of f.","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"This can also be written as","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":" operatorname*argmin_p mathcal M f(p)","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"where the aim is to compute the minimizer p^* numerically. As an example, consider the generalisation of the (arithmetic) mean. 
In the Euclidean case with dmathbb N, that is for nmathbb N data points y_1ldotsy_n ℝ^d the mean","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":" frac1nsum_i=1^n y_i","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"cannot be directly generalised to data q_1ldotsq_n mathcal M, since on a manifold there is no addition available. But the mean can also be characterised as the following minimizer","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":" operatorname*argmin_xℝ^d frac12nsum_i=1^n lVert x - y_irVert^2","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"and using the Riemannian distance d_mathcal M, this can be written on Riemannian manifolds, which is the so-called Riemannian Center of Mass [Kar77]","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":" operatorname*argmin_pmathcal M\n frac12n sum_i=1^n d_mathcal M^2(p q_i)","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"Fortunately the gradient can be computed and is","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":" frac1n sum_i=1^n -log_p q_i","category":"page"},{"location":"tutorials/Optimize/#Loading-the-necessary-packages","page":"🏔️ Get started: optimize.","title":"Loading the necessary packages","text":"","category":"section"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"Let’s assume you have already installed both Manopt.jl and Manifolds.jl in Julia (using for example 
using Pkg; Pkg.add([\"Manopt\", \"Manifolds\"])). Then we can get started by loading both packages as well as Random.jl for persistency in this tutorial.","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"using Manopt, Manifolds, Random, LinearAlgebra, ManifoldDiff\nusing ManifoldDiff: grad_distance, prox_distance\nRandom.seed!(42);","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"Now assume we are on the Sphere mathcal M = mathbb S^2 and we generate some random points “around” some initial point p","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"n = 100\nσ = π / 8\nM = Sphere(2)\np = 1 / sqrt(2) * [1.0, 0.0, 1.0]\ndata = [exp(M, p, σ * rand(M; vector_at=p)) for i in 1:n];","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"Now we can define the cost function f and its (Riemannian) gradient operatornamegrad f for the Riemannian center of mass:","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"f(M, p) = sum(1 / (2 * n) * distance.(Ref(M), Ref(p), data) .^ 2)\ngrad_f(M, p) = sum(1 / n * grad_distance.(Ref(M), data, Ref(p)));","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"and just call gradient_descent. 
For a first start, we do not have to provide more than the manifold, the cost, the gradient, and a starting point, which we just set to the first data point","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"m1 = gradient_descent(M, f, grad_f, data[1])","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"3-element Vector{Float64}:\n 0.6868392807355564\n 0.006531599748261925\n 0.7267799809043942","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"In order to get more details, we further add the debug= keyword argument, which acts as a decorator pattern.","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"This way we can easily specify a certain debug to be printed. The goal is to get an output of the form","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"# i | Last Change: [...] | F(x): [...] |","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"but where we also want to fix the display format for the change and the cost numbers (the [...]) to have a certain format. Furthermore, the reason why the solver stopped should be printed at the end.","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"These can easily be specified using either a Symbol when using the default format for numbers, or a tuple of a symbol and a format-string in the debug= keyword that is available for every solver. 
We can also, for illustration purposes, just look at the first 6 steps by setting a stopping_criterion=","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"m2 = gradient_descent(M, f, grad_f, data[1];\n debug=[:Iteration,(:Change, \"|Δp|: %1.9f |\"),\n (:Cost, \" F(x): %1.11f | \"), \"\\n\", :Stop],\n stopping_criterion = StopAfterIteration(6)\n )","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"Initial F(x): 0.32487988924 | \n# 1 |Δp|: 1.063609017 | F(x): 0.25232524046 | \n# 2 |Δp|: 0.809858671 | F(x): 0.20966960102 | \n# 3 |Δp|: 0.616665145 | F(x): 0.18546505598 | \n# 4 |Δp|: 0.470841764 | F(x): 0.17121604104 | \n# 5 |Δp|: 0.359345690 | F(x): 0.16300825911 | \n# 6 |Δp|: 0.274597420 | F(x): 0.15818548927 | \nThe algorithm reached its maximal number of iterations (6).\n\n3-element Vector{Float64}:\n 0.7533872481682505\n -0.06053107055583637\n 0.6547851890466334","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"See here for the list of available symbols.","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"info: Technical Detail\nThe debug= keyword is actually a list of DebugActions added to every iteration, even allowing you to write your own ones. Additionally, :Stop is an action added to the end of the solver to display the reason why the solver stopped.","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"The default stopping criterion for gradient_descent is to stop either when the gradient is small (<1e-9) or when a maximal number of iterations is reached (as a fallback). Combining stopping criteria can be done by | or &. 
We further pass a number 25 to debug= to print output only every 25th iteration:","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"m3 = gradient_descent(M, f, grad_f, data[1];\n debug=[:Iteration,(:Change, \"|Δp|: %1.9f |\"),\n (:Cost, \" F(x): %1.11f | \"), \"\\n\", :Stop, 25],\n stopping_criterion = StopWhenGradientNormLess(1e-14) | StopAfterIteration(400),\n)","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"Initial F(x): 0.32487988924 | \n# 25 |Δp|: 0.459715605 | F(x): 0.15145076374 | \n# 50 |Δp|: 0.000551270 | F(x): 0.15145051509 | \nThe algorithm reached approximately critical point after 73 iterations; the gradient norm (9.988871119384563e-16) is less than 1.0e-14.\n\n3-element Vector{Float64}:\n 0.6868392794788668\n 0.006531600680779286\n 0.7267799820836411","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"We can finally use another way to determine the stepsize, for example the a little more expensive ArmijoLinesearch instead of the default stepsize rule used on the Sphere.","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"m4 = gradient_descent(M, f, grad_f, data[1];\n debug=[:Iteration,(:Change, \"|Δp|: %1.9f |\"),\n (:Cost, \" F(x): %1.11f | \"), \"\\n\", :Stop, 2],\n stepsize = ArmijoLinesearch(; contraction_factor=0.999, sufficient_decrease=0.5),\n stopping_criterion = StopWhenGradientNormLess(1e-14) | StopAfterIteration(400),\n)","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"Initial F(x): 0.32487988924 | \n# 2 |Δp|: 0.001318138 | F(x): 0.15145051509 | \n# 4 |Δp|: 0.000000004 | F(x): 0.15145051509 | \n# 6 |Δp|: 0.000000000 | F(x): 0.15145051509 | \nThe 
algorithm reached approximately critical point after 7 iterations; the gradient norm (5.073696618059386e-15) is less than 1.0e-14.\n\n3-element Vector{Float64}:\n 0.6868392794788669\n 0.006531600680779358\n 0.7267799820836413","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"We reach approximately the same point as in the previous run, but in far fewer steps","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"[f(M, m3)-f(M,m4), distance(M, m3, m4)]","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"2-element Vector{Float64}:\n 1.6653345369377348e-16\n 1.727269835930624e-16","category":"page"},{"location":"tutorials/Optimize/#Using-the-tutorial-mode","page":"🏔️ Get started: optimize.","title":"Using the tutorial mode","text":"","category":"section"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"Since a few things on manifolds are a bit different from (classical) Euclidean optimization, Manopt.jl has a mode that warns about a few pitfalls.","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"It can be set using","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"Manopt.set_parameter!(:Mode, \"Tutorial\")","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"[ Info: Setting the `Manopt.jl` parameter :Mode to Tutorial.","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"to activate these. 
Continuing from the example before, one might argue that the minimizer of f does not depend on the scaling of the function. In theory this is of course also the case on manifolds, but for the optimisation itself there is a caveat. Suppose we define the Riemannian mean without the scaling","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"f2(M, p) = sum(1 / 2 * distance.(Ref(M), Ref(p), data) .^ 2)\ngrad_f2(M, p) = sum(grad_distance.(Ref(M), data, Ref(p)));","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"and consider the norm of the gradient at the starting point","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"norm(M, data[1], grad_f2(M, data[1]))","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"57.47318616893399","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"On the sphere, when we follow a geodesic, we “return” to the start point after length 2π. If we “land” just short of the starting point because the gradient has length just shy of 2π, the line search would take the gradient direction (and not the negative gradient direction) as a start. The line search is still performed, but in this case returns a much too small, maybe even nearly zero step size.","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"In other words, we have to be careful that the arguments used in the optimisation stay “local”.","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"This is also warned for in \"Tutorial\" mode. 
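The wrap-around effect can already be illustrated on the circle: following a geodesic of length just shy of 2π lands almost back at the starting point, so a very long tangent vector points to nearly the same point as a tiny backwards step. A small plain-Python sketch of this (not Manopt.jl code):

```python
import math

def exp_circle(theta, step):
    """Exponential map on the unit circle S^1, parametrized by angle:
    follow the geodesic from angle `theta` for the (signed) length `step`."""
    return (theta + step) % (2 * math.pi)

start = 0.3
# A step of length just shy of 2*pi lands almost back at the start...
near_full_loop = exp_circle(start, 2 * math.pi - 1e-3)
# ...so it is nearly indistinguishable from a tiny *backwards* step.
tiny_back_step = exp_circle(start, -1e-3)
print(abs(near_full_loop - tiny_back_step))  # approximately 0 (up to floating point)
```

This is why a gradient longer than the injectivity radius is dangerous: the step it suggests is geometrically almost the opposite of what its length promises.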
Calling","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"mX = gradient_descent(M, f2, grad_f2, data[1])","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"┌ Warning: At iteration #0\n│ the gradient norm (57.47318616893399) is larger that 1.0 times the injectivity radius 3.141592653589793 at the current iterate.\n└ @ Manopt ~/work/Manopt.jl/Manopt.jl/src/plans/debug.jl:1120\n┌ Warning: Further warnings will be suppressed, use DebugWarnIfGradientNormTooLarge(1.0, :Always) to get all warnings.\n└ @ Manopt ~/work/Manopt.jl/Manopt.jl/src/plans/debug.jl:1124\n\n3-element Vector{Float64}:\n 0.6868392794870684\n 0.006531600674920825\n 0.7267799820759485","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"It seems that just by chance we still obtained nearly the same point as before, but when we look at where this run stops, we see that it takes more steps.","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"gradient_descent(M, f2, grad_f2, data[1], debug=[:Stop]);","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"The algorithm reached approximately critical point after 140 iterations; the gradient norm (6.807380063106406e-9) is less than 1.0e-8.","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"This also illustrates one way to deactivate the hints, namely by overwriting the debug= keyword, which in Tutorial mode contains additional warnings. 
The other option is to globally reset the :Mode back to","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"Manopt.set_parameter!(:Mode, \"\")","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"[ Info: Resetting the `Manopt.jl` parameter :Mode to default.","category":"page"},{"location":"tutorials/Optimize/#Example-2:-computing-the-median-of-symmetric-positive-definite-matrices","page":"🏔️ Get started: optimize.","title":"Example 2: computing the median of symmetric positive definite matrices","text":"","category":"section"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"For the second example let’s consider the manifold of 3×3 symmetric positive definite matrices and again 100 random points","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"N = SymmetricPositiveDefinite(3)\nm = 100\nσ = 0.005\nq = Matrix{Float64}(I, 3, 3)\ndata2 = [exp(N, q, σ * rand(N; vector_at=q)) for i in 1:m];","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"Instead of the mean, let’s consider a non-smooth optimisation task: the median can be generalized to manifolds as the minimiser of the sum of distances, see [Bac14]. 
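In the Euclidean special case this is just the classical median: it minimises the sum of distances to the data points. A quick plain-Python sanity check of that characterisation (illustrative only, not Manopt.jl code):

```python
# Check that the classical median minimises x -> sum_i |x - d_i| on the real line.
data = [1.0, 2.0, 3.5, 4.0, 100.0]

def sum_of_distances(x):
    return sum(abs(x - d) for d in data)

# Evaluate the cost on a fine grid and pick the minimiser.
grid = [i / 100 for i in range(0, 10001)]
best = min(grid, key=sum_of_distances)
print(best)  # 3.5, the classical median of the data
```

Note also how robust this cost is: the outlier 100.0 barely moves the minimiser, in contrast to the mean.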
We define","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"g(N, q) = sum(1 / (2 * m) * distance.(Ref(N), Ref(q), data2))","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"g (generic function with 1 method)","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"Since the function is non-smooth, we cannot use a gradient-based approach. But since for every summand the proximal map is available, we can use the cyclic proximal point algorithm (CPPA). We hence define the vector of proximal maps as","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"proxes_g = Function[(N, λ, q) -> prox_distance(N, λ / m, di, q, 1) for di in data2];","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"Besides looking at some debug prints, we can also easily record these values. Similarly to debug=, record= also accepts Symbols, see list here, to indicate things to record. 
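To make the CPPA idea above concrete: on the real line, the proximal map of x ↦ λ|x − d| moves x towards d by at most λ, and cycling through these proximal maps with a decreasing parameter λ_k converges to the median. A hypothetical plain-Python sketch of this scheme (Manopt.jl's cyclic_proximal_point performs the analogue on manifolds):

```python
def prox_abs(x, lam, d):
    """Proximal map of x -> lam * |x - d| on the real line:
    move x towards d, but by at most lam."""
    step = min(lam, abs(x - d))
    return x + step if d > x else x - step

def cyclic_proximal_point(data, x0, iterations=2000):
    """Cycle through the proximal maps of all summands with a
    decreasing parameter lam_k = 1/k (a classical choice)."""
    x = x0
    for k in range(1, iterations + 1):
        lam = 1.0 / k
        for d in data:
            x = prox_abs(x, lam, d)
    return x

data = [1.0, 2.0, 3.5, 4.0, 100.0]
print(cyclic_proximal_point(data, data[0]))  # close to the median 3.5
```

The slow decay of λ_k is also why CPPA tends to converge slowly, as the cost trace later in this section illustrates.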
We further set return_state to true to obtain not just the (approximate) minimizer.","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"res = cyclic_proximal_point(N, g, proxes_g, data2[1];\n debug=[:Iteration,\" | \",:Change,\" | \",(:Cost, \"F(x): %1.12f\"),\"\\n\", 1000, :Stop,\n ],\n record=[:Iteration, :Change, :Cost, :Iterate],\n return_state=true,\n );","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"Initial | | F(x): 0.005875512856\n# 1000 | Last Change: 0.003704 | F(x): 0.003239019699\n# 2000 | Last Change: 0.000015 | F(x): 0.003238996105\n# 3000 | Last Change: 0.000005 | F(x): 0.003238991748\n# 4000 | Last Change: 0.000002 | F(x): 0.003238990225\n# 5000 | Last Change: 0.000001 | F(x): 0.003238989520\nThe algorithm reached its maximal number of iterations (5000).","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"note: Technical Detail\nThe recording is realised by RecordActions that are (also) executed at every iteration. 
These can also be individually implemented and added to the record= array instead of symbols.","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"First, the computed median can be accessed as","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"median = get_solver_result(res)","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"3×3 Matrix{Float64}:\n 1.0 2.12236e-5 0.000398721\n 2.12236e-5 1.00044 0.000141798\n 0.000398721 0.000141798 1.00041","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"but we can also look at the recorded values. For simplicity (of output), let’s just look at the recorded values at iteration 42","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"get_record(res)[42]","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"(42, 1.0569455860769079e-5, 0.003252547739370045, [0.9998583866917449 0.0002098880312604301 0.0002895445818451581; 0.00020988803126037459 1.0000931572564762 0.0002084371501681892; 0.00028954458184524134 0.0002084371501681892 1.000070920743257])","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"But we can also access the whole series and see that the cost does not decrease that fast; actually, the CPPA may converge relatively slowly. 
For that we can for example access the :Cost that was recorded every :Iterate as well as the (maybe a little boring) :Iteration-number in a semi-log-plot.","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"x = get_record(res, :Iteration, :Iteration)\ny = get_record(res, :Iteration, :Cost)\nusing Plots\nplot(x,y,xaxis=:log, label=\"CPPA Cost\")","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"(Image: )","category":"page"},{"location":"tutorials/Optimize/#Technical-details","page":"🏔️ Get started: optimize.","title":"Technical details","text":"","category":"section"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"This tutorial is cached. It was last run on the following package versions.","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"using Pkg\nPkg.status()","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"Status `~/work/Manopt.jl/Manopt.jl/tutorials/Project.toml`\n [6e4b80f9] BenchmarkTools v1.5.0\n⌅ [5ae59095] Colors v0.12.11\n [31c24e10] Distributions v0.25.113\n [26cc04aa] FiniteDifferences v0.12.32\n [7073ff75] IJulia v1.26.0\n [8ac3fa9e] LRUCache v1.6.1\n [af67fdf4] ManifoldDiff v0.3.13\n [1cead3c2] Manifolds v0.10.7\n [3362f125] ManifoldsBase v0.15.22\n [0fc0a36d] Manopt v0.5.3 `..`\n [91a5bcdd] Plots v1.40.9\n [731186ca] RecursiveArrayTools v3.27.4\nInfo Packages marked with ⌅ have new versions available but compatibility constraints restrict them from upgrading. 
To see why use `status --outdated`","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"using Dates\nnow()","category":"page"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"2024-11-21T20:40:06.134","category":"page"},{"location":"tutorials/Optimize/#Literature","page":"🏔️ Get started: optimize.","title":"Literature","text":"","category":"section"},{"location":"tutorials/Optimize/","page":"🏔️ Get started: optimize.","title":"🏔️ Get started: optimize.","text":"P.-A. Absil, R. Mahony and R. Sepulchre. Optimization Algorithms on Matrix Manifolds (Princeton University Press, 2008), available online at press.princeton.edu/chapters/absil/.\n\n\n\nM. Bačák. Computing medians and means in Hadamard spaces. SIAM Journal on Optimization 24, 1542–1566 (2014), arXiv:1210.2145.\n\n\n\nN. Boumal. An Introduction to Optimization on Smooth Manifolds. First Edition (Cambridge University Press, 2023).\n\n\n\nM. P. do Carmo. Riemannian Geometry. Mathematics: Theory & Applications (Birkhäuser Boston, Inc., Boston, MA, 1992); p. xiv+300.\n\n\n\nH. Karcher. Riemannian center of mass and mollifier smoothing. 
Communications on Pure and Applied Mathematics 30, 509–541 (1977).\n\n\n\n","category":"page"},{"location":"solvers/adaptive-regularization-with-cubics/#Adaptive-regularization-with-cubics","page":"Adaptive Regularization with Cubics","title":"Adaptive regularization with cubics","text":"","category":"section"},{"location":"solvers/adaptive-regularization-with-cubics/","page":"Adaptive Regularization with Cubics","title":"Adaptive Regularization with Cubics","text":"CurrentModule = Manopt","category":"page"},{"location":"solvers/adaptive-regularization-with-cubics/","page":"Adaptive Regularization with Cubics","title":"Adaptive Regularization with Cubics","text":"adaptive_regularization_with_cubics\nadaptive_regularization_with_cubics!","category":"page"},{"location":"solvers/adaptive-regularization-with-cubics/#Manopt.adaptive_regularization_with_cubics","page":"Adaptive Regularization with Cubics","title":"Manopt.adaptive_regularization_with_cubics","text":"adaptive_regularization_with_cubics(M, f, grad_f, Hess_f, p=rand(M); kwargs...)\nadaptive_regularization_with_cubics(M, f, grad_f, p=rand(M); kwargs...)\nadaptive_regularization_with_cubics(M, mho, p=rand(M); kwargs...)\nadaptive_regularization_with_cubics!(M, f, grad_f, Hess_f, p; kwargs...)\nadaptive_regularization_with_cubics!(M, f, grad_f, p; kwargs...)\nadaptive_regularization_with_cubics!(M, mho, p; kwargs...)\n\nSolve an optimization problem on the manifold M by iteratively minimizing\n\nm_k(X) = f(p_k) + X operatornamegrad f(p^(k)) + frac12X operatornameHess f(p^(k))X + fracσ_k3lVert X rVert^3\n\non the tangent space at the current iterate p_k, where X T_p_kmathcal M and σ_k 0 is a regularization parameter.\n\nLet Xp^(k) denote the minimizer of the model m_k and use the model improvement\n\n ρ_k = fracf(p_k) - f(operatornameretr_p_k(X_k))m_k(0) - m_k(X_k) + fracσ_k3lVert X_krVert^3\n\nWith two thresholds η_2 η_1 0 set p_k+1 = operatornameretr_p_k(X_k) if ρ η_1 and reject the candidate otherwise, that 
is, set p_k+1 = p_k.\n\nFurther update the regularization parameter using factors 0 γ_1 1 γ_2 reads\n\nσ_k+1 =\nbegincases\n maxσ_min γ_1σ_k text if ρ geq η_2 text (the model was very successful)\n σ_k text if ρ η_1 η_2)text (the model was successful)\n γ_2σ_k text if ρ η_1text (the model was unsuccessful)\nendcases\n\nFor more details see [ABBC20].\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\ngrad_f: the (Riemannian) gradient operatornamegradf: \\mathcal M → T_{p}\\mathcal M of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place\nHess_f: the (Riemannian) Hessian operatornameHessf: T{p}\\mathcal M → T{p}\\mathcal M of f as a function (M, p, X) -> Y or a function (M, Y, p, X) -> Y computing Y in-place\np: a point on the manifold mathcal M\n\nthe cost f and its gradient and Hessian might also be provided as a ManifoldHessianObjective\n\nKeyword arguments\n\nσ=100.0 / sqrt(manifold_dimension(M): initial regularization parameter\nσmin=1e-10: minimal regularization value σ_min\nη1=0.1: lower model success threshold\nη2=0.9: upper model success threshold\nγ1=0.1: regularization reduction factor (for the success case)\nγ2=2.0: regularization increment factor (for the non-success case)\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). 
Since usually the first argument is the manifold, the modified argument is the second.\ninitial_tangent_vector=zero_vector(M, p): initialize any tangent vector data,\nmaxIterLanczos=200: a shortcut to set the stopping criterion in the sub solver,\nρ_regularization=1e3: a regularization to avoid dividing by zero for small values of cost and model\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions.\nstopping_criterion=StopAfterIteration(40)|StopWhenGradientNormLess(1e-9)|StopWhenAllLanczosVectorsUsed(maxIterLanczos): a functor indicating that the stopping criterion is fulfilled\nsub_kwargs=(;): a named tuple of keyword arguments that are passed to decorate_objective! of the sub solver's objective, the decorate_state! of the sub solver's state, and the sub state constructor itself.\nsub_objective=nothing: a shortcut to modify the objective of the subproblem used within the sub_problem= keyword. By default, this is initialized as an AdaptiveRagularizationWithCubicsModelObjective, which can further be decorated by using the sub_kwargs= keyword.\nsub_state=LanczosState(M, copy(M,p)): a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.\nsub_problem=DefaultManoptProblem(M, sub_objective): specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nIf you provide the ManifoldGradientObjective directly, the evaluation= keyword is ignored. The decorations are still applied to the objective.\n\nIf you activate tutorial mode (cf. is_tutorial_mode), this solver provides additional debug warnings.\n\nOutput\n\nThe obtained approximate minimizer p^*. 
To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/adaptive-regularization-with-cubics/#Manopt.adaptive_regularization_with_cubics!","page":"Adaptive Regularization with Cubics","title":"Manopt.adaptive_regularization_with_cubics!","text":"adaptive_regularization_with_cubics(M, f, grad_f, Hess_f, p=rand(M); kwargs...)\nadaptive_regularization_with_cubics(M, f, grad_f, p=rand(M); kwargs...)\nadaptive_regularization_with_cubics(M, mho, p=rand(M); kwargs...)\nadaptive_regularization_with_cubics!(M, f, grad_f, Hess_f, p; kwargs...)\nadaptive_regularization_with_cubics!(M, f, grad_f, p; kwargs...)\nadaptive_regularization_with_cubics!(M, mho, p; kwargs...)\n\nSolve an optimization problem on the manifold M by iteratively minimizing\n\nm_k(X) = f(p_k) + X operatornamegrad f(p^(k)) + frac12X operatornameHess f(p^(k))X + fracσ_k3lVert X rVert^3\n\non the tangent space at the current iterate p_k, where X T_p_kmathcal M and σ_k 0 is a regularization parameter.\n\nLet Xp^(k) denote the minimizer of the model m_k and use the model improvement\n\n ρ_k = fracf(p_k) - f(operatornameretr_p_k(X_k))m_k(0) - m_k(X_k) + fracσ_k3lVert X_krVert^3\n\nWith two thresholds η_2 η_1 0 set p_k+1 = operatornameretr_p_k(X_k) if ρ η_1 and reject the candidate otherwise, that is, set p_k+1 = p_k.\n\nFurther update the regularization parameter using factors 0 γ_1 1 γ_2 reads\n\nσ_k+1 =\nbegincases\n maxσ_min γ_1σ_k text if ρ geq η_2 text (the model was very successful)\n σ_k text if ρ η_1 η_2)text (the model was successful)\n γ_2σ_k text if ρ η_1text (the model was unsuccessful)\nendcases\n\nFor more details see [ABBC20].\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\ngrad_f: the (Riemannian) gradient operatornamegradf: \\mathcal M → T_{p}\\mathcal M of f as a function (M, p) -> X or a function 
(M, X, p) -> X computing X in-place\nHess_f: the (Riemannian) Hessian operatornameHessf: T{p}\\mathcal M → T{p}\\mathcal M of f as a function (M, p, X) -> Y or a function (M, Y, p, X) -> Y computing Y in-place\np: a point on the manifold mathcal M\n\nthe cost f and its gradient and Hessian might also be provided as a ManifoldHessianObjective\n\nKeyword arguments\n\nσ=100.0 / sqrt(manifold_dimension(M): initial regularization parameter\nσmin=1e-10: minimal regularization value σ_min\nη1=0.1: lower model success threshold\nη2=0.9: upper model success threshold\nγ1=0.1: regularization reduction factor (for the success case)\nγ2=2.0: regularization increment factor (for the non-success case)\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\ninitial_tangent_vector=zero_vector(M, p): initialize any tangent vector data,\nmaxIterLanczos=200: a shortcut to set the stopping criterion in the sub solver,\nρ_regularization=1e3: a regularization to avoid dividing by zero for small values of cost and model\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions:\nstopping_criterion=StopAfterIteration(40)|StopWhenGradientNormLess(1e-9)|StopWhenAllLanczosVectorsUsed(maxIterLanczos): a functor indicating that the stopping criterion is fulfilled\nsub_kwargs=(;): a named tuple of keyword arguments that are passed to decorate_objective! of the sub solvers objective, the decorate_state! 
of the sub solver's state, and the sub state constructor itself.\nsub_objective=nothing: a shortcut to modify the objective of the subproblem used within the sub_problem= keyword. By default, this is initialized as an AdaptiveRagularizationWithCubicsModelObjective, which can further be decorated by using the sub_kwargs= keyword.\nsub_state=LanczosState(M, copy(M,p)): a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.\nsub_problem=DefaultManoptProblem(M, sub_objective): specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nIf you provide the ManifoldGradientObjective directly, the evaluation= keyword is ignored. The decorations are still applied to the objective.\n\nIf you activate tutorial mode (cf. is_tutorial_mode), this solver provides additional debug warnings.\n\nOutput\n\nThe obtained approximate minimizer p^*. 
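The update of the regularization parameter σ described above is a small case distinction on the model improvement ρ. A standalone sketch of this rule (plain Python, illustrative only, using the documented default thresholds and factors):

```python
def update_sigma(rho, sigma, eta1=0.1, eta2=0.9,
                 gamma1=0.1, gamma2=2.0, sigma_min=1e-10):
    """Sketch of the ARC regularization update: shrink sigma after a very
    successful step, keep it after a successful one, increase it otherwise."""
    if rho >= eta2:            # very successful: reduce regularization
        return max(sigma_min, gamma1 * sigma)
    if rho >= eta1:            # successful: keep sigma
        return sigma
    return gamma2 * sigma      # unsuccessful: increase regularization

print(update_sigma(0.95, 1.0))  # 0.1  (very successful step)
print(update_sigma(0.5, 1.0))   # 1.0  (successful step)
print(update_sigma(0.0, 1.0))   # 2.0  (unsuccessful step)
```

The lower bound σ_min keeps the parameter from collapsing to zero after a long run of very successful steps.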
To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/adaptive-regularization-with-cubics/#State","page":"Adaptive Regularization with Cubics","title":"State","text":"","category":"section"},{"location":"solvers/adaptive-regularization-with-cubics/","page":"Adaptive Regularization with Cubics","title":"Adaptive Regularization with Cubics","text":"AdaptiveRegularizationState","category":"page"},{"location":"solvers/adaptive-regularization-with-cubics/#Manopt.AdaptiveRegularizationState","page":"Adaptive Regularization with Cubics","title":"Manopt.AdaptiveRegularizationState","text":"AdaptiveRegularizationState{P,T} <: AbstractHessianSolverState\n\nA state for the adaptive_regularization_with_cubics solver.\n\nFields\n\nη1, η2: bounds for evaluating the regularization parameter\nγ1, γ2: shrinking and expansion factors for regularization parameter σ\nH: the current Hessian evaluation\np::P: a point on the manifold mathcal M storing the current iterate\nq: a point for the candidates to evaluate model and ρ\nX::T: a tangent vector at the point p on the manifold mathcal M storing the gradient at the current iterate\ns: the tangent vector step resulting from minimizing the model problem in the tangent space T_pmathcal M\nσ: the current cubic regularization parameter\nσmin: lower bound for the cubic regularization parameter\nρ_regularization: regularization parameter for computing ρ. When approaching convergence ρ may be difficult to compute with numerator and denominator approaching zero. 
Regularizing the ratio lets ρ go to 1 near convergence.\nρ: the current regularized ratio of actual improvement and model improvement.\nρ_denominator: a value to store the denominator from the computation of ρ to allow for a warning or error when this value is non-positive.\nretraction_method::AbstractRetractionMethod: a retraction operatornameretr to use, see the section on retractions\nstop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled\nsub_problem::Union{AbstractManoptProblem, F}: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state::Union{AbstractManoptSolverState, F}: a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.\n\nConstructor\n\nAdaptiveRegularizationState(M, sub_problem, sub_state; kwargs...)\n\nConstruct the solver state with all fields stated as keyword arguments and the following defaults\n\nKeyword arguments\n\nη1=0.1\nη2=0.9\nγ1=0.1\nγ2=2.0\nσ=100/manifold_dimension(M)\nσmin=1e-7\nρ_regularization=1e3\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). 
Since usually the first argument is the manifold, the modified argument is the second.\np=rand(M): a point on the manifold mathcal M\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstopping_criterion=StopAfterIteration(100): a functor indicating that the stopping criterion is fulfilled\nX=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal M\n\n\n\n\n\n","category":"type"},{"location":"solvers/adaptive-regularization-with-cubics/#Sub-solvers","page":"Adaptive Regularization with Cubics","title":"Sub solvers","text":"","category":"section"},{"location":"solvers/adaptive-regularization-with-cubics/","page":"Adaptive Regularization with Cubics","title":"Adaptive Regularization with Cubics","text":"There are several ways to approach the subsolver. The default is the first one.","category":"page"},{"location":"solvers/adaptive-regularization-with-cubics/#arc-Lanczos","page":"Adaptive Regularization with Cubics","title":"Lanczos iteration","text":"","category":"section"},{"location":"solvers/adaptive-regularization-with-cubics/","page":"Adaptive Regularization with Cubics","title":"Adaptive Regularization with Cubics","text":"Manopt.LanczosState","category":"page"},{"location":"solvers/adaptive-regularization-with-cubics/#Manopt.LanczosState","page":"Adaptive Regularization with Cubics","title":"Manopt.LanczosState","text":"LanczosState{P,T,SC,B,I,R,TM,V,Y} <: AbstractManoptSolverState\n\nSolve the adaptive regularized subproblem with a Lanczos iteration\n\nFields\n\nstop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled\nstop_newton::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled, used for the inner Newton iteration\nσ: the current regularization parameter\nX: the iterate\nLanczos_vectors: the obtained Lanczos vectors\ntridig_matrix: the tridiagonal coefficient matrix T\ncoefficients: the coefficients 
y_1y_k that determine the solution\nHp: a temporary tangent vector containing the evaluation of the Hessian\nHp_residual: a temporary tangent vector containing the residual to the Hessian\nS: the current obtained / approximated solution\n\nConstructor\n\nLanczosState(TpM::TangentSpace; kwargs...)\n\nKeyword arguments\n\nX=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal M as the iterate\nmaxIterLanczos=200: shortcut to set the maximal number of iterations in the stopping_criterion=\nθ=0.5: set the parameter in the StopWhenFirstOrderProgress within the default stopping_criterion=.\nstopping_criterion=StopAfterIteration(maxIterLanczos)|StopWhenFirstOrderProgress(θ): a functor indicating that the stopping criterion is fulfilled\nstopping_criterion_newton=StopAfterIteration(200): a functor indicating that the stopping criterion is fulfilled, used for the inner Newton iteration\nσ=10.0: specify the regularization parameter\n\n\n\n\n\n","category":"type"},{"location":"solvers/adaptive-regularization-with-cubics/#(Conjugate)-gradient-descent","page":"Adaptive Regularization with Cubics","title":"(Conjugate) gradient descent","text":"","category":"section"},{"location":"solvers/adaptive-regularization-with-cubics/","page":"Adaptive Regularization with Cubics","title":"Adaptive Regularization with Cubics","text":"There is a generic objective that implements the sub problem","category":"page"},{"location":"solvers/adaptive-regularization-with-cubics/","page":"Adaptive Regularization with Cubics","title":"Adaptive Regularization with Cubics","text":"AdaptiveRagularizationWithCubicsModelObjective","category":"page"},{"location":"solvers/adaptive-regularization-with-cubics/#Manopt.AdaptiveRagularizationWithCubicsModelObjective","page":"Adaptive Regularization with Cubics","title":"Manopt.AdaptiveRagularizationWithCubicsModelObjective","text":"AdaptiveRagularizationWithCubicsModelObjective\n\nA model for the adaptive regularization with Cubics\n\nm(X) = 
f(p) + operatornamegrad f(p) X _p + frac12 operatornameHess f(p)X X_p\n + fracσ3 lVert X rVert^3\n\ncf. Eq. (33) in [ABBC20]\n\nFields\n\nobjective: an AbstractManifoldHessianObjective providing f, its gradient and Hessian\nσ: the current (cubic) regularization parameter\n\nConstructors\n\nAdaptiveRagularizationWithCubicsModelObjective(mho, σ=1.0)\n\nwith either an AbstractManifoldHessianObjective objective or a decorator containing such an objective.\n\n\n\n\n\n","category":"type"},{"location":"solvers/adaptive-regularization-with-cubics/","page":"Adaptive Regularization with Cubics","title":"Adaptive Regularization with Cubics","text":"Since the sub problem is given on the tangent space, you have to provide","category":"page"},{"location":"solvers/adaptive-regularization-with-cubics/","page":"Adaptive Regularization with Cubics","title":"Adaptive Regularization with Cubics","text":"arc_obj = AdaptiveRagularizationWithCubicsModelObjective(mho, σ)\nsub_problem = DefaultProblem(TangentSpaceAt(M,p), arc_obj)","category":"page"},{"location":"solvers/adaptive-regularization-with-cubics/","page":"Adaptive Regularization with Cubics","title":"Adaptive Regularization with Cubics","text":"where mho is the Hessian objective of f to solve. 
Then use this for the sub_problem keyword and use your favourite gradient based solver for the sub_state keyword, for example a ConjugateGradientDescentState","category":"page"},{"location":"solvers/adaptive-regularization-with-cubics/#Additional-stopping-criteria","page":"Adaptive Regularization with Cubics","title":"Additional stopping criteria","text":"","category":"section"},{"location":"solvers/adaptive-regularization-with-cubics/","page":"Adaptive Regularization with Cubics","title":"Adaptive Regularization with Cubics","text":"StopWhenAllLanczosVectorsUsed\nStopWhenFirstOrderProgress","category":"page"},{"location":"solvers/adaptive-regularization-with-cubics/#Manopt.StopWhenAllLanczosVectorsUsed","page":"Adaptive Regularization with Cubics","title":"Manopt.StopWhenAllLanczosVectorsUsed","text":"StopWhenAllLanczosVectorsUsed <: StoppingCriterion\n\nWhen an inner iteration has used up all Lanczos vectors, then this stopping criterion is a fallback / security stopping criterion to not access a non-existing field in the array allocated for vectors.\n\nNote that this stopping criterion (for now) is only implemented for the case of an AdaptiveRegularizationState using a LanczosState subsolver.\n\nFields\n\nmaxLanczosVectors: maximal number of Lanczos vectors\nat_iteration indicates at which iteration (including i=0) the stopping criterion was fulfilled and is -1 while it is not fulfilled.\n\nConstructor\n\nStopWhenAllLanczosVectorsUsed(maxLanczosVectors::Int)\n\n\n\n\n\n","category":"type"},{"location":"solvers/adaptive-regularization-with-cubics/#Manopt.StopWhenFirstOrderProgress","page":"Adaptive Regularization with Cubics","title":"Manopt.StopWhenFirstOrderProgress","text":"StopWhenFirstOrderProgress <: StoppingCriterion\n\nA stopping criterion related to the Riemannian adaptive regularization with cubics (ARC) solver indicating that the model function at the current (outer) iterate,\n\nm_k(X) = f(p_k) + X operatornamegrad f(p^(k)) + frac12X 
operatornameHess f(p^(k))X + fracσ_k3lVert X rVert^3\n\ndefined on the tangent space T_pmathcal M fulfills at the current iterate X_k that\n\nm(X_k) leq m(0)\nquadtext and quad\nlVert operatornamegrad m(X_k) rVert ≤ θ lVert X_k rVert^2\n\nFields\n\nθ: the factor θ in the second condition\nat_iteration::Int: an integer indicating at which iteration the stopping criterion last indicated to stop, which might also be before the solver started (0). Any negative value indicates that this was not yet the case;\n\nConstructor\n\nStopWhenFirstOrderProgress(θ)\n\n\n\n\n\n","category":"type"},{"location":"solvers/adaptive-regularization-with-cubics/#sec-arc-technical-details","page":"Adaptive Regularization with Cubics","title":"Technical details","text":"","category":"section"},{"location":"solvers/adaptive-regularization-with-cubics/","page":"Adaptive Regularization with Cubics","title":"Adaptive Regularization with Cubics","text":"The adaptive_regularization_with_cubics requires the following functions of a manifold to be available","category":"page"},{"location":"solvers/adaptive-regularization-with-cubics/","page":"Adaptive Regularization with Cubics","title":"Adaptive Regularization with Cubics","text":"A retract!(M, q, p, X); it is recommended to set the default_retraction_method to a favourite retraction. 
If this default is set, a retraction_method= does not have to be specified.\nIf you do not provide an initial regularization parameter σ, a manifold_dimension is required.\nBy default the tangent vector storing the gradient is initialized calling zero_vector(M,p).\ninner(M, p, X, Y) is used within the algorithm step","category":"page"},{"location":"solvers/adaptive-regularization-with-cubics/","page":"Adaptive Regularization with Cubics","title":"Adaptive Regularization with Cubics","text":"Furthermore, within the Lanczos subsolver, generating a random vector (at p) using rand!(M, X; vector_at=p) in place of X is required","category":"page"},{"location":"solvers/adaptive-regularization-with-cubics/#Literature","page":"Adaptive Regularization with Cubics","title":"Literature","text":"","category":"section"},{"location":"solvers/adaptive-regularization-with-cubics/","page":"Adaptive Regularization with Cubics","title":"Adaptive Regularization with Cubics","text":"N. Agarwal, N. Boumal, B. Bullins and C. Cartis. Adaptive regularization with cubics on manifolds. Mathematical Programming (2020).\n\n\n\n","category":"page"},{"location":"solvers/trust_regions/#The-Riemannian-trust-regions-solver","page":"Trust-Regions Solver","title":"The Riemannian trust regions solver","text":"","category":"section"},{"location":"solvers/trust_regions/","page":"Trust-Regions Solver","title":"Trust-Regions Solver","text":"Minimize a function","category":"page"},{"location":"solvers/trust_regions/","page":"Trust-Regions Solver","title":"Trust-Regions Solver","text":"operatorname*argmin_p mathcalM f(p)","category":"page"},{"location":"solvers/trust_regions/","page":"Trust-Regions Solver","title":"Trust-Regions Solver","text":"by using the Riemannian trust-regions solver. Following [ABG06], a model is built by lifting the objective at the kth iterate p_k, locally mapping the cost function f to the tangent space as f_k T_p_kmathcal M ℝ, f_k(X) = f(operatornameretr_p_k(X)). 
The trust region subproblem is then defined as","category":"page"},{"location":"solvers/trust_regions/","page":"Trust-Regions Solver","title":"Trust-Regions Solver","text":"operatorname*argmin_X T_p_kmathcal M m_k(X)","category":"page"},{"location":"solvers/trust_regions/","page":"Trust-Regions Solver","title":"Trust-Regions Solver","text":"where","category":"page"},{"location":"solvers/trust_regions/","page":"Trust-Regions Solver","title":"Trust-Regions Solver","text":"beginalign*\nm_k T_p_kmathcal M ℝ\nm_k(X) = f(p_k) + operatornamegrad f(p_k) X_p_k + frac12 mathcal H_k(X) X_p_k\ntextsuch that lVert X rVert_p_k ≤ Δ_k\nendalign*","category":"page"},{"location":"solvers/trust_regions/","page":"Trust-Regions Solver","title":"Trust-Regions Solver","text":"Here Δ_k is the trust-region radius, which is adapted in every iteration, and mathcal H_k is some symmetric linear operator that approximates the Hessian operatornameHess f of f.","category":"page"},{"location":"solvers/trust_regions/#Interface","page":"Trust-Regions Solver","title":"Interface","text":"","category":"section"},{"location":"solvers/trust_regions/","page":"Trust-Regions Solver","title":"Trust-Regions Solver","text":"trust_regions\ntrust_regions!","category":"page"},{"location":"solvers/trust_regions/#Manopt.trust_regions","page":"Trust-Regions Solver","title":"Manopt.trust_regions","text":"trust_regions(M, f, grad_f, Hess_f, p=rand(M); kwargs...)\ntrust_regions(M, f, grad_f, p=rand(M); kwargs...)\ntrust_regions!(M, f, grad_f, Hess_f, p; kwargs...)\ntrust_regions!(M, f, grad_f, p; kwargs...)\n\nrun the Riemannian trust-regions solver for optimization on manifolds to minimize f, see [ABG06, CGT00].\n\nFor the case that no Hessian is provided, the Hessian is computed using finite differences, see ApproxHessianFiniteDifference. 
For solving the inner trust-region subproblem of finding an update-vector, by default the truncated_conjugate_gradient_descent is used.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\ngrad_f: the (Riemannian) gradient operatornamegrad f mathcal M → T_pmathcal M of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place\nHess_f: the (Riemannian) Hessian operatornameHess f T_pmathcal M → T_pmathcal M of f as a function (M, p, X) -> Y or a function (M, Y, p, X) -> Y computing Y in-place\np: a point on the manifold mathcal M\n\nKeyword arguments\n\nacceptance_rate: accept/reject threshold: if ρ (the performance ratio for the iterate) is at least the acceptance rate ρ', the candidate is accepted. This value should be between 0 and frac14\naugmentation_threshold=0.75: trust-region augmentation threshold: if ρ is larger than this threshold and the subproblem solution lies on the trust region boundary or encountered negative curvature, the radius is extended (augmented)\naugmentation_factor=2.0: trust-region augmentation factor\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\nκ=0.1: the linear convergence target rate of the tCG method truncated_conjugate_gradient_descent, and is used in a stopping criterion therein\nmax_trust_region_radius: the maximum trust-region radius\npreconditioner: a preconditioner for the Hessian H. 
This is either an allocating function (M, p, X) -> Y or an in-place function (M, Y, p, X) -> Y, see evaluation, and by default set to the identity.\nproject!=copyto!: for numerical stability it is possible to project onto the tangent space after every iteration. The function has to work in place of Y, that is (M, Y, p, X) -> Y, where X and Y can be the same memory.\nrandomize=false: indicate whether X is initialised to a random vector or not. This disables preconditioning.\nρ_regularization=1e3: regularize the performance evaluation ρ to avoid numerical inaccuracies.\nreduction_factor=0.25: trust-region reduction factor\nreduction_threshold=0.1: trust-region reduction threshold: if ρ is below this threshold, the trust region radius is reduced by reduction_factor.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstopping_criterion=StopAfterIteration(1000)|StopWhenGradientNormLess(1e-6): a functor indicating that the stopping criterion is fulfilled\nsub_kwargs=(;): a named tuple of keyword arguments that are passed to decorate_objective! of the sub solver's objective, the decorate_state! of the sub solver's state, and the sub state constructor itself.\nsub_stopping_criterion=( see truncated_conjugate_gradient_descent): a functor indicating that the stopping criterion is fulfilled\nsub_problem=DefaultManoptProblem(M,ConstrainedManifoldObjective(subcost, subgrad; evaluation=evaluation)): specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state=QuasiNewtonState: a state to specify the sub solver to use. For a closed form solution, this indicates the type of function. 
where QuasiNewtonLimitedMemoryDirectionUpdate with InverseBFGS is used\nθ=1.0: the superlinear convergence target rate of 1+θ of the tCG-method truncated_conjugate_gradient_descent, and is used in a stopping criterion therein\ntrust_region_radius=injectivity_radius(M) / 4: the initial trust-region radius\n\nFor the case that no Hessian is provided, the Hessian is computed using finite difference, see ApproxHessianFiniteDifference.\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\nSee also\n\ntruncated_conjugate_gradient_descent\n\n\n\n\n\n","category":"function"},{"location":"solvers/trust_regions/#Manopt.trust_regions!","page":"Trust-Regions Solver","title":"Manopt.trust_regions!","text":"trust_regions(M, f, grad_f, Hess_f, p=rand(M); kwargs...)\ntrust_regions(M, f, grad_f, p=rand(M); kwargs...)\ntrust_regions!(M, f, grad_f, Hess_f, p; kwargs...)\ntrust_regions!(M, f, grad_f, p; kwargs...)\n\nrun the Riemannian trust-regions solver for optimization on manifolds to minimize f, see on [ABG06, CGT00].\n\nFor the case that no Hessian is provided, the Hessian is computed using finite differences, see ApproxHessianFiniteDifference. 
For solving the inner trust-region subproblem of finding an update-vector, by default the truncated_conjugate_gradient_descent is used.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\ngrad_f: the (Riemannian) gradient operatornamegrad f mathcal M → T_pmathcal M of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place\nHess_f: the (Riemannian) Hessian operatornameHess f T_pmathcal M → T_pmathcal M of f as a function (M, p, X) -> Y or a function (M, Y, p, X) -> Y computing Y in-place\np: a point on the manifold mathcal M\n\nKeyword arguments\n\nacceptance_rate: accept/reject threshold: if ρ (the performance ratio for the iterate) is at least the acceptance rate ρ', the candidate is accepted. This value should be between 0 and frac14\naugmentation_threshold=0.75: trust-region augmentation threshold: if ρ is larger than this threshold and the subproblem solution lies on the trust region boundary or encountered negative curvature, the radius is extended (augmented)\naugmentation_factor=2.0: trust-region augmentation factor\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\nκ=0.1: the linear convergence target rate of the tCG method truncated_conjugate_gradient_descent, and is used in a stopping criterion therein\nmax_trust_region_radius: the maximum trust-region radius\npreconditioner: a preconditioner for the Hessian H. 
This is either an allocating function (M, p, X) -> Y or an in-place function (M, Y, p, X) -> Y, see evaluation, and by default set to the identity.\nproject!=copyto!: for numerical stability it is possible to project onto the tangent space after every iteration. The function has to work in place of Y, that is (M, Y, p, X) -> Y, where X and Y can be the same memory.\nrandomize=false: indicate whether X is initialised to a random vector or not. This disables preconditioning.\nρ_regularization=1e3: regularize the performance evaluation ρ to avoid numerical inaccuracies.\nreduction_factor=0.25: trust-region reduction factor\nreduction_threshold=0.1: trust-region reduction threshold: if ρ is below this threshold, the trust region radius is reduced by reduction_factor.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstopping_criterion=StopAfterIteration(1000)|StopWhenGradientNormLess(1e-6): a functor indicating that the stopping criterion is fulfilled\nsub_kwargs=(;): a named tuple of keyword arguments that are passed to decorate_objective! of the sub solver's objective, the decorate_state! of the sub solver's state, and the sub state constructor itself.\nsub_stopping_criterion=( see truncated_conjugate_gradient_descent): a functor indicating that the stopping criterion is fulfilled\nsub_problem=DefaultManoptProblem(M,ConstrainedManifoldObjective(subcost, subgrad; evaluation=evaluation)): specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state=QuasiNewtonState: a state to specify the sub solver to use. For a closed form solution, this indicates the type of function. 
where QuasiNewtonLimitedMemoryDirectionUpdate with InverseBFGS is used\nθ=1.0: the superlinear convergence target rate of 1+θ of the tCG-method truncated_conjugate_gradient_descent, and is used in a stopping criterion therein\ntrust_region_radius=injectivity_radius(M) / 4: the initial trust-region radius\n\nFor the case that no Hessian is provided, the Hessian is computed using finite difference, see ApproxHessianFiniteDifference.\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\nSee also\n\ntruncated_conjugate_gradient_descent\n\n\n\n\n\n","category":"function"},{"location":"solvers/trust_regions/#State","page":"Trust-Regions Solver","title":"State","text":"","category":"section"},{"location":"solvers/trust_regions/","page":"Trust-Regions Solver","title":"Trust-Regions Solver","text":"TrustRegionsState","category":"page"},{"location":"solvers/trust_regions/#Manopt.TrustRegionsState","page":"Trust-Regions Solver","title":"Manopt.TrustRegionsState","text":"TrustRegionsState <: AbstractHessianSolverState\n\nStore the state of the trust-regions solver.\n\nFields\n\nacceptance_rate: a lower bound of the performance ratio for the iterate that decides if the iteration is accepted or not.\nHX, HY, HZ: interim storage (to avoid allocation) of `\\operatorname{Hess} f(p)[⋅] of X, Y, Z\nmax_trust_region_radius: the maximum trust-region radius\np::P: a point on the manifold mathcal Mstoring the current iterate\nproject!: for numerical stability it is possible to project onto the tangent space after every iteration. 
the function has to work inplace of Y, that is (M, Y, p, X) -> Y, where X and Y can be the same memory.\nstop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled\nrandomize: indicate whether X is initialised to a random vector or not\nρ_regularization: regularize the model fitness ρ to avoid division by zero\nsub_problem::Union{AbstractManoptProblem, F}: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state::Union{AbstractManoptProblem, F}: a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.\nσ: Gaussian standard deviation when creating the random initial tangent vector This field has no effect, when randomize is false.\ntrust_region_radius: the trust-region radius\nX::T: a tangent vector at the point p on the manifold mathcal M\nY: the solution (tangent vector) of the subsolver\nZ: the Cauchy point (only used if random is activated)\n\nConstructors\n\nTrustRegionsState(M, mho::AbstractManifoldHessianObjective; kwargs...)\nTrustRegionsState(M, sub_problem, sub_state; kwargs...)\nTrustRegionsState(M, sub_problem; evaluation=AllocatingEvaluation(), kwargs...)\n\ncreate a trust region state.\n\ngiven a AbstractManifoldHessianObjective mho, the default sub solver, a TruncatedConjugateGradientState with mho used to define the problem on a tangent space is created\ngiven a sub_problem and an evaluation= keyword, the sub problem solver is assumed to be the closed form solution, where evaluation determines how to call the sub function.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nsub_problem: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state: a state to specify the sub solver to use. 
For a closed form solution, this indicates the type of function.\n\nKeyword arguments\n\nacceptance_rate=0.1\nmax_trust_region_radius=sqrt(manifold_dimension(M))\np=rand(M): a point on the manifold mathcal Mto specify the initial value\nproject!=copyto!\nstopping_criterion=StopAfterIteration(1000)|StopWhenGradientNormLess(1e-6): a functor indicating that the stopping criterion is fulfilled\nrandomize=false\nρ_regularization=10000.0\nθ=1.0\ntrust_region_radius=max_trust_region_radius / 8\nX=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal Mto specify the representation of a tangent vector\n\nSee also\n\ntrust_regions\n\n\n\n\n\n","category":"type"},{"location":"solvers/trust_regions/#Approximation-of-the-Hessian","page":"Trust-Regions Solver","title":"Approximation of the Hessian","text":"","category":"section"},{"location":"solvers/trust_regions/","page":"Trust-Regions Solver","title":"Trust-Regions Solver","text":"Several different methods to approximate the Hessian are available.","category":"page"},{"location":"solvers/trust_regions/","page":"Trust-Regions Solver","title":"Trust-Regions Solver","text":"ApproxHessianFiniteDifference\nApproxHessianSymmetricRankOne\nApproxHessianBFGS","category":"page"},{"location":"solvers/trust_regions/#Manopt.ApproxHessianFiniteDifference","page":"Trust-Regions Solver","title":"Manopt.ApproxHessianFiniteDifference","text":"ApproxHessianFiniteDifference{E, P, T, G, RTR, VTR, R <: Real} <: AbstractApproxHessian\n\nA functor to approximate the Hessian by a finite difference of gradient evaluation.\n\nGiven a point p and a direction X and the gradient operatornamegrad f(p) of a function f the Hessian is approximated as follows: let c be a stepsize, X T_pmathcal M a tangent vector and q = operatornameretr_p(fracclVert X rVert_pX) be a step in direction X of length c following a retraction Then the Hessian is approximated by the finite difference of the gradients, where mathcal T_ is a vector 
transport.\n\noperatornameHessf(p)X \nfraclVert X rVert_pcBigl(\n mathcal T_pgets qbigr(operatornamegradf(q)bigl) - operatornamegradf(p)\nBigl)\n\nFields\n\ngradient!!: the gradient function (either allocating or mutating, see evaluation parameter)\nstep_length: a step length for the finite difference\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\nInternal temporary fields\n\ngrad_tmp: a temporary storage for the gradient at the current p\ngrad_dir_tmp: a temporary storage for the gradient at the current p_dir\np_dir::P: a temporary storage to the forward direction (or the q in the formula)\n\nConstructor\n\nApproximateFiniteDifference(M, p, grad_f; kwargs...)\n\nKeyword arguments\n\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). 
Since usually the first argument is the manifold, the modified argument is the second.\nsteplength=2^{-14}: a step length c to approximate the gradient evaluations\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\n\n\n\n\n","category":"type"},{"location":"solvers/trust_regions/#Manopt.ApproxHessianSymmetricRankOne","page":"Trust-Regions Solver","title":"Manopt.ApproxHessianSymmetricRankOne","text":"ApproxHessianSymmetricRankOne{E, P, G, T, B<:AbstractBasis{ℝ}, VTR, R<:Real} <: AbstractApproxHessian\n\nA functor to approximate the Hessian by the symmetric rank one update.\n\nFields\n\ngradient!!: the gradient function (either allocating or mutating, see evaluation parameter).\nν: a small real number to ensure that the denominator in the update does not become too small and thus the method does not break down.\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports.\n\nInternal temporary fields\n\np_tmp: a temporary storage for the current point p.\ngrad_tmp: a temporary storage for the gradient at the current p.\nmatrix: a temporary storage for the matrix representation of the approximating operator.\nbasis: a temporary storage for an orthonormal basis at the current p.\n\nConstructor\n\nApproxHessianSymmetricRankOne(M, p, gradF; kwargs...)\n\nKeyword arguments\n\ninitial_operator (Matrix{Float64}(I, manifold_dimension(M), manifold_dimension(M))) the matrix representation of the initial approximating operator.\nbasis (DefaultOrthonormalBasis()) an orthonormal basis in the tangent space of the initial iterate p.\nnu (-1)\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by 
allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\n\n\n\n\n","category":"type"},{"location":"solvers/trust_regions/#Manopt.ApproxHessianBFGS","page":"Trust-Regions Solver","title":"Manopt.ApproxHessianBFGS","text":"ApproxHessianBFGS{E, P, G, T, B<:AbstractBasis{ℝ}, VTR, R<:Real} <: AbstractApproxHessian\n\nA functor to approximate the Hessian by the BFGS update.\n\nFields\n\ngradient!! the gradient function (either allocating or mutating, see evaluation parameter).\nscale\nvector_transport_method::AbstractVectorTransportMethodP: a vector transport mathcal T_ to use, see the section on vector transports\n\nInternal temporary fields\n\np_tmp a temporary storage the current point p.\ngrad_tmp a temporary storage for the gradient at the current p.\nmatrix a temporary storage for the matrix representation of the approximating operator.\nbasis a temporary storage for an orthonormal basis at the current p.\n\nConstructor\n\nApproxHessianBFGS(M, p, gradF; kwargs...)\n\nKeyword arguments\n\ninitial_operator (Matrix{Float64}(I, manifold_dimension(M), manifold_dimension(M))) the matrix representation of the initial approximating operator.\nbasis (DefaultOrthonormalBasis()) an orthonormal basis in the tangent space of the initial iterate p.\nnu (-1)\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). 
Since usually the first argument is the manifold, the modified argument is the second.\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\n\n\n\n\n","category":"type"},{"location":"solvers/trust_regions/","page":"Trust-Regions Solver","title":"Trust-Regions Solver","text":"as well as their (non-exported) common supertype","category":"page"},{"location":"solvers/trust_regions/","page":"Trust-Regions Solver","title":"Trust-Regions Solver","text":"Manopt.AbstractApproxHessian","category":"page"},{"location":"solvers/trust_regions/#Manopt.AbstractApproxHessian","page":"Trust-Regions Solver","title":"Manopt.AbstractApproxHessian","text":"AbstractApproxHessian <: Function\n\nAn abstract supertype for approximate Hessian functions, declares them also to be functions.\n\n\n\n\n\n","category":"type"},{"location":"solvers/trust_regions/#sec-tr-technical-details","page":"Trust-Regions Solver","title":"Technical details","text":"","category":"section"},{"location":"solvers/trust_regions/","page":"Trust-Regions Solver","title":"Trust-Regions Solver","text":"The trust_regions solver requires the following functions of a manifold to be available","category":"page"},{"location":"solvers/trust_regions/","page":"Trust-Regions Solver","title":"Trust-Regions Solver","text":"A retract!(M, q, p, X); it is recommended to set the default_retraction_method to a favourite retraction. 
If this default is set, a retraction_method= does not have to be specified.\nBy default the stopping criterion uses the norm of the gradient to stop when it is small; if you implemented inner, the norm is provided already.\nIf you do not provide an initial max_trust_region_radius, a manifold_dimension is required.\nA copyto!(M, q, p) and copy(M, p) for points.\nBy default the tangent vectors are initialized calling zero_vector(M,p).","category":"page"},{"location":"solvers/trust_regions/#Literature","page":"Trust-Regions Solver","title":"Literature","text":"","category":"section"},{"location":"solvers/trust_regions/","page":"Trust-Regions Solver","title":"Trust-Regions Solver","text":"P.-A. Absil, C. Baker and K. Gallivan. Trust-Region Methods on Riemannian Manifolds. Foundations of Computational Mathematics 7, 303–330 (2006).\n\n\n\nA. R. Conn, N. I. Gould and P. L. Toint. Trust Region Methods (Society for Industrial and Applied Mathematics, 2000).\n\n\n\n","category":"page"},{"location":"plans/debug/#sec-debug","page":"Debug Output","title":"Debug output","text":"","category":"section"},{"location":"plans/debug/","page":"Debug Output","title":"Debug Output","text":"CurrentModule = Manopt","category":"page"},{"location":"plans/debug/","page":"Debug Output","title":"Debug Output","text":"Debug output can easily be added to any solver run. On the high level interfaces, like gradient_descent, you can just use the debug= keyword.","category":"page"},{"location":"plans/debug/","page":"Debug Output","title":"Debug Output","text":"Modules = [Manopt]\nPages = [\"plans/debug.jl\"]\nOrder = [:type, :function]\nPrivate = true","category":"page"},{"location":"plans/debug/#Manopt.DebugAction","page":"Debug Output","title":"Manopt.DebugAction","text":"DebugAction\n\nA DebugAction is a small functor to print/issue debug output. 
The usual call is given by (p::AbstractManoptProblem, s::AbstractManoptSolverState, k) -> s, where k is the current iteration.\n\nBy convention k=0 is interpreted as \"For Initialization only\": only debug info that prints initialization reacts; k<0 triggers updates of variables internally but does not trigger any output.\n\nFields (assumed by subtypes to exist)\n\nprint method to perform the actual print. Can for example be set to a file export,\n\nor to @info. The default is the print function on the default Base.stdout.\n\n\n\n\n\n","category":"type"},{"location":"plans/debug/#Manopt.DebugChange","page":"Debug Output","title":"Manopt.DebugChange","text":"DebugChange(M=DefaultManifold(); kwargs...)\n\ndebug for the amount of change of the iterate (stored in get_iterate(o) of the AbstractManoptSolverState) during the last iteration. See DebugEntryChange for the general case.\n\nKeyword parameters\n\nstorage=StoreStateAction( [:Gradient] ) storage of the previous action\nprefix=\"Last Change:\": prefix of the debug output (ignored if you set format)\nio=stdout: default stream to print the debug to.\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses; this is the inverse retraction used for approximating the distance.\n\n\n\n\n\n","category":"type"},{"location":"plans/debug/#Manopt.DebugCost","page":"Debug Output","title":"Manopt.DebugCost","text":"DebugCost <: DebugAction\n\nprint the current cost function value, see get_cost.\n\nConstructors\n\nDebugCost()\n\nParameters\n\nformat=\"$prefix %f\": format to print the output\nio=stdout: default stream to print the debug to.\nlong=false: short form to set the format to either f(x): (default) or current cost:, followed by the cost\n\n\n\n\n\n","category":"type"},{"location":"plans/debug/#Manopt.DebugDivider","page":"Debug Output","title":"Manopt.DebugDivider","text":"DebugDivider <: DebugAction\n\nprint a small divider 
(default \" | \").\n\nConstructor\n\nDebugDivider(div,print)\n\n\n\n\n\n","category":"type"},{"location":"plans/debug/#Manopt.DebugEntry","page":"Debug Output","title":"Manopt.DebugEntry","text":"DebugEntry <: DebugAction\n\nprint a certain field's entry during the iterations, where a format can be specified how to print the entry.\n\nAdditional fields\n\nfield: symbol the entry can be accessed with within AbstractManoptSolverState\n\nConstructor\n\nDebugEntry(f; prefix=\"$f:\", format = \"$prefix %s\", io=stdout)\n\n\n\n\n\n","category":"type"},{"location":"plans/debug/#Manopt.DebugEntryChange","page":"Debug Output","title":"Manopt.DebugEntryChange","text":"DebugEntryChange{T} <: DebugAction\n\nprint a certain entry's change during the iterations\n\nAdditional fields\n\nprint: function to print the result\nprefix: prefix to the print out\nformat: format to print (uses the prefix by default and scientific notation)\nfield: Symbol the field can be accessed with within AbstractManoptSolverState\ndistance: function (p,o,x1,x2) to compute the change/distance between two values of the entry\nstorage: a StoreStateAction to store the previous value of :f\n\nConstructors\n\nDebugEntryChange(f,d)\n\nKeyword arguments\n\nio=stdout: an IOStream used for the debug\nprefix=\"Change of $f\": the prefix\nstorage=StoreStateAction((f,)): a StoreStateAction\ninitial_value=NaN: an initial value for the change of o.field.\nformat=\"$prefix %e\": format to print the change\n\n\n\n\n\n","category":"type"},{"location":"plans/debug/#Manopt.DebugEvery","page":"Debug Output","title":"Manopt.DebugEvery","text":"DebugEvery <: DebugAction\n\nevaluate and print debug only every kth iteration. Otherwise no print is performed. Whether internal variables are updated is determined by always_update.\n\nThis method does not perform any print itself but relies on its children's print.\n\nIt also sets the subsolver's active parameter, see DebugWhenActive. 
Here, the activation_offset can be used to specify which iteration the debug refers to: if this call happens before the iteration, the offset should be 0; if it is called after an iteration, it has to be set to 1 so it refers to the next iteration. Since debug usually happens after the iteration, 1 is the default.\n\nConstructor\n\nDebugEvery(d::DebugAction, every=1, always_update=true, activation_offset=1)\n\n\n\n\n\n","category":"type"},{"location":"plans/debug/#Manopt.DebugFeasibility","page":"Debug Output","title":"Manopt.DebugFeasibility","text":"DebugFeasibility <: DebugAction\n\nDisplay information about the feasibility of the current iterate\n\nFields\n\natol: absolute tolerance for when either equality or inequality constraints are counted as violated\nformat: a vector of symbols and strings formatting the output\nio: default stream to print the debug to.\n\nThe following symbols are filled with values\n\n:Feasible display true or false depending on whether the iterate is feasible\n:FeasibleEq display = or ≠ depending on whether the equality constraints are fulfilled or not\n:FeasibleInEq display ≤ or ≰ depending on whether the inequality constraints are fulfilled or not\n:NumEq display the number of infeasible equality constraints\n:NumEqNz display the number of infeasible equality constraints if nonzero\n:NumIneq display the number of infeasible inequality constraints\n:NumIneqNz display the number of infeasible inequality constraints if nonzero\n:TotalEq display the sum of how much the equality constraints are violated\n:TotalInEq display the sum of how much the inequality constraints are violated\n\nConstructor\n\nDebugFeasibility( format=[\"feasible: \", :Feasible]; io::IO=stdout, atol=1e-13 )\n\n\n\n\n\n","category":"type"},{"location":"plans/debug/#Manopt.DebugGradientChange","page":"Debug Output","title":"Manopt.DebugGradientChange","text":"DebugGradientChange()\n\ndebug for the amount of change of the gradient (stored in get_gradient(o) of the 
AbstractManoptSolverState o) during the last iteration. See DebugEntryChange for the general case.\n\nKeyword parameters\n\nstorage=StoreStateAction( (:Gradient,) ): storage of the action for previous data\nprefix=\"Last Change:\": prefix of the debug output (ignored if you set format)\nio=stdout: default stream to print the debug to.\nformat=\"$prefix %f\": format to print the output\n\n\n\n\n\n","category":"type"},{"location":"plans/debug/#Manopt.DebugGroup","page":"Debug Output","title":"Manopt.DebugGroup","text":"DebugGroup <: DebugAction\n\ngroup a set of DebugActions into one action, where the internal prints are removed by default and the resulting strings are concatenated\n\nConstructor\n\nDebugGroup(g)\n\nconstruct a group consisting of an Array of DebugActions g, that are evaluated en bloc; the method does not perform any print itself, but relies on the internal prints. It still concatenates the result and returns the complete string\n\n\n\n\n\n","category":"type"},{"location":"plans/debug/#Manopt.DebugIfEntry","page":"Debug Output","title":"Manopt.DebugIfEntry","text":"DebugIfEntry <: DebugAction\n\nIssue a warning, info, or error if a certain field does not pass the check.\n\nThe message is printed in this case. If it contains a @printf argument identifier, that one is filled with the value of the field. 
That way you can print the value in this case as well.\n\nFields\n\nio: an IO stream\ncheck: a function that takes the value of the field as input and returns a boolean\nfield: symbol the entry can be accessed with within AbstractManoptSolverState\nmsg: if the check fails, this message is displayed\ntype: symbol specifying the type of display, possible values :print, :warn, :info, :error, where :print prints to io.\n\nConstructor\n\nDebugIfEntry(field, check=(>(0)); type=:warn, message=\":$f is nonnegative\", io=stdout)\n\n\n\n\n\n","category":"type"},{"location":"plans/debug/#Manopt.DebugIterate","page":"Debug Output","title":"Manopt.DebugIterate","text":"DebugIterate <: DebugAction\n\ndebug for the current iterate (stored in get_iterate(o)).\n\nConstructor\n\nDebugIterate(; kwargs...)\n\nKeyword arguments\n\nio=stdout: default stream to print the debug to.\nformat=\"$prefix %s\": format how to print the current iterate\nlong=false: whether to have a long (\"current iterate:\") or a short (\"p:\") prefix by default\nprefix: (see long for default) set a prefix to be printed before the iterate\n\n\n\n\n\n","category":"type"},{"location":"plans/debug/#Manopt.DebugIteration","page":"Debug Output","title":"Manopt.DebugIteration","text":"DebugIteration <: DebugAction\n\nConstructor\n\nDebugIteration()\n\nKeyword parameters\n\nformat=\"# %-6d\": format to print the output\nio=stdout: default stream to print the debug to.\n\ndebug for the current iteration (prefixed with # by default)\n\n\n\n\n\n","category":"type"},{"location":"plans/debug/#Manopt.DebugMessages","page":"Debug Output","title":"Manopt.DebugMessages","text":"DebugMessages <: DebugAction\n\nAn AbstractManoptSolverState or one of its sub steps like a Stepsize might generate warnings throughout their computations. 
This debug can be used to :Print them, display them as :Info or :Warning, or even raise them as an :Error, depending on the message type.\n\nConstructor\n\nDebugMessages(mode=:Info, warn=:Once; io::IO=stdout)\n\nInitialize the messages debug to a certain mode. Available modes are\n\n:Error: issue the messages as an error and hence stop at any issue occurring\n:Info: issue the messages as an @info\n:Print: print messages to the stream io.\n:Warning: issue the messages as a warning\n\nThe warn level can be set to :Once to display only the first message, to :Always to report every message, or to :No to deactivate this, in which case this DebugAction is inactive. All other symbols are handled as if they were :Always.\n\n\n\n\n\n","category":"type"},{"location":"plans/debug/#Manopt.DebugSolverState","page":"Debug Output","title":"Manopt.DebugSolverState","text":"DebugSolverState <: AbstractManoptSolverState\n\nThe debug state appends debug to any state, acting as a decorator pattern. Internally a dictionary is kept that stores a DebugAction for several occasions using a Symbol as reference.\n\nThe original options can still be accessed using the get_state function.\n\nFields\n\noptions: the options that are extended by debug information\ndebugDictionary: a Dict{Symbol,DebugAction} to keep track of Debug for different actions\n\nConstructors\n\nDebugSolverState(o,dA)\n\nconstruct debug decorated options, where dA can be\n\na DebugAction, then it is stored within the dictionary at :Iteration\nan Array of DebugActions.\na Dict{Symbol,DebugAction}.\nan Array of Symbols, Strings and an Int for the DebugFactory\n\n\n\n\n\n","category":"type"},{"location":"plans/debug/#Manopt.DebugStoppingCriterion","page":"Debug Output","title":"Manopt.DebugStoppingCriterion","text":"DebugStoppingCriterion <: DebugAction\n\nprint the Reason provided by the stopping criterion. 
Usually this should be empty, unless the algorithm stops.\n\nFields\n\nprefix=\"\": prefix to print before the output\nio=stdout: default stream to print the debug to.\n\nConstructor\n\nDebugStoppingCriterion(prefix = \"\"; io::IO=stdout)\n\n\n\n\n\n","category":"type"},{"location":"plans/debug/#Manopt.DebugTime","page":"Debug Output","title":"Manopt.DebugTime","text":"DebugTime()\n\nMeasure time and print the intervals. Using start=true you can start the timer on construction, for example to measure the overall runtime of an algorithm.\n\nThe measured time is rounded using the given time_accuracy and printed after canonicalization.\n\nKeyword parameters\n\nio=stdout: default stream to print the debug to.\nformat=\"$prefix %s\": format to print the output, where %s is the canonicalized time.\nmode=:cumulative: whether to display the total time or reset on every call using :iterative.\nprefix=\"Last Change:\": prefix of the debug output (ignored if you set format)\nstart=false: indicate whether to start the timer on creation or not. Otherwise it might only be started on first call.\ntime_accuracy=Millisecond(1): round the time to this period before printing the canonicalized time\n\n\n\n\n\n","category":"type"},{"location":"plans/debug/#Manopt.DebugWarnIfCostIncreases","page":"Debug Output","title":"Manopt.DebugWarnIfCostIncreases","text":"DebugWarnIfCostIncreases <: DebugAction\n\nprint a warning if the cost increases.\n\nNote that this provides an additional warning for gradient descent with its default constant step size.\n\nConstructor\n\nDebugWarnIfCostIncreases(warn=:Once; tol=1e-13)\n\nInitialize the warning to warning level (:Once) and introduce a tolerance for the test of 1e-13.\n\nThe warn level can be set to :Once to only warn the first time the cost increases, to :Always to report an increase every time it happens, and it can be set to :No to deactivate the warning, in which case this DebugAction is inactive. 
All other symbols are handled as if they were :Always.\n\n\n\n\n\n","category":"type"},{"location":"plans/debug/#Manopt.DebugWarnIfCostNotFinite","page":"Debug Output","title":"Manopt.DebugWarnIfCostNotFinite","text":"DebugWarnIfCostNotFinite <: DebugAction\n\nA debug to see when the cost is not finite, for example Inf or NaN.\n\nConstructor\n\nDebugWarnIfCostNotFinite(warn=:Once)\n\nInitialize the warning to warn :Once.\n\nThis can be set to :Once to only warn the first time the cost is not finite. It can also be set to :No to deactivate the warning, but this renders the Action useless. All other symbols are handled as if they were :Always.\n\n\n\n\n\n","category":"type"},{"location":"plans/debug/#Manopt.DebugWarnIfFieldNotFinite","page":"Debug Output","title":"Manopt.DebugWarnIfFieldNotFinite","text":"DebugWarnIfFieldNotFinite <: DebugAction\n\nA debug to see when a field from the options is not finite, for example Inf or NaN\n\nConstructor\n\nDebugWarnIfFieldNotFinite(field::Symbol, warn=:Once)\n\nInitialize the warning to warn :Once.\n\nThis can be set to :Once to only warn the first time the field is not finite. It can also be set to :No to deactivate the warning, but this renders the Action useless. 
All other symbols are handled as if they were :Always.\n\nExample\n\nDebugWarnIfFieldNotFinite(:Gradient)\n\nCreates a DebugAction to warn whenever the gradient contains NaN or Inf values.\n\n\n\n\n\n","category":"type"},{"location":"plans/debug/#Manopt.DebugWarnIfGradientNormTooLarge","page":"Debug Output","title":"Manopt.DebugWarnIfGradientNormTooLarge","text":"DebugWarnIfGradientNormTooLarge{T} <: DebugAction\n\nA debug to warn when an evaluated gradient at the current iterate is larger than (a factor times) the maximal (recommended) stepsize at the current iterate.\n\nConstructor\n\nDebugWarnIfGradientNormTooLarge(factor::T=1.0, warn=:Once)\n\nInitialize the warning to warn :Once.\n\nThis can be set to :Once to only warn the first time the gradient norm is too large. It can also be set to :No to deactivate the warning, but this renders the Action useless. All other symbols are handled as if they were :Always.\n\nExample\n\nDebugWarnIfGradientNormTooLarge(1.0)\n\nCreates a DebugAction to warn whenever the gradient norm exceeds the maximal (recommended) stepsize at the current iterate.\n\n\n\n\n\n","category":"type"},{"location":"plans/debug/#Manopt.DebugWhenActive","page":"Debug Output","title":"Manopt.DebugWhenActive","text":"DebugWhenActive <: DebugAction\n\nevaluate and print debug only if the active boolean is set. 
This can be set from outside and is for example triggered by DebugEvery on debugs on the subsolver.\n\nThis method does not perform any print itself but relies on its children's prints.\n\nFor now, the main interaction is with DebugEvery which might activate or deactivate this debug\n\nFields\n\nactive: a boolean that can be (de)activated from outside to turn debug on/off\nalways_update: whether or not to call the inner debugs with iteration <=0 even in inactive state, to update internal values\n\nConstructor\n\nDebugWhenActive(d::DebugAction, active=true, always_update=true)\n\n\n\n\n\n","category":"type"},{"location":"plans/debug/#Manopt.DebugActionFactory-Tuple{String}","page":"Debug Output","title":"Manopt.DebugActionFactory","text":"DebugActionFactory(s)\n\ncreate a DebugAction where\n\na String yields the corresponding divider\na DebugAction is passed through\na Symbol creates a DebugEntry of that symbol, with the exceptions of :Change, :Iterate, :Iteration, and :Cost.\na Tuple{Symbol,String} creates a DebugEntry of that symbol where the String specifies the format.\n\n\n\n\n\n","category":"method"},{"location":"plans/debug/#Manopt.DebugActionFactory-Tuple{Symbol}","page":"Debug Output","title":"Manopt.DebugActionFactory","text":"DebugActionFactory(s::Symbol)\n\nConvert certain Symbols in the debug=[ ... ] vector to DebugActions. Currently the following ones are done. 
Note that the Shortcut symbols should all start with a capital letter.\n\n:Cost creates a DebugCost\n:Change creates a DebugChange\n:Gradient creates a DebugGradient\n:GradientChange creates a DebugGradientChange\n:GradientNorm creates a DebugGradientNorm\n:Iterate creates a DebugIterate\n:Iteration creates a DebugIteration\n:IterativeTime creates a DebugTime(:Iterative)\n:Stepsize creates a DebugStepsize\n:Stop creates a DebugStoppingCriterion\n:WarnCost creates a DebugWarnIfCostNotFinite\n:WarnGradient creates a DebugWarnIfFieldNotFinite for the :Gradient.\n:WarnBundle creates a DebugWarnIfLagrangeMultiplierIncreases\n:Time creates a DebugTime\n:WarningMessages creates a DebugMessages(:Warning)\n:InfoMessages creates a DebugMessages(:Info)\n:ErrorMessages creates a DebugMessages(:Error)\n:Messages creates a DebugMessages() (the same as :InfoMessages)\n\nany other symbol creates a DebugEntry(s) to print the entry (o.:s) from the options.\n\n\n\n\n\n","category":"method"},{"location":"plans/debug/#Manopt.DebugActionFactory-Tuple{Tuple{Symbol, Any}}","page":"Debug Output","title":"Manopt.DebugActionFactory","text":"DebugActionFactory(t::Tuple{Symbol,String})\n\nConvert certain Symbols in the debug=[ ... ] vector to DebugActions. Currently the following ones are done, where the string in t[2] is passed as the format of the corresponding debug. 
Note that the Shortcut symbols t[1] should all start with a capital letter.\n\n:Change creates a DebugChange\n:Cost creates a DebugCost\n:Gradient creates a DebugGradient\n:GradientChange creates a DebugGradientChange\n:GradientNorm creates a DebugGradientNorm\n:Iterate creates a DebugIterate\n:Iteration creates a DebugIteration\n:Stepsize creates a DebugStepsize\n:Stop creates a DebugStoppingCriterion\n:Time creates a DebugTime\n:IterativeTime creates a DebugTime(:Iterative)\n\nany other symbol creates a DebugEntry(s) to print the entry (o.:s) from the options.\n\n\n\n\n\n","category":"method"},{"location":"plans/debug/#Manopt.DebugFactory-Tuple{Vector}","page":"Debug Output","title":"Manopt.DebugFactory","text":"DebugFactory(a::Vector)\n\nGenerate a dictionary of DebugActions.\n\nFirst, all Symbols, Strings, DebugActions and numbers are collected, excluding :Stop and :WhenActive. This collected vector is added to the :Iteration => [...] pair. :Stop is added as :StoppingCriterion to the :Stop => [...] pair. If necessary, these pairs are created.\n\nFor each Pair of a Symbol and a Vector, the DebugGroupFactory is called for the Vector and the result is added to the debug dictionary's entry with said symbol. This is wrapped into a DebugWhenActive when the :WhenActive symbol is present\n\nReturn value\n\nA dictionary for the different entry points where debug can happen, each containing a DebugAction to call.\n\nNote that upon the initialisation all dictionaries but the :StartAlgorithm one are called with k=0 for reset.\n\nExamples\n\nProviding a simple vector of symbols, numbers and strings like\n[:Iterate, \" | \", :Cost, :Stop, 10]\nAdds a group to :Iteration of three actions (DebugIterate, DebugDivider(\" | \"), and DebugCost) as a DebugGroup inside a DebugEvery to only be executed every 10th iteration. 
It also adds the DebugStoppingCriterion to the :Stop entry of the dictionary.\nThe same can also be written a bit more precisely as\nDebugFactory([:Iteration => [:Iterate, \" | \", :Cost, 10], :Stop])\nWe can even make the stopping criterion concrete and pass actions directly; making the stop more concrete, we get\nDebugFactory([:Iteration => [:Iterate, \" | \", DebugCost(), 10], :Stop => [:Stop]])\n\n\n\n\n\n","category":"method"},{"location":"plans/debug/#Manopt.DebugGroupFactory-Tuple{Vector}","page":"Debug Output","title":"Manopt.DebugGroupFactory","text":"DebugGroupFactory(a::Vector)\n\nGenerate a DebugGroup of DebugActions. The following rules are used\n\nAny Symbol is passed to DebugActionFactory\nAny (Symbol, String) generates similar actions as a Symbol alone, but the string is used for format=, see DebugActionFactory\nAny String is passed to DebugActionFactory\nAny DebugAction is included as is.\n\nIf this results in more than one DebugAction, a DebugGroup of these is built.\n\nIf any integers are present, the last of these is used to wrap the group in a DebugEvery(k).\n\nIf :WhenActive is present, the resulting Action is wrapped in DebugWhenActive, making it deactivatable by its parent solver.\n\n\n\n\n\n","category":"method"},{"location":"plans/debug/#Manopt.reset!-Tuple{DebugTime}","page":"Debug Output","title":"Manopt.reset!","text":"reset!(d::DebugTime)\n\nreset the internal time of a DebugTime, that is, start from now again.\n\n\n\n\n\n","category":"method"},{"location":"plans/debug/#Manopt.set_parameter!-Tuple{DebugSolverState, Val{:Debug}, Vararg{Any}}","page":"Debug Output","title":"Manopt.set_parameter!","text":"set_parameter!(ams::DebugSolverState, ::Val{:Debug}, args...)\n\nSet certain values specified by args... 
into the elements of the debugDictionary\n\n\n\n\n\n","category":"method"},{"location":"plans/debug/#Manopt.stop!-Tuple{DebugTime}","page":"Debug Output","title":"Manopt.stop!","text":"stop!(d::DebugTime)\n\nstop and reset the internal time of a DebugTime, that is, set the time to 0 (undefined)\n\n\n\n\n\n","category":"method"},{"location":"plans/debug/#Technical-details","page":"Debug Output","title":"Technical details","text":"","category":"section"},{"location":"plans/debug/","page":"Debug Output","title":"Debug Output","text":"The decorator to print debug during the iterations can be activated by decorating the state of a solver and implementing your own DebugActions. For example printing a gradient from the GradientDescentState is automatically available, as explained in the gradient_descent solver.","category":"page"},{"location":"plans/debug/","page":"Debug Output","title":"Debug Output","text":"initialize_solver!(amp::AbstractManoptProblem, dss::DebugSolverState)\nstep_solver!(amp::AbstractManoptProblem, dss::DebugSolverState, k)\nstop_solver!(amp::AbstractManoptProblem, dss::DebugSolverState, k::Int)","category":"page"},{"location":"plans/debug/#Manopt.initialize_solver!-Tuple{AbstractManoptProblem, DebugSolverState}","page":"Debug Output","title":"Manopt.initialize_solver!","text":"initialize_solver!(amp::AbstractManoptProblem, dss::DebugSolverState)\n\nExtend the initialization of the solver by a hook to run the DebugAction that was added to the :Start entry of the debug lists. 
All other debug actions are called with iteration number 0 to trigger possible resets\n\n\n\n\n\n","category":"method"},{"location":"plans/debug/#Manopt.step_solver!-Tuple{AbstractManoptProblem, DebugSolverState, Any}","page":"Debug Output","title":"Manopt.step_solver!","text":"step_solver!(amp::AbstractManoptProblem, dss::DebugSolverState, k)\n\nExtend the kth step of the solver by a hook to run the debug prints that were added to the :BeforeIteration and :Iteration entries of the debug lists.\n\n\n\n\n\n","category":"method"},{"location":"plans/debug/#Manopt.stop_solver!-Tuple{AbstractManoptProblem, DebugSolverState, Int64}","page":"Debug Output","title":"Manopt.stop_solver!","text":"stop_solver!(amp::AbstractManoptProblem, dss::DebugSolverState, k)\n\nExtend stop_solver!, which decides whether to stop the solver, by a hook to run the debug actions that were added to the :Stop entry of the debug lists.\n\n\n\n\n\n","category":"method"},{"location":"plans/stepsize/#Stepsize","page":"Stepsize","title":"Stepsize and line search","text":"","category":"section"},{"location":"plans/stepsize/","page":"Stepsize","title":"Stepsize","text":"CurrentModule = Manopt","category":"page"},{"location":"plans/stepsize/","page":"Stepsize","title":"Stepsize","text":"Most iterative algorithms determine a direction along which the algorithm shall proceed and determine a step size to find the next iterate. 
How advanced the step size computation can be implemented depends (among other things) on the properties the corresponding problem provides.","category":"page"},{"location":"plans/stepsize/","page":"Stepsize","title":"Stepsize","text":"Within Manopt.jl, the step size determination is implemented as a functor which is a subtype of Stepsize based on","category":"page"},{"location":"plans/stepsize/","page":"Stepsize","title":"Stepsize","text":"Stepsize","category":"page"},{"location":"plans/stepsize/#Manopt.Stepsize","page":"Stepsize","title":"Manopt.Stepsize","text":"Stepsize\n\nAn abstract type for the functors representing step sizes. These are callable structures. The naming scheme is TypeOfStepSize, for example ConstantStepsize.\n\nEvery Stepsize has to provide a constructor and its function has to have the interface (p,o,i), where an AbstractManoptProblem, an AbstractManoptSolverState, and the current number of iterations are the arguments, and it returns a number, namely the step size to use.\n\nFor most cases it is advisable to employ a ManifoldDefaultsFactory. 
Then the function creating the factory should either be called TypeOf or, if that is confusing or too generic, TypeOfLength\n\nSee also\n\nLinesearch\n\n\n\n\n\n","category":"type"},{"location":"plans/stepsize/","page":"Stepsize","title":"Stepsize","text":"Usually, a constructor should take the manifold M as its first argument, for consistency, to allow general step size functors to be set up based on default values that might depend on the manifold currently under consideration.","category":"page"},{"location":"plans/stepsize/","page":"Stepsize","title":"Stepsize","text":"Currently, the following step sizes are available","category":"page"},{"location":"plans/stepsize/","page":"Stepsize","title":"Stepsize","text":"AdaptiveWNGradient\nArmijoLinesearch\nConstantLength\nDecreasingLength\nNonmonotoneLinesearch\nPolyak\nWolfePowellLinesearch\nWolfePowellBinaryLinesearch","category":"page"},{"location":"plans/stepsize/#Manopt.AdaptiveWNGradient","page":"Stepsize","title":"Manopt.AdaptiveWNGradient","text":"AdaptiveWNGradient(; kwargs...)\nAdaptiveWNGradient(M::AbstractManifold; kwargs...)\n\nA stepsize based on the adaptive gradient method introduced by [GS23].\n\nGiven a positive threshold hatc ℕ, a minimal bound b_textmin 0, an initial b_0 b_textmin, and a gradient reduction factor threshold α 01).\n\nSet c_0=0 and use ω_0 = lVert operatornamegrad f(p_0) rVert_p_0.\n\nFor the first iterate use the initial step size s_0 = frac1b_0.\n\nThen, given the last gradient X_k-1 = operatornamegrad f(x_k-1), and a previous ω_k-1, the values (b_k ω_k c_k) are computed using X_k = operatornamegrad f(p_k) and the following cases\n\nIf lVert X_k rVert_p_k αω_k-1, then let hatb_k-1 b_textminb_k-1 and set\n\n(b_k ω_k c_k) = begincases\n bigl(hatb_k-1 lVert X_k rVert_p_k 0 bigr) text if c_k-1+1 = hatc\n bigl( b_k-1 + fraclVert X_k rVert_p_k^2b_k-1 ω_k-1 c_k-1+1 Bigr) text if c_k-1+1hatc\nendcases\n\nIf lVert X_k rVert_p_k αω_k-1, then set\n\n(b_k ω_k c_k) = Bigl( b_k-1 + fraclVert X_k 
rVert_p_k^2b_k-1 ω_k-1 0 Bigr)\n\nand return the step size s_k = frac1b_k.\n\nNote that for α=0 this is the Riemannian variant of WNGRad.\n\nKeyword arguments\n\nadaptive=true: use the gradient reduction factor α if true, or 0 if false.\nalternate_bound = (bk, hat_c) -> min(gradient_bound == 0 ? 1.0 : gradient_bound, max(minimal_bound, bk / (3 * hat_c)): how to determine hat_bk as a function of (bmin, bk, hat_c) -> hat_bk\ncount_threshold=4: an Integer for hatc\ngradient_reduction::R=adaptive ? 0.9 : 0.0: the gradient reduction factor threshold α 01)\ngradient_bound=norm(M, p, X): the bound b_k.\nminimal_bound=1e-4: the value b_textmin\np=rand(M): a point on the manifold mathcal M, only used to define the gradient_bound\nX=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal M, only used to define the gradient_bound\n\n\n\n\n\n","category":"function"},{"location":"plans/stepsize/#Manopt.ArmijoLinesearch","page":"Stepsize","title":"Manopt.ArmijoLinesearch","text":"ArmijoLinesearch(; kwargs...)\nArmijoLinesearch(M::AbstractManifold; kwargs...)\n\nSpecify a step size that performs an Armijo line search. Given a Function fmathcal Mℝ and its Riemannian Gradient operatornamegradf mathcal MTmathcal M, the current point pmathcal M and a search direction XT_pmathcal M.\n\nThen the step size s is found by reducing the initial step size s until\n\nf(operatornameretr_p(sX)) f(p) - τs X operatornamegradf(p) _p\n\nis fulfilled for a sufficient decrease value τ (01).\n\nTo be a bit more optimistic, if s already fulfils this, a first search is done, increasing the given s until this condition fails for the first time.\n\nOverall, this looks for a step size that provides enough decrease; see [Bou23, p. 
58] for more information.\n\nKeyword arguments\n\nadditional_decrease_condition=(M, p) -> true: specify an additional criterion that has to be met to accept a step size in the decreasing loop\nadditional_increase_condition::IF=(M, p) -> true: specify an additional criterion that has to be met to accept a step size in the (initial) increase loop\ncandidate_point=allocate_result(M, rand): specify a point to be used as memory for the candidate points.\ncontraction_factor=0.95: how to update s in the decrease step\ninitial_stepsize=1.0: specify an initial step size\ninitial_guess=armijo_initial_guess: Compute the initial step size of a line search based on this function. The function required is (p,s,k,l) -> α and computes the initial step size α based on a AbstractManoptProblem p, AbstractManoptSolverState s, the current iteration k and a last step size l.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstop_when_stepsize_less=0.0: a safeguard, stop when the decreasing step is below this (nonnegative) bound.\nstop_when_stepsize_exceeds=max_stepsize(M): a safeguard to avoid choosing an overly long step size when initially increasing\nstop_increasing_at_step=100: stop the initial increasing loop after this amount of steps. Set to 0 to never increase in the beginning\nstop_decreasing_at_step=1000: maximal number of Armijo decreases / tests to perform\nsufficient_decrease=0.1: the sufficient decrease parameter τ\n\nFor the stop safe guards you can pass :Messages to a debug= to see @info messages when these happen.\n\ninfo: Info\nThis function generates a ManifoldDefaultsFactory for ArmijoLinesearchStepsize. 
For default values that depend on the manifold, this factory postpones the construction until the manifold from for example a corresponding AbstractManoptSolverState is available.\n\n\n\n\n\n","category":"function"},{"location":"plans/stepsize/#Manopt.ConstantLength","page":"Stepsize","title":"Manopt.ConstantLength","text":"ConstantLength(s; kwargs...)\nConstantLength(M::AbstractManifold, s; kwargs...)\n\nSpecify a Stepsize that is constant.\n\nInput\n\nM (optional)\n\ns=min( injectivity_radius(M)/2, 1.0) : the length to use.\n\nKeyword argument\n\ntype::Symbol=relative specify the type of constant step size.\n:relative – scale the gradient tangent vector X to s*X\n:absolute – scale the gradient to an absolute step length s, that is fracslVert X rVert_X\n\ninfo: Info\nThis function generates a ManifoldDefaultsFactory for ConstantStepsize. For default values that depend on the manifold, this factory postpones the construction until the manifold from for example a corresponding AbstractManoptSolverState is available.\n\n\n\n\n\n","category":"function"},{"location":"plans/stepsize/#Manopt.DecreasingLength","page":"Stepsize","title":"Manopt.DecreasingLength","text":"DecreasingLength(; kwargs...)\nDecreasingLength(M::AbstractManifold; kwargs...)\n\nSpecify a Stepsize that is decreasing as s_k = frac(l - ak)f^k(k+s)^e with the following\n\nKeyword arguments\n\nexponent=1.0: the exponent e in the denominator\nfactor=1.0: the factor f in the numerator\nlength=min(injectivity_radius(M)/2, 1.0): the initial step size l.\nsubtrahend=0.0: a value a that is subtracted every iteration\nshift=0.0: shift the denominator iterator k by s.\ntype::Symbol=relative specify the type of constant step size.\n:relative – scale the gradient tangent vector X to s_k*X\n:absolute – scale the gradient to an absolute step length s_k, that is fracs_klVert X rVert_X\n\ninfo: Info\nThis function generates a ManifoldDefaultsFactory for DecreasingStepsize. 
For default values that depend on the manifold, this factory postpones the construction until the manifold from for example a corresponding AbstractManoptSolverState is available.\n\n\n\n\n\n","category":"function"},{"location":"plans/stepsize/#Manopt.NonmonotoneLinesearch","page":"Stepsize","title":"Manopt.NonmonotoneLinesearch","text":"NonmonotoneLinesearch(; kwargs...)\nNonmonotoneLinesearch(M::AbstractManifold; kwargs...)\n\nA functor representing a nonmonotone line search using the Barzilai-Borwein step size [IP17].\n\nThis method first computes\n\ny_k = operatornamegradf(p_k) - mathcal T_p_kp_k-1operatornamegradf(p_k-1)\n\nand\n\ns_k = - α_k-1 mathcal T_p_kp_k-1operatornamegradf(p_k-1)\n\nwhere α_k-1 is the step size computed in the last iteration and mathcal T_ is a vector transport. Then the Barzilai-Borwein step size is\n\nα_k^textBB = begincases\n min(α_textmax max(α_textmin τ_k)) textif s_k y_k_p_k 0\n α_textmax textelse\nendcases\n\nwhere\n\nτ_k = fracs_k s_k_p_ks_k y_k_p_k\n\nif the direct strategy is chosen, or\n\nτ_k = fracs_k y_k_p_ky_k y_k_p_k\n\nin case of the inverse strategy, or an alternation between the two in case of the alternating strategy. Then find the smallest h = 0 1 2 such that\n\nf(operatornameretr_p_k(- σ^h α_k^textBB operatornamegradf(p_k))) \nmax_1 j max(k+1m) f(p_k+1-j) - γ σ^h α_k^textBB operatornamegradF(p_k) operatornamegradF(p_k)_p_k\n\nwhere σ (01) is a step length reduction factor, m is the number of iterations after which the function value has to be lower than the current one and γ (01) is the sufficient decrease parameter. 
Finally the step size is computed as\n\nα_k = σ^h α_k^textBB\n\nKeyword arguments\n\np=allocate_result(M, rand): a point on the manifold mathcal M to store an interim result\ninitial_stepsize=1.0: the step size to start the search with\nmemory_size=10: number of iterations after which the cost value needs to be lower than the current one\nbb_min_stepsize=1e-3: lower bound for the Barzilai-Borwein step size greater than zero\nbb_max_stepsize=1e3: upper bound for the Barzilai-Borwein step size greater than bb_min_stepsize\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstrategy=direct: defines if the new step size is computed using the :direct, :indirect or :alternating strategy\nstorage=StoreStateAction(M; store_fields=[:Iterate, :Gradient]): increase efficiency by using a StoreStateAction for :Iterate and :Gradient.\nstepsize_reduction=0.5: step size reduction factor contained in the interval (01)\nsufficient_decrease=1e-4: sufficient decrease parameter contained in the interval (01)\nstop_when_stepsize_less=0.0: smallest stepsize when to stop (the last one before is taken)\nstop_when_stepsize_exceeds=max_stepsize(M, p)): largest stepsize when to stop to avoid leaving the injectivity radius\nstop_increasing_at_step=100: last step to increase the stepsize (phase 1),\nstop_decreasing_at_step=1000: last step size to decrease the stepsize (phase 2),\n\n\n\n\n\n","category":"function"},{"location":"plans/stepsize/#Manopt.Polyak","page":"Stepsize","title":"Manopt.Polyak","text":"Polyak(; kwargs...)\nPolyak(M::AbstractManifold; kwargs...)\n\nCompute a step size according to a method proposed by Polyak, cf. the Dynamic step size discussed in Section 3.2 of [Ber15]. 
This has been generalised here both to the Riemannian case and to approximating the minimum cost value.\n\nLet f_textbest be the best cost value seen until now during some iterative optimisation algorithm and let γ_k be a sequence of numbers that is square summable, but not summable.\n\nThen the step size computed here reads\n\ns_k = fracf(p^(k)) - f_textbest + γ_klVert ∂f(p^(k)) rVert_\n\nwhere ∂f denotes a nonzero subgradient of f at the current iterate p^(k).\n\nConstructor\n\nPolyak(; γ = k -> 1/k, initial_cost_estimate=0.0)\n\ninitialize the Polyak stepsize to a certain sequence and an initial estimate of f_textbest.\n\ninfo: Info\nThis function generates a ManifoldDefaultsFactory for PolyakStepsize. For default values that depend on the manifold, this factory postpones the construction until the manifold from for example a corresponding AbstractManoptSolverState is available.\n\n\n\n\n\n","category":"function"},{"location":"plans/stepsize/#Manopt.WolfePowellLinesearch","page":"Stepsize","title":"Manopt.WolfePowellLinesearch","text":"WolfePowellLinesearch(; kwargs...)\nWolfePowellLinesearch(M::AbstractManifold; kwargs...)\n\nPerform a linesearch to fulfill both the Armijo-Goldstein conditions\n\nfbigl( operatornameretr_p(αX) bigr) f(p) + c_1 α_k operatornamegrad f(p) X_p\n\nas well as the Wolfe conditions\n\nfracmathrmdmathrmdt fbigl(operatornameretr_p(tX)bigr)\nBigvert_t=α\n c_2 fracmathrmdmathrmdt fbigl(operatornameretr_p(tX)bigr)Bigvert_t=0\n\nfor some given sufficient decrease coefficient c_1 and some sufficient curvature condition coefficient c_2.\n\nThis is adopted from [NW06, Section 3.1]\n\nKeyword arguments\n\nsufficient_decrease=10^(-4)\nsufficient_curvature=0.999\np::P: a point on the manifold mathcal M as temporary storage for candidates\nX::T: a tangent vector at the point p on the manifold mathcal M as type of memory allocated for the candidates direction and tangent\nmax_stepsize=max_stepsize(M, p): largest stepsize allowed 
here.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstop_when_stepsize_less=0.0: smallest stepsize when to stop (the last one before is taken)\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\n\n\n\n\n","category":"function"},{"location":"plans/stepsize/#Manopt.WolfePowellBinaryLinesearch","page":"Stepsize","title":"Manopt.WolfePowellBinaryLinesearch","text":"WolfePowellBinaryLinesearch(; kwargs...)\nWolfePowellBinaryLinesearch(M::AbstractManifold; kwargs...)\n\nPerform a linesearch to fulfill both the Armijo-Goldstein conditions for some given sufficient decrease coefficient c_1 and some sufficient curvature condition coefficient c_2. Compared to WolfePowellLinesearch which tries a simpler method, this linesearch performs the following algorithm\n\nWith\n\nA(t) = f(p_+) c_1 t operatornamegradf(p) X_p\nquadtext and quad\nW(t) = operatornamegradf(p_+) mathcal T_p_+pX_p_+ c_2 X operatornamegradf(p)_p\n\nwhere p_+ = operatornameretr_p(tX) is the current trial point, and mathcal T_ denotes a vector transport. 
Then the following algorithm, similar to Algorithm 7 of [Hua14], is performed\n\nset α=0, β=∞ and t=1.\nWhile either A(t) does not hold or W(t) does not hold do steps 3-5.\nIf A(t) fails, set β=t.\nIf A(t) holds but W(t) fails, set α=t.\nIf β < ∞ set t=fracα+β2, otherwise set t=2α.\n\nKeyword arguments\n\nsufficient_decrease=10^(-4)\nsufficient_curvature=0.999\nmax_stepsize=max_stepsize(M, p): largest stepsize allowed here.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstop_when_stepsize_less=0.0: smallest stepsize when to stop (the last one before is taken)\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\n\n\n\n\n","category":"function"},{"location":"plans/stepsize/","page":"Stepsize","title":"Stepsize","text":"Some step sizes use the max_stepsize function as a rough upper estimate for the trust region size. It is by default equal to the injectivity radius of the exponential map, but in some cases a different value is used. For the FixedRankMatrices manifold an estimate from Manopt is used. The tangent bundle with the Sasaki metric has zero injectivity radius, so the maximum stepsize of the underlying manifold is used instead. Hyperrectangle also has zero injectivity radius, and an estimate based on the maximum of the dimensions along each index is used instead. For manifolds with corners, however, a line search capable of handling break points along the projected search direction should be used, and such algorithms do not call max_stepsize.","category":"page"},{"location":"plans/stepsize/","page":"Stepsize","title":"Stepsize","text":"Internally these step size functions create a ManifoldDefaultsFactory. 
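The binary bracketing performed by WolfePowellBinaryLinesearch admits a simple one-dimensional, Euclidean sketch (hypothetical helper, not the Manopt.jl implementation; f and df are the cost and its derivative along the search direction):

```julia
# Hypothetical 1D sketch of the binary Wolfe-Powell bracketing (cf. Algorithm 7, [Hua14]).
# f(t) is the cost along the ray, df(t) its derivative; df(0) is the initial slope.
function wolfe_binary(f, df; c1=1e-4, c2=0.999, maxiter=100)
    slope = df(0.0)
    α, β, t = 0.0, Inf, 1.0
    for _ in 1:maxiter
        A = f(t) <= f(0.0) + c1 * t * slope   # Armijo-Goldstein condition A(t)
        W = df(t) >= c2 * slope               # curvature condition W(t)
        A && W && return t                    # both hold: accept the step
        A ? (α = t) : (β = t)                 # A fails: shrink; W fails: grow
        t = isfinite(β) ? (α + β) / 2 : 2α    # bisect the bracket, or double
    end
    return t
end
```

On a manifold, the evaluations of f and df along the ray additionally involve the retraction and a vector transport, as in the conditions A(t) and W(t) above.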
Internally, they use","category":"page"},{"location":"plans/stepsize/","page":"Stepsize","title":"Stepsize","text":"Modules = [Manopt]\nPages = [\"plans/stepsize.jl\"]\nPrivate = true\nOrder = [:function, :type]\nFilter = t -> !(t in [Stepsize, AdaptiveWNGradient, ArmijoLinesearch, ConstantLength, DecreasingLength, NonmonotoneLinesearch, Polyak, WolfePowellLinesearch, WolfePowellBinaryLinesearch ])","category":"page"},{"location":"plans/stepsize/#Manopt.armijo_initial_guess-Tuple{AbstractManoptProblem, AbstractManoptSolverState, Int64, Real}","page":"Stepsize","title":"Manopt.armijo_initial_guess","text":"armijo_initial_guess(mp::AbstractManoptProblem, s::AbstractManoptSolverState, k, l)\n\nInput\n\nmp: the AbstractManoptProblem we are aiming to minimize\ns: the AbstractManoptSolverState for the current solver\nk: the current iteration\nl: the last step size computed in the previous iteration.\n\nReturn an initial guess for the ArmijoLinesearchStepsize.\n\nThe default provided is based on max_stepsize(M), which we denote by m. Let further X be the current descent direction and n=lVert X rVert_p its norm. 
Then this (default) initial guess returns\n\nl if m is not finite\nmin(l fracmn) otherwise\n\nThis ensures that the initial guess does not yield too large (initial) steps.\n\n\n\n\n\n","category":"method"},{"location":"plans/stepsize/#Manopt.default_stepsize-Tuple{AbstractManifold, Type{<:AbstractManoptSolverState}}","page":"Stepsize","title":"Manopt.default_stepsize","text":"default_stepsize(M::AbstractManifold, ams::AbstractManoptSolverState)\n\nReturns the default Stepsize functor used when running the solver specified by the AbstractManoptSolverState ams with an objective on the AbstractManifold M.\n\n\n\n\n\n","category":"method"},{"location":"plans/stepsize/#Manopt.get_last_stepsize-Tuple{AbstractManoptProblem, AbstractManoptSolverState, Vararg{Any}}","page":"Stepsize","title":"Manopt.get_last_stepsize","text":"get_last_stepsize(amp::AbstractManoptProblem, ams::AbstractManoptSolverState, vars...)\n\nreturn the last computed stepsize stored within AbstractManoptSolverState ams when solving the AbstractManoptProblem amp.\n\nThis method takes into account that ams might be decorated. In case this returns NaN, a concrete call to the stored stepsize is performed. For this, usually, the first of the vars... should be the current iterate.\n\n\n\n\n\n","category":"method"},{"location":"plans/stepsize/#Manopt.get_last_stepsize-Tuple{Stepsize, Vararg{Any}}","page":"Stepsize","title":"Manopt.get_last_stepsize","text":"get_last_stepsize(::Stepsize, vars...)\n\nreturn the last computed stepsize from within the stepsize. If no last step size is stored, this returns NaN.\n\n\n\n\n\n","category":"method"},{"location":"plans/stepsize/#Manopt.get_stepsize-Tuple{AbstractManoptProblem, AbstractManoptSolverState, Vararg{Any}}","page":"Stepsize","title":"Manopt.get_stepsize","text":"get_stepsize(amp::AbstractManoptProblem, ams::AbstractManoptSolverState, vars...)\n\nreturn the stepsize stored within AbstractManoptSolverState ams when solving the AbstractManoptProblem amp. 
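The armijo_initial_guess capping rule described above can be sketched as a tiny standalone Julia function (hypothetical name, not part of Manopt.jl):

```julia
# Hypothetical sketch of the armijo_initial_guess capping rule:
# l is the last step size, m the maximal step size, n the norm of the descent direction.
armijo_guess(l, m, n) = isfinite(m) ? min(l, m / n) : l
```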
This method also works for decorated options and the Stepsize function within the options, by default stored in ams.stepsize.\n\n\n\n\n\n","category":"method"},{"location":"plans/stepsize/#Manopt.linesearch_backtrack!-Union{Tuple{T}, Tuple{TF}, Tuple{AbstractManifold, Any, TF, Any, T, Any, Any, Any}, Tuple{AbstractManifold, Any, TF, Any, T, Any, Any, Any, T}, Tuple{AbstractManifold, Any, TF, Any, T, Any, Any, Any, T, Any}} where {TF, T}","page":"Stepsize","title":"Manopt.linesearch_backtrack!","text":"(s, msg) = linesearch_backtrack!(M, q, F, p, X, s, decrease, contract, η = -X, f0 = f(p))\n\nPerform a line search backtrack in-place of q. For all details and options, see linesearch_backtrack\n\n\n\n\n\n","category":"method"},{"location":"plans/stepsize/#Manopt.linesearch_backtrack-Union{Tuple{T}, Tuple{AbstractManifold, Any, Any, T, Any, Any, Any}, Tuple{AbstractManifold, Any, Any, T, Any, Any, Any, T}, Tuple{AbstractManifold, Any, Any, T, Any, Any, Any, T, Any}} where T","page":"Stepsize","title":"Manopt.linesearch_backtrack","text":"(s, msg) = linesearch_backtrack(M, F, p, X, s, decrease, contract, η = -X, f0 = f(p); kwargs...)\n(s, msg) = linesearch_backtrack!(M, q, F, p, X, s, decrease, contract, η = -X, f0 = f(p); kwargs...)\n\nperform a line search\n\non the manifold M\nfor the cost function f,\nat the current point p\nwith current gradient provided in X\nan initial stepsize s\na sufficient decrease\na contraction factor σ\na search direction η = -X\nan offset, f_0 = F(x)\n\nKeyword arguments\n\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstop_when_stepsize_less=0.0: to avoid numerical underflow\nstop_when_stepsize_exceeds=max_stepsize(M, p) / norm(M, p, η): to avoid leaving the injectivity radius on a manifold\nstop_increasing_at_step=100: stop the initial increase of step size after these many steps\nstop_decreasing_at_step=1000: stop the decreasing search after these many 
steps\nadditional_increase_condition=(M,p) -> true: impose an additional condition for an increased step size to be accepted\nadditional_decrease_condition=(M,p) -> true: impose an additional condition for a decreased step size to be accepted\n\nThese keywords are used as safeguards, where only the max stepsize is a very manifold-specific one.\n\nReturn value\n\nA stepsize s and a message msg (in case any of the four criteria hit)\n\n\n\n\n\n","category":"method"},{"location":"plans/stepsize/#Manopt.max_stepsize-Tuple{AbstractManifold, Any}","page":"Stepsize","title":"Manopt.max_stepsize","text":"max_stepsize(M::AbstractManifold, p)\nmax_stepsize(M::AbstractManifold)\n\nGet the maximum stepsize (at point p) on manifold M. It should be used to limit the distance an algorithm is trying to move in a single step.\n\nBy default, this returns injectivity_radius(M), if this exists. If this is not available, the method returns Inf.\n\n\n\n\n\n","category":"method"},{"location":"plans/stepsize/#Manopt.AdaptiveWNGradientStepsize","page":"Stepsize","title":"Manopt.AdaptiveWNGradientStepsize","text":"AdaptiveWNGradientStepsize{I<:Integer,R<:Real,F<:Function} <: Stepsize\n\nA functor (problem, state, k, X) -> s for an adaptive gradient method introduced by [GS23]. See AdaptiveWNGradient for the mathematical details.\n\nFields\n\ncount_threshold::I: an Integer for hatc\nminimal_bound::R: the value for b_textmin\nalternate_bound::F: how to determine hat b_k as a function of (bmin, bk, hat_c) -> hat_bk\ngradient_reduction::R: the gradient reduction factor threshold α 01)\ngradient_bound::R: the bound b_k.\nweight::R: ω_k initialised to ω_0 = norm(M, p, X) if this is not zero, 1.0 otherwise.\ncount::I: c_k, initialised to c_0 = 0.\n\nConstructor\n\nAdaptiveWNGrad(M::AbstractManifold; kwargs...)\n\nKeyword arguments\n\nadaptive=true: switches the gradient_reduction to α (if true) and to 0 otherwise.\nalternate_bound = (bk, hat_c) -> min(gradient_bound == 0 ? 
1.0 : gradient_bound, max(minimal_bound, bk / (3 * hat_c)))\ncount_threshold=4\ngradient_reduction::R=adaptive ? 0.9 : 0.0\ngradient_bound=norm(M, p, X)\nminimal_bound=1e-4\np=rand(M): a point on the manifold mathcal M only used to define the gradient_bound\nX=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal M only used to define the gradient_bound\n\n\n\n\n\n","category":"type"},{"location":"plans/stepsize/#Manopt.ArmijoLinesearchStepsize","page":"Stepsize","title":"Manopt.ArmijoLinesearchStepsize","text":"ArmijoLinesearchStepsize <: Linesearch\n\nA functor (problem, state, k, X) -> s to provide an Armijo line search to compute a step size, based on the search direction X.\n\nFields\n\ncandidate_point: to store an interim result\ninitial_stepsize: an initial step size\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\ncontraction_factor: exponent for line search reduction\nsufficient_decrease: gain within Armijo's rule\nlast_stepsize: the last step size to start the search with\ninitial_guess: a function to provide an initial guess for the step size, it maps (p,s,k,l) -> α based on an AbstractManoptProblem p, an AbstractManoptSolverState s, the current iteration k and the last step size l. It returns the initial guess α.\nadditional_decrease_condition: specify a condition a new point has to additionally fulfill. The default accepts all points.\nadditional_increase_condition: specify a condition that additionally to checking a valid increase has to be fulfilled. 
The default accepts all points.\nstop_when_stepsize_less: smallest stepsize when to stop (the last one before is taken)\nstop_when_stepsize_exceeds: largest stepsize when to stop.\nstop_increasing_at_step: last step to increase the stepsize (phase 1),\nstop_decreasing_at_step: last step size to decrease the stepsize (phase 2),\n\nPass :Messages to a debug= to see @infos when these happen.\n\nConstructor\n\nArmijoLinesearchStepsize(M::AbstractManifold; kwarg...)\n\nwith the fields as keyword arguments and the retraction is set to the default retraction on M.\n\nKeyword arguments\n\ncandidate_point=(allocate_result(M, rand))\ninitial_stepsize=1.0\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\ncontraction_factor=0.95\nsufficient_decrease=0.1\nlast_stepsize=initial_stepsize\ninitial_guess=armijo_initial_guess: (p,s,i,l) -> l\nstop_when_stepsize_less=0.0: stop when the stepsize decreased below this value.\nstop_when_stepsize_exceeds=max_stepsize(M): provide an absolute maximal step size.\nstop_increasing_at_step=100: for the initial increase test, stop after these many steps\nstop_decreasing_at_step=1000: in the backtrack, stop after these many steps\n\n\n\n\n\n","category":"type"},{"location":"plans/stepsize/#Manopt.ConstantStepsize","page":"Stepsize","title":"Manopt.ConstantStepsize","text":"ConstantStepsize <: Stepsize\n\nA functor (problem, state, ...) 
-> s to provide a constant step size s.\n\nFields\n\nlength: constant value for the step size\ntype: a symbol that indicates whether the stepsize is relatively (:relative), with respect to the gradient norm, or absolutely (:absolute) constant.\n\nConstructors\n\nConstantStepsize(s::Real, t::Symbol=:relative)\n\ninitialize the stepsize to a constant s of type t.\n\nConstantStepsize(\n M::AbstractManifold=DefaultManifold(),\n s=min(1.0, injectivity_radius(M)/2);\n type::Symbol=:relative\n)\n\n\n\n\n\n","category":"type"},{"location":"plans/stepsize/#Manopt.DecreasingStepsize","page":"Stepsize","title":"Manopt.DecreasingStepsize","text":"DecreasingStepsize()\n\nA functor (problem, state, ...) -> s to provide a decreasing step size s.\n\nFields\n\nexponent: a value e to which the denominator iteration number is raised\nfactor: a value f to multiply the initial step size with every iteration\nlength: the initial step size l.\nsubtrahend: a value a that is subtracted every iteration\nshift: shift the denominator iterator i by s.\ntype: a symbol that indicates whether the stepsize is relatively (:relative), with respect to the gradient norm, or absolutely (:absolute) constant.\n\nIn total the complete formula for the ith iterate reads\n\ns_i = frac(l - i a)f^i(i+s)^e\n\nand hence the defaults simplify to just s_i = fracli\n\nConstructor\n\nDecreasingStepsize(M::AbstractManifold;\n length=min(injectivity_radius(M)/2, 1.0),\n factor=1.0,\n subtrahend=0.0,\n exponent=1.0,\n shift=0.0,\n type=:relative,\n)\n\ninitializes all fields, where none of them is mandatory, and the length is set to half the injectivity radius, or to 1 if the injectivity radius is infinite.\n\n\n\n\n\n","category":"type"},{"location":"plans/stepsize/#Manopt.Linesearch","page":"Stepsize","title":"Manopt.Linesearch","text":"Linesearch <: Stepsize\n\nAn abstract functor to represent line search type step size determinations, see Stepsize for details. 
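The DecreasingStepsize formula above can be evaluated directly; a minimal sketch with the documented defaults (hypothetical standalone function, not the Manopt.jl type):

```julia
# Hypothetical sketch of s_i = (l - i*a) / (f^i * (i + shift)^e)
# with the documented defaults, which reduce to s_i = l / i.
function decreasing_stepsize(i; length=1.0, factor=1.0, subtrahend=0.0, exponent=1.0, shift=0.0)
    return (length - i * subtrahend) / (factor^i * (i + shift)^exponent)
end
```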
One example is the ArmijoLinesearchStepsize functor.\n\nCompared to simple step sizes, the line search functors provide an interface of the form (p,o,i,X) -> s with an additional (but optional) fourth parameter to provide a search direction; this should default to something reasonable, most prominently the negative gradient.\n\n\n\n\n\n","category":"type"},{"location":"plans/stepsize/#Manopt.NonmonotoneLinesearchStepsize","page":"Stepsize","title":"Manopt.NonmonotoneLinesearchStepsize","text":"NonmonotoneLinesearchStepsize{P,T,R<:Real} <: Linesearch\n\nA functor representing a nonmonotone line search using the Barzilai-Borwein step size [IP17].\n\nFields\n\ninitial_stepsize=1.0: the step size to start the search with\nmemory_size=10: number of iterations after which the cost value needs to be lower than the current one\nbb_min_stepsize=1e-3: lower bound for the Barzilai-Borwein step size greater than zero\nbb_max_stepsize=1e3: upper bound for the Barzilai-Borwein step size greater than bb_min_stepsize\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstrategy=:direct: defines if the new step size is computed using the :direct, :indirect or :alternating strategy\nstorage: (for :Iterate and :Gradient) a StoreStateAction\nstepsize_reduction: step size reduction factor contained in the interval (0,1)\nsufficient_decrease: sufficient decrease parameter contained in the interval (0,1)\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\ncandidate_point: to store an interim result\nstop_when_stepsize_less: smallest stepsize when to stop (the last one before is taken)\nstop_when_stepsize_exceeds: largest stepsize when to stop.\nstop_increasing_at_step: last step to increase the stepsize (phase 1),\nstop_decreasing_at_step: last step size to decrease the stepsize (phase 
2),\n\nConstructor\n\nNonmonotoneLinesearchStepsize(M::AbstractManifold; kwargs...)\n\nKeyword arguments\n\np=allocate_result(M, rand): to store an interim result\ninitial_stepsize=1.0\nmemory_size=10\nbb_min_stepsize=1e-3\nbb_max_stepsize=1e3\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstrategy=:direct\nstorage=StoreStateAction(M; store_fields=[:Iterate, :Gradient])\nstepsize_reduction=0.5\nsufficient_decrease=1e-4\nstop_when_stepsize_less=0.0\nstop_when_stepsize_exceeds=max_stepsize(M, p)\nstop_increasing_at_step=100\nstop_decreasing_at_step=1000\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\n\n\n\n\n","category":"type"},{"location":"plans/stepsize/#Manopt.PolyakStepsize","page":"Stepsize","title":"Manopt.PolyakStepsize","text":"PolyakStepsize <: Stepsize\n\nA functor (problem, state, ...) -> s to provide a step size due to Polyak, cf. Section 3.2 of [Ber15].\n\nFields\n\nγ: a function k -> ... representing a sequence.\nbest_cost_value: storing the best cost value\n\nConstructor\n\nPolyakStepsize(;\n γ = i -> 1/i,\n initial_cost_estimate=0.0\n)\n\nConstruct a stepsize of Polyak type.\n\nSee also\n\nPolyak\n\n\n\n\n\n","category":"type"},{"location":"plans/stepsize/#Manopt.WolfePowellBinaryLinesearchStepsize","page":"Stepsize","title":"Manopt.WolfePowellBinaryLinesearchStepsize","text":"WolfePowellBinaryLinesearchStepsize{R} <: Linesearch\n\nDo a backtracking line search to find a step size α that fulfils the Wolfe conditions along a search direction X starting from p. 
See WolfePowellBinaryLinesearch for the mathematical details.\n\nFields\n\nsufficient_decrease::R, sufficient_curvature::R two constants in the line search\nlast_stepsize::R\nmax_stepsize::R\nretraction_method::AbstractRetractionMethod: a retraction operatornameretr to use, see the section on retractions\nstop_when_stepsize_less::R: a safeguard to stop when the stepsize gets too small\nvector_transport_method::AbstractVectorTransportMethod: a vector transport mathcal T_ to use, see the section on vector transports\n\nKeyword arguments\n\nsufficient_decrease=10^(-4)\nsufficient_curvature=0.999\nmax_stepsize=max_stepsize(M, p): largest stepsize allowed here.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstop_when_stepsize_less=0.0: smallest stepsize when to stop (the last one before is taken)\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\n\n\n\n\n","category":"type"},{"location":"plans/stepsize/#Manopt.WolfePowellLinesearchStepsize","page":"Stepsize","title":"Manopt.WolfePowellLinesearchStepsize","text":"WolfePowellLinesearchStepsize{R<:Real} <: Linesearch\n\nDo a backtracking line search to find a step size α that fulfils the Wolfe conditions along a search direction X starting from p. 
See WolfePowellLinesearch for the mathematical details.\n\nFields\n\nsufficient_decrease::R, sufficient_curvature::R two constants in the line search\ncandidate_direction::T: a tangent vector at the point p on the manifold mathcal M\ncandidate_point::P: a point on the manifold mathcal M as temporary storage for candidates\ncandidate_tangent::T: a tangent vector at the point p on the manifold mathcal M\nlast_stepsize::R\nmax_stepsize::R\nretraction_method::AbstractRetractionMethod: a retraction operatornameretr to use, see the section on retractions\nstop_when_stepsize_less::R: a safeguard to stop when the stepsize gets too small\nvector_transport_method::AbstractVectorTransportMethod: a vector transport mathcal T_ to use, see the section on vector transports\n\nKeyword arguments\n\nsufficient_decrease=10^(-4)\nsufficient_curvature=0.999\np::P: a point on the manifold mathcal M as temporary storage for candidates\nX::T: a tangent vector at the point p on the manifold mathcal M as type of memory allocated for the candidates direction and tangent\nmax_stepsize=max_stepsize(M, p): largest stepsize allowed here.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstop_when_stepsize_less=0.0: smallest stepsize when to stop (the last one before is taken)\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\n\n\n\n\n","category":"type"},{"location":"plans/stepsize/","page":"Stepsize","title":"Stepsize","text":"Some solvers have a different iterate from the one used for the line search. 
Then the following state can be used to wrap these locally","category":"page"},{"location":"plans/stepsize/","page":"Stepsize","title":"Stepsize","text":"StepsizeState","category":"page"},{"location":"plans/stepsize/#Manopt.StepsizeState","page":"Stepsize","title":"Manopt.StepsizeState","text":"StepsizeState{P,T} <: AbstractManoptSolverState\n\nA state to store a point and a descent direction used within a linesearch, if these are different from the iterate and search direction of the main solver.\n\nFields\n\np::P: a point on a manifold\nX::T: a tangent vector at p.\n\nConstructor\n\nStepsizeState(p,X)\nStepsizeState(M::AbstractManifold; p=rand(M), X=zero_vector(M,p))\n\nSee also\n\ninterior_point_Newton\n\n\n\n\n\n","category":"type"},{"location":"plans/stepsize/#Literature","page":"Stepsize","title":"Literature","text":"","category":"section"},{"location":"plans/stepsize/","page":"Stepsize","title":"Stepsize","text":"D. P. Bertsekas. Convex Optimization Algorithms (Athena Scientific, 2015); p. 576.\n\n\n\nN. Boumal. An Introduction to Optimization on Smooth Manifolds. First Edition (Cambridge University Press, 2023).\n\n\n\nG. N. Grapiglia and G. F. Stella. An Adaptive Riemannian Gradient Method Without Function Evaluations. Journal of Optimization Theory and Applications 197, 1140–1160 (2023).\n\n\n\nW. Huang. Optimization algorithms on Riemannian manifolds with applications. Ph.D. Thesis, Florida State University (2014).\n\n\n\nB. Iannazzo and M. Porcelli. The Riemannian Barzilai–Borwein method with nonmonotone line search and the matrix geometric mean computation. IMA Journal of Numerical Analysis 38, 495–517 (2017).\n\n\n\nJ. Nocedal and S. J. Wright. Numerical Optimization. 
2nd Edition (Springer, New York, 2006).\n\n\n\n","category":"page"},{"location":"#Welcome-to-Manopt.jl","page":"Home","title":"Welcome to Manopt.jl","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"CurrentModule = Manopt","category":"page"},{"location":"","page":"Home","title":"Home","text":"Manopt.Manopt","category":"page"},{"location":"#Manopt.Manopt","page":"Home","title":"Manopt.Manopt","text":"🏔️ Manopt.jl: optimization on Manifolds in Julia.\n\n📚 Documentation: manoptjl.org\n📦 Repository: github.com/JuliaManifolds/Manopt.jl\n💬 Discussions: github.com/JuliaManifolds/Manopt.jl/discussions\n🎯 Issues: github.com/JuliaManifolds/Manopt.jl/issues\n\n\n\n\n\n","category":"module"},{"location":"","page":"Home","title":"Home","text":"For a function fmathcal M ℝ defined on a Riemannian manifold mathcal M algorithms in this package aim to solve","category":"page"},{"location":"","page":"Home","title":"Home","text":"operatorname*argmin_p mathcal M f(p)","category":"page"},{"location":"","page":"Home","title":"Home","text":"or in other words: find the point p on the manifold, where f reaches its minimal function value.","category":"page"},{"location":"","page":"Home","title":"Home","text":"Manopt.jl provides a framework for optimization on manifolds as well as a Library of optimization algorithms in Julia. It belongs to the “Manopt family”, which includes Manopt (Matlab) and pymanopt (Python).","category":"page"},{"location":"","page":"Home","title":"Home","text":"If you want to delve right into Manopt.jl read the 🏔️ Get started: optimize. tutorial.","category":"page"},{"location":"","page":"Home","title":"Home","text":"Manopt.jl makes it easy to use an algorithm for your favourite manifold as well as a manifold for your favourite algorithm. 
It already provides many manifolds and algorithms, which can easily be enhanced, for example to record certain data or debug output throughout iterations.","category":"page"},{"location":"","page":"Home","title":"Home","text":"If you use Manopt.jl in your work, please cite the following","category":"page"},{"location":"","page":"Home","title":"Home","text":"@article{Bergmann2022,\n Author = {Ronny Bergmann},\n Doi = {10.21105/joss.03866},\n Journal = {Journal of Open Source Software},\n Number = {70},\n Pages = {3866},\n Publisher = {The Open Journal},\n Title = {Manopt.jl: Optimization on Manifolds in {J}ulia},\n Volume = {7},\n Year = {2022},\n}","category":"page"},{"location":"","page":"Home","title":"Home","text":"To refer to a certain version or the source code in general cite for example","category":"page"},{"location":"","page":"Home","title":"Home","text":"@software{manoptjl-zenodo-mostrecent,\n Author = {Ronny Bergmann},\n Copyright = {MIT License},\n Doi = {10.5281/zenodo.4290905},\n Publisher = {Zenodo},\n Title = {Manopt.jl},\n Year = {2024},\n}","category":"page"},{"location":"","page":"Home","title":"Home","text":"for the most recent version or a corresponding version specific DOI, see the list of all versions.","category":"page"},{"location":"","page":"Home","title":"Home","text":"If you are also using Manifolds.jl please consider citing","category":"page"},{"location":"","page":"Home","title":"Home","text":"@article{AxenBaranBergmannRzecki:2023,\n AUTHOR = {Axen, Seth D. 
and Baran, Mateusz and Bergmann, Ronny and Rzecki, Krzysztof},\n ARTICLENO = {33},\n DOI = {10.1145/3618296},\n JOURNAL = {ACM Transactions on Mathematical Software},\n MONTH = {dec},\n NUMBER = {4},\n TITLE = {Manifolds.jl: An Extensible Julia Framework for Data Analysis on Manifolds},\n VOLUME = {49},\n YEAR = {2023}\n}","category":"page"},{"location":"","page":"Home","title":"Home","text":"Note that both citations are in BibLaTeX format.","category":"page"},{"location":"#Main-features","page":"Home","title":"Main features","text":"","category":"section"},{"location":"#Optimization-algorithms-(solvers)","page":"Home","title":"Optimization algorithms (solvers)","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"For every optimization algorithm, a solver is implemented based on an AbstractManoptProblem that describes the problem to solve and its AbstractManoptSolverState that sets up the solver and stores values that are required between iterations or for the next iteration. Together they form a plan.","category":"page"},{"location":"#Manifolds","page":"Home","title":"Manifolds","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"This project is built upon ManifoldsBase.jl, a generic interface to implement manifolds. Certain functions are extended for specific manifolds from Manifolds.jl, but all other manifolds from that package can be used here, too.","category":"page"},{"location":"","page":"Home","title":"Home","text":"The notation in the documentation aims to follow the notation of these packages.","category":"page"},{"location":"#Visualization","page":"Home","title":"Visualization","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"To visualize and interpret results, Manopt.jl aims to provide both easy plot functions as well as exports. 
Furthermore, a system provides debug output during the iterations of an algorithm, as well as record capabilities, for example to record a specified tuple of values per iteration, most prominently RecordCost and RecordIterate. Take a look at the 🏔️ Get started: optimize. tutorial on how to easily activate this.","category":"page"},{"location":"#Literature","page":"Home","title":"Literature","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"If you want to get started with manifolds, one book is [Car92], and if you want to directly dive into optimization on manifolds, good references are [AMS08] and [Bou23], which are both available online for free","category":"page"},{"location":"","page":"Home","title":"Home","text":"P.-A. Absil, R. Mahony and R. Sepulchre. Optimization Algorithms on Matrix Manifolds (Princeton University Press, 2008), available online at press.princeton.edu/chapters/absil/.\n\n\n\nN. Boumal. An Introduction to Optimization on Smooth Manifolds. First Edition (Cambridge University Press, 2023).\n\n\n\nM. P. do Carmo. Riemannian Geometry. Mathematics: Theory & Applications (Birkhäuser Boston, Inc., Boston, MA, 1992); p. xiv+300.\n\n\n\n","category":"page"},{"location":"references/#Literature","page":"References","title":"Literature","text":"","category":"section"},{"location":"references/","page":"References","title":"References","text":"This is all literature mentioned / referenced in the Manopt.jl documentation. Usually you find a small reference section at the end of every documentation page that contains the corresponding references as well.","category":"page"},{"location":"references/","page":"References","title":"References","text":"P.-A. Absil, C. Baker and K. Gallivan. Trust-Region Methods on Riemannian Manifolds. Foundations of Computational Mathematics 7, 303–330 (2006).\n\n\n\nP.-A. Absil, R. Mahony and R. Sepulchre. 
Optimization Algorithms on Matrix Manifolds (Princeton University Press, 2008), available online at press.princeton.edu/chapters/absil/.\n\n\n\nS. Adachi, T. Okuno and A. Takeda. Riemannian Levenberg-Marquardt Method with Global and Local Convergence Properties. ArXiv Preprint (2022).\n\n\n\nN. Agarwal, N. Boumal, B. Bullins and C. Cartis. Adaptive regularization with cubics on manifolds. Mathematical Programming (2020).\n\n\n\nY. T. Almeida, J. X. Cruz Neto, P. R. Oliveira and J. C. Oliveira Souza. A modified proximal point method for DC functions on Hadamard manifolds. Computational Optimization and Applications 76, 649–673 (2020).\n\n\n\nM. Bačák. Computing medians and means in Hadamard spaces. SIAM Journal on Optimization 24, 1542–1566 (2014), arXiv:1210.2145.\n\n\n\nE. M. Beale. A derivation of conjugate gradients. In: Numerical methods for nonlinear optimization, edited by F. A. Lootsma (Academic Press, London, 1972); pp. 39–43.\n\n\n\nR. Bergmann, O. P. Ferreira, E. M. Santos and J. C. Souza. The difference of convex algorithm on Hadamard manifolds, arXiv preprint (2023).\n\n\n\nR. Bergmann and P.-Y. Gousenbourger. A variational model for data fitting on manifolds by minimizing the acceleration of a Bézier curve. Frontiers in Applied Mathematics and Statistics 4 (2018), arXiv:1807.10090.\n\n\n\nR. Bergmann and R. Herzog. Intrinsic formulation of KKT conditions and constraint qualifications on smooth manifolds. SIAM Journal on Optimization 29, 2423–2444 (2019), arXiv:1804.06214.\n\n\n\nR. Bergmann, R. Herzog and H. Jasa. The Riemannian Convex Bundle Method, preprint (2024), arXiv:2402.13670.\n\n\n\nR. Bergmann, R. Herzog, M. Silva Louzeiro, D. Tenbrinck and J. Vidal-Núñez. Fenchel duality theory and a primal-dual algorithm on Riemannian manifolds. Foundations of Computational Mathematics 21, 1465–1504 (2021), arXiv:1908.02022.\n\n\n\nR. Bergmann, J. Persch and G. Steidl. 
A parallel Douglas–Rachford algorithm for minimizing ROF-like functionals on images with values in symmetric Hadamard manifolds. SIAM Journal on Imaging Sciences 9, 901–937 (2016), arXiv:1512.02814.\n\n\n\nD. P. Bertsekas. Convex Optimization Algorithms (Athena Scientific, 2015); p. 576.\n\n\n\nP. B. Borckmans, M. Ishteva and P.-A. Absil. A Modified Particle Swarm Optimization Algorithm for the Best Low Multilinear Rank Approximation of Higher-Order Tensors. In: 7th International Conference on Swarm Intelligence (Springer Berlin Heidelberg, 2010); pp. 13–23.\n\n\n\nN. Boumal. An Introduction to Optimization on Smooth Manifolds. First Edition (Cambridge University Press, 2023).\n\n\n\nM. P. do Carmo. Riemannian Geometry. Mathematics: Theory & Applications (Birkhäuser Boston, Inc., Boston, MA, 1992); p. xiv+300.\n\n\n\nA. Chambolle and T. Pock. A first-order primal-dual algorithm for convex problems with applications to imaging. Journal of Mathematical Imaging and Vision 40, 120–145 (2011).\n\n\n\nS. Colutto, F. Fruhauf, M. Fuchs and O. Scherzer. The CMA-ES on Riemannian Manifolds to Reconstruct Shapes in 3-D Voxel Images. IEEE Transactions on Evolutionary Computation 14, 227–245 (2010).\n\n\n\nA. R. Conn, N. I. Gould and P. L. Toint. Trust Region Methods (Society for Industrial and Applied Mathematics, 2000).\n\n\n\nY. H. Dai and Y. Yuan. A Nonlinear Conjugate Gradient Method with a Strong Global Convergence Property. SIAM Journal on Optimization 10, 177–182 (1999).\n\n\n\nW. Diepeveen and J. Lellmann. An Inexact Semismooth Newton Method on Riemannian Manifolds with Application to Duality-Based Total Variation Denoising. SIAM Journal on Imaging Sciences 14, 1565–1600 (2021), arXiv:2102.10309.\n\n\n\nA. S. El-Bakry, R. A. Tapia, T. Tsuchiya and Y. Zhang. On the formulation and theory of the Newton interior-point method for nonlinear programming. Journal of Optimization Theory and Applications 89, 507–541 (1996).\n\n\n\nO. Ferreira and P. R. Oliveira. 
Subgradient algorithm on Riemannian manifolds. Journal of Optimization Theory and Applications 97, 93–104 (1998).\n\n\n\nO. Ferreira and P. R. Oliveira. Proximal point algorithm on Riemannian manifolds. Optimization. A Journal of Mathematical Programming and Operations Research 51, 257–270 (2002).\n\n\n\nP. T. Fletcher. Geodesic regression and the theory of least squares on Riemannian manifolds. International Journal of Computer Vision 105, 171–185 (2013).\n\n\n\nR. Fletcher. Practical Methods of Optimization. 2 Edition, A Wiley-Interscience Publication (John Wiley & Sons Ltd., 1987).\n\n\n\nR. Fletcher and C. M. Reeves. Function minimization by conjugate gradients. The Computer Journal 7, 149–154 (1964).\n\n\n\nG. N. Grapiglia and G. F. Stella. An Adaptive Riemannian Gradient Method Without Function Evaluations. Journal of Optimization Theory and Applications 197, 1140–1160 (2023).\n\n\n\nW. W. Hager and H. Zhang. A survey of nonlinear conjugate gradient methods. Pacific Journal of Optimization 2, 35–58 (2006).\n\n\n\nW. W. Hager and H. Zhang. A New Conjugate Gradient Method with Guaranteed Descent and an Efficient Line Search. SIAM Journal on Optimization 16, 170–192 (2005).\n\n\n\nN. Hansen. The CMA Evolution Strategy: A Tutorial. ArXiv Preprint (2023).\n\n\n\nM. Hestenes and E. Stiefel. Methods of conjugate gradients for solving linear systems. Journal of Research of the National Bureau of Standards 49, 409 (1952).\n\n\n\nN. Hoseini Monjezi, S. Nobakhtian and M. R. Pouryayevali. A proximal bundle algorithm for nonsmooth optimization on Riemannian manifolds. IMA Journal of Numerical Analysis 43, 293–325 (2023).\n\n\n\nW. Huang. Optimization algorithms on Riemannian manifolds with applications. Ph.D. Thesis, Florida State University (2014).\n\n\n\nW. Huang, P.-A. Absil and K. A. Gallivan. A Riemannian BFGS method without differentiated retraction for nonconvex optimization problems. SIAM Journal on Optimization 28, 470–495 (2018).\n\n\n\nW. Huang, K. A. 
Gallivan and P.-A. Absil. A Broyden class of quasi-Newton methods for Riemannian optimization. SIAM Journal on Optimization 25, 1660–1685 (2015).\n\n\n\nB. Iannazzo and M. Porcelli. The Riemannian Barzilai–Borwein method with nonmonotone line search and the matrix geometric mean computation. IMA Journal of Numerical Analysis 38, 495–517 (2017).\n\n\n\nH. Karcher. Riemannian center of mass and mollifier smoothing. Communications on Pure and Applied Mathematics 30, 509–541 (1977).\n\n\n\nZ. Lai and A. Yoshise. Riemannian Interior Point Methods for Constrained Optimization on Manifolds. Journal of Optimization Theory and Applications 201, 433–469 (2024), arXiv:2203.09762.\n\n\n\nC. Liu and N. Boumal. Simple algorithms for optimization on Riemannian manifolds with constraints. Applied Mathematics & Optimization (2019), arXiv:1901.10000.\n\n\n\nY. Liu and C. Storey. Efficient generalized conjugate gradient algorithms, part 1: Theory. Journal of Optimization Theory and Applications 69, 129–137 (1991).\n\n\n\nD. Nguyen. Operator-Valued Formulas for Riemannian Gradient and Hessian and Families of Tractable Metrics in Riemannian Optimization. Journal of Optimization Theory and Applications 198, 135–164 (2023), arXiv:2009.10159.\n\n\n\nJ. Nocedal and S. J. Wright. Numerical Optimization. 2 Edition (Springer, New York, 2006).\n\n\n\nR. Peeters. On a Riemannian version of the Levenberg-Marquardt algorithm. Serie Research Memoranda 0011 (VU University Amsterdam, Faculty of Economics, Business Administration and Econometrics, 1993).\n\n\n\nE. Polak and G. Ribière. Note sur la convergence de méthodes de directions conjuguées. Revue française d’informatique et de recherche opérationnelle 3, 35–43 (1969).\n\n\n\nM. J. Powell. Restart procedures for the conjugate gradient method. Mathematical Programming 12, 241–254 (1977).\n\n\n\nJ. C. Souza and P. R. Oliveira. A proximal point algorithm for DC functions on Hadamard manifolds. 
Journal of Global Optimization 63, 797–810 (2015).\n\n\n\nM. Weber and S. Sra. Riemannian Optimization via Frank-Wolfe Methods. Mathematical Programming 199, 525–556 (2022).\n\n\n\nH. Zhang and S. Sra. Towards Riemannian accelerated gradient methods, arXiv Preprint, 1806.02812 (2018).\n\n\n\n","category":"page"},{"location":"tutorials/StochasticGradientDescent/#How-to-run-stochastic-gradient-descent","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"","category":"section"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"Ronny Bergmann","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"This tutorial illustrates how to use the stochastic_gradient_descent solver and different DirectionUpdateRules to introduce the average or momentum variant, see Stochastic Gradient Descent.","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"Computationally, we look at a very simple but large scale problem, the Riemannian Center of Mass or Fréchet mean: for given points p_i \\in \\mathcal M, i=1,\\ldots,N, this optimization problem reads","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"\\operatorname*{argmin}_{x\\in\\mathcal M} \\frac{1}{2}\\sum_{i=1}^{N}\n \\operatorname{d}^2_{\\mathcal M}(x, p_i)","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"which of course can be (and is) solved by a gradient descent, see the introductory tutorial or Statistics in Manifolds.jl. 
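On the sphere, such a full-gradient approach might look like the following sketch (a hypothetical minimal example with made-up data, not the data set used below):

```julia
# Hypothetical sketch: Fréchet mean on the sphere via plain gradient descent.
using Manifolds, Manopt
using ManifoldDiff: grad_distance

M = Sphere(2)
pts = [rand(M) for _ in 1:100]
# cost 1/(2N) Σᵢ d²(p, pᵢ) and its full Riemannian gradient
f(M, p) = sum(distance(M, p, q)^2 for q in pts) / (2 * length(pts))
grad_f(M, p) = sum(grad_distance(M, q, p) for q in pts) / length(pts)
m = gradient_descent(M, f, grad_f, rand(M))
```

Every evaluation of grad_f here touches all N points, which is exactly the cost the stochastic variant below avoids.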
If N is very large, evaluating the complete gradient might be quite expensive. A remedy is to evaluate only one of the terms at a time and choose a random order for these.","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"We first initialize the packages","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"using Manifolds, Manopt, Random, BenchmarkTools, ManifoldDiff\nusing ManifoldDiff: grad_distance\nRandom.seed!(42);","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"We next generate a (little) large(r) data set","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"n = 5000\nσ = π / 12\nM = Sphere(2)\np = 1 / sqrt(2) * [1.0, 0.0, 1.0]\ndata = [exp(M, p, σ * rand(M; vector_at=p)) for i in 1:n];","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"Note that due to the construction of the points as zero mean tangent vectors, the mean should be very close to our initial point p.","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"In order to use the stochastic gradient, we now need a function that returns the vector of gradients. 
There are two ways to define it in Manopt.jl: either as a single function that returns a vector, or as a vector of functions.","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"The first variant is of course easier to define, but the second is more efficient when only evaluating one of the gradients.","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"For the mean, the gradient is","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"\\operatorname{grad} f(x) = \\sum_{i=1}^{N} \\operatorname{grad} f_i(x), \\quad \\text{where } \\operatorname{grad} f_i(x) = -\\log_x p_i","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"which we define in Manopt.jl in two different ways: either as one function returning all gradients as a vector (see gradF), or, maybe more fitting for a large scale problem, as a vector of small gradient functions (see gradf)","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"F(M, p) = 1 / (2 * n) * sum(map(q -> distance(M, p, q)^2, data))\ngradF(M, p) = [grad_distance(M, p, q) for q in data]\ngradf = [(M, p) -> grad_distance(M, q, p) for q in data];\np0 = 1 / sqrt(3) * [1.0, 1.0, 1.0]","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"3-element Vector{Float64}:\n 0.5773502691896258\n 0.5773502691896258\n 
0.5773502691896258","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"The calls are only slightly different, but notice that accessing the second gradient element requires evaluating all logs in the first function, while we only call one of the functions in the second array of functions. So while you can use both gradF and gradf in the following call, the second one is (much) faster:","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"p_opt1 = stochastic_gradient_descent(M, gradF, p)","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"3-element Vector{Float64}:\n -0.4124602512237471\n 0.7450900936719854\n 0.38494647999455556","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"@benchmark stochastic_gradient_descent($M, $gradF, $p0)","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"BenchmarkTools.Trial: 1 sample with 1 evaluation.\n Single result which took 6.745 s (9.20% GC) to evaluate,\n with a memory estimate of 7.83 GiB, over 200213003 allocations.","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"p_opt2 = stochastic_gradient_descent(M, gradf, p0)","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient 
descent","text":"3-element Vector{Float64}:\n 0.6828818855405705\n 0.17545293717581142\n 0.7091463863243863","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"@benchmark stochastic_gradient_descent($M, $gradf, $p0)","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"BenchmarkTools.Trial: 2418 samples with 1 evaluation.\n Range (min … max): 645.651 μs … 13.692 ms ┊ GC (min … max): 0.00% … 83.74%\n Time (median): 1.673 ms ┊ GC (median): 0.00%\n Time (mean ± σ): 2.064 ms ± 1.297 ms ┊ GC (mean ± σ): 7.64% ± 12.73%\n\n ▄▆▅▆▄▂▅▂▁▁ █ \n ███████████▇█▅▆▆▆▅▆▄▄▄▅▄▃▃▇█▆▃▂▂▁▁▁▁▂▁▁▂▁▁▂▂▁▁▁▁▂▁▁▁▁▁▁▂▁▁▁▂ ▃\n 646 μs Histogram: frequency by time 6.66 ms <\n\n Memory estimate: 861.16 KiB, allocs estimate: 20050.","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"This result is reasonably close. But we can improve it by using a DirectionUpdateRule, namely:","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"On the one hand MomentumGradient, which requires both the manifold and the initial value, to keep track of the iterate and parallel transport the last direction to the current iterate. The necessary vector_transport_method keyword is set to a suitable default on every manifold, see default_vector_transport_method. 
We get:","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"p_opt3 = stochastic_gradient_descent(\n M, gradf, p0; direction=MomentumGradient(; direction=StochasticGradient())\n)","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"3-element Vector{Float64}:\n 0.46671468324066123\n -0.3797901161381924\n 0.7987095042199683","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"MG = MomentumGradient(; direction=StochasticGradient());\n@benchmark stochastic_gradient_descent($M, $gradf, p=$p0; direction=$MG)","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"BenchmarkTools.Trial: 758 samples with 1 evaluation.\n Range (min … max): 5.351 ms … 19.265 ms ┊ GC (min … max): 0.00% … 49.66%\n Time (median): 5.819 ms ┊ GC (median): 0.00%\n Time (mean ± σ): 6.587 ms ± 1.647 ms ┊ GC (mean ± σ): 9.89% ± 14.09%\n\n ▇█▇▇▅▄▄▃▂▁▂▂▁▁ ▁ ▁▁▂ ▁ \n ███████████████▆▅█▇▃▄▃▅▃▃▃▄▄▄▄▆▅▆▆█████▆▆█▃▆▇█▇▅▇▄▅▇▄▅▅▄▃▄ █\n 5.35 ms Histogram: log(frequency) by time 10.8 ms <\n\n Memory estimate: 7.71 MiB, allocs estimate: 200052.","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"And on the other hand the AverageGradient computes an average of the last n gradients. 
This is done by","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"p_opt4 = stochastic_gradient_descent(\n M, gradf, p0; direction=AverageGradient(; n=10, direction=StochasticGradient()), debug=[],\n)","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"3-element Vector{Float64}:\n 0.5834888085913609\n 0.7756423891832663\n 0.2406651082951343","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"AG = AverageGradient(; n=10, direction=StochasticGradient(M));\n@benchmark stochastic_gradient_descent($M, $gradf, p=$p0; direction=$AG, debug=[])","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"BenchmarkTools.Trial: 205 samples with 1 evaluation.\n Range (min … max): 20.092 ms … 44.055 ms ┊ GC (min … max): 0.00% … 38.10%\n Time (median): 23.228 ms ┊ GC (median): 0.00%\n Time (mean ± σ): 24.400 ms ± 3.185 ms ┊ GC (mean ± σ): 8.50% ± 8.32%\n\n ▂▆█ ▁▁ \n ▃▃▁▃▄▅███▆▃▂▂▁▁▁▁▁▂▃▄▅████▆▅▃▂▂▃▁▂▁▁▂▁▁▁▂▁▁▁▁▂▁▂▁▁▁▁▁▁▂▁▁▁▂ ▃\n 20.1 ms Histogram: frequency by time 34.9 ms <\n\n Memory estimate: 21.90 MiB, allocs estimate: 600077.","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"Note that the default StoppingCriterion is a fixed number of iterations which helps the comparison here.","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"For both update rules 
we have to internally specify that we are still in the stochastic setting, since both rules can also be used with the IdentityUpdateRule within gradient_descent.","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"For this not-that-large-scale example we can of course also use a gradient descent with ArmijoLinesearch,","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"fullGradF(M, p) = 1/n*sum(grad_distance(M, q, p) for q in data)\np_opt5 = gradient_descent(M, F, fullGradF, p0; stepsize=ArmijoLinesearch())","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"3-element Vector{Float64}:\n 0.7050420977039097\n -0.006374163035874202\n 0.7091368066253959","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"but in general it is expected to be a bit slow.","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"AL = ArmijoLinesearch();\n@benchmark gradient_descent($M, $F, $fullGradF, $p0; stepsize=$AL)","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"BenchmarkTools.Trial: 23 samples with 1 evaluation.\n Range (min … max): 215.369 ms … 243.399 ms ┊ GC (min … max): 8.75% … 4.88%\n Time (median): 219.790 ms ┊ GC (median): 9.23%\n Time (mean ± σ): 221.107 ms ± 6.691 ms ┊ GC (mean ± σ): 9.09% ± 1.34%\n\n █ █ ▃ ▃ ▃ \n 
█▁▇█▇▁█▁▇█▇▇▇▁▇▁▇▁█▇▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▇▁▁▁▁▁▁▁▁▁▁▁▁▇ ▁\n 215 ms Histogram: frequency by time 243 ms <\n\n Memory estimate: 230.56 MiB, allocs estimate: 6338502.","category":"page"},{"location":"tutorials/StochasticGradientDescent/#Technical-details","page":"How to run stochastic gradient descent","title":"Technical details","text":"","category":"section"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"This tutorial is cached. It was last run on the following package versions.","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"using Pkg\nPkg.status()","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"Status `~/work/Manopt.jl/Manopt.jl/tutorials/Project.toml`\n [6e4b80f9] BenchmarkTools v1.5.0\n⌅ [5ae59095] Colors v0.12.11\n [31c24e10] Distributions v0.25.113\n [26cc04aa] FiniteDifferences v0.12.32\n [7073ff75] IJulia v1.26.0\n [8ac3fa9e] LRUCache v1.6.1\n [af67fdf4] ManifoldDiff v0.3.13\n [1cead3c2] Manifolds v0.10.7\n [3362f125] ManifoldsBase v0.15.22\n [0fc0a36d] Manopt v0.5.3 `..`\n [91a5bcdd] Plots v1.40.9\n [731186ca] RecursiveArrayTools v3.27.4\nInfo Packages marked with ⌅ have new versions available but compatibility constraints restrict them from upgrading. 
To see why use `status --outdated`","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"using Dates\nnow()","category":"page"},{"location":"tutorials/StochasticGradientDescent/","page":"How to run stochastic gradient descent","title":"How to run stochastic gradient descent","text":"2024-11-21T20:41:43.615","category":"page"},{"location":"contributing/","page":"Contributing to Manopt.jl","title":"Contributing to Manopt.jl","text":"EditURL = \"https://github.com/JuliaManifolds/Manopt.jl/blob/master/CONTRIBUTING.md\"","category":"page"},{"location":"contributing/#Contributing-to-Manopt.jl","page":"Contributing to Manopt.jl","title":"Contributing to Manopt.jl","text":"","category":"section"},{"location":"contributing/","page":"Contributing to Manopt.jl","title":"Contributing to Manopt.jl","text":"First, thanks for taking the time to contribute. Any contribution is appreciated and welcome.","category":"page"},{"location":"contributing/","page":"Contributing to Manopt.jl","title":"Contributing to Manopt.jl","text":"The following is a set of guidelines for contributing to Manopt.jl.","category":"page"},{"location":"contributing/#Table-of-contents","page":"Contributing to Manopt.jl","title":"Table of contents","text":"","category":"section"},{"location":"contributing/","page":"Contributing to Manopt.jl","title":"Contributing to Manopt.jl","text":"Contributing to Manopt.jl - Table of Contents\nI just have a question\nHow can I file an issue?\nHow can I contribute?\nAdd a missing method\nProvide a new algorithm\nProvide a new example\nCode style","category":"page"},{"location":"contributing/#I-just-have-a-question","page":"Contributing to Manopt.jl","title":"I just have a question","text":"","category":"section"},{"location":"contributing/","page":"Contributing to Manopt.jl","title":"Contributing to Manopt.jl","text":"The developer can most easily be reached in the Julia 
Slack channel #manifolds. You can apply for the Julia Slack workspace here if you haven't joined yet. You can also ask your question on discourse.julialang.org.","category":"page"},{"location":"contributing/#How-can-I-file-an-issue?","page":"Contributing to Manopt.jl","title":"How can I file an issue?","text":"","category":"section"},{"location":"contributing/","page":"Contributing to Manopt.jl","title":"Contributing to Manopt.jl","text":"If you found a bug or want to propose a feature, please open an issue within the GitHub repository.","category":"page"},{"location":"contributing/#How-can-I-contribute?","page":"Contributing to Manopt.jl","title":"How can I contribute?","text":"","category":"section"},{"location":"contributing/#Add-a-missing-method","page":"Contributing to Manopt.jl","title":"Add a missing method","text":"","category":"section"},{"location":"contributing/","page":"Contributing to Manopt.jl","title":"Contributing to Manopt.jl","text":"There are still a lot of methods missing within the optimization framework of Manopt.jl, be it functions, gradients, differentials, proximal maps, step size rules or stopping criteria. If you notice a missing method and can contribute an implementation, please do so; the maintainers will try to help with the necessary details. Even providing a single new method is a good contribution.","category":"page"},{"location":"contributing/#Provide-a-new-algorithm","page":"Contributing to Manopt.jl","title":"Provide a new algorithm","text":"","category":"section"},{"location":"contributing/","page":"Contributing to Manopt.jl","title":"Contributing to Manopt.jl","text":"A main contribution you can provide is another algorithm that is not yet included in the package. An algorithm is always based on a concrete type of an AbstractManoptProblem storing the main information of the task and a concrete type of an AbstractManoptSolverState storing all information that needs to be known to the solver in general. 
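To give an impression of the shape such a contribution takes, here is an entirely hypothetical minimal solver sketch; only the interface functions (initialize_solver!, step_solver!, get_gradient!, get_manifold, retract!) are real, all other names are invented for illustration:

```julia
# Hypothetical skeleton of a new gradient-based solver for Manopt.jl.
using Manopt, ManifoldsBase

mutable struct ExampleDescentState{P,T} <: AbstractManoptSolverState
    p::P              # current iterate
    X::T              # memory for the current direction
    stepsize::Float64
end

function Manopt.initialize_solver!(amp::AbstractManoptProblem, s::ExampleDescentState)
    get_gradient!(amp, s.X, s.p)   # fill the direction memory once
    return s
end

function Manopt.step_solver!(amp::AbstractManoptProblem, s::ExampleDescentState, i)
    M = get_manifold(amp)
    get_gradient!(amp, s.X, s.p)
    # in-place retraction instead of exp!, as recommended for new solvers
    retract!(M, s.p, s.p, -s.stepsize * s.X)
    return s
end
```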
The actual algorithm is split into an initialization phase, see initialize_solver!, and the implementation of the i-th step of the solver itself, see step_solver!. For these two functions, it would be great if a new algorithm uses functions from the ManifoldsBase.jl interface as generically as possible. For example, if possible use retract!(M,q,p,X) in favor of exp!(M,q,p,X) to perform a step starting in p in direction X (in place of q), since the exponential map might be too expensive to evaluate or might not be available on a certain manifold. See Retractions and inverse retractions for more details. Further, if possible, prefer retract!(M,q,p,X) in favor of retract(M,p,X), since a computation in place of a suitable variable q reduces memory allocations.","category":"page"},{"location":"contributing/","page":"Contributing to Manopt.jl","title":"Contributing to Manopt.jl","text":"Usually, the methods implemented in Manopt.jl also have a high-level interface that is easier to call, creates the necessary problem and options structure and calls the solver.","category":"page"},{"location":"contributing/","page":"Contributing to Manopt.jl","title":"Contributing to Manopt.jl","text":"The two technical functions initialize_solver! and step_solver! 
should be documented with technical details, while the high level interface should usually provide a general description and some literature references to the algorithm at hand.","category":"page"},{"location":"contributing/#Provide-a-new-example","page":"Contributing to Manopt.jl","title":"Provide a new example","text":"","category":"section"},{"location":"contributing/","page":"Contributing to Manopt.jl","title":"Contributing to Manopt.jl","text":"Example problems are available at ManoptExamples.jl, where also their reproducible Quarto-Markdown files are stored.","category":"page"},{"location":"contributing/#Code-style","page":"Contributing to Manopt.jl","title":"Code style","text":"","category":"section"},{"location":"contributing/","page":"Contributing to Manopt.jl","title":"Contributing to Manopt.jl","text":"Try to follow the documentation guidelines from the Julia documentation as well as Blue Style. Run JuliaFormatter.jl on the repository in the way set in the .JuliaFormatter.toml file, which enforces a number of conventions consistent with the Blue Style. 
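For reference, running the formatter from the Julia REPL can be as simple as this sketch:

```julia
# Format the repository; JuliaFormatter picks up the settings from the
# .JuliaFormatter.toml file found in the given directory.
using JuliaFormatter
format(".")
```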
Furthermore, vale is run on both Markdown and code files, checking documentation and source code comments.","category":"page"},{"location":"contributing/","page":"Contributing to Manopt.jl","title":"Contributing to Manopt.jl","text":"Please follow a few internal conventions:","category":"page"},{"location":"contributing/","page":"Contributing to Manopt.jl","title":"Contributing to Manopt.jl","text":"It is preferred that the AbstractManoptProblem's struct contains information about the general structure of the problem.\nAny implemented function should be accompanied by its mathematical formulae if a closed form exists.\nAbstractManoptProblem and helping functions are stored within the plan/ folder and sorted by properties of the problem and/or solver at hand.\nThe solver state is usually stored with the solver itself.\nWithin the source code of one algorithm, following the state, the high level interface should be next, then the initialization, then the step.\nOtherwise an alphabetical order of functions is preferable.\nThe preceding implies that the mutating variant of a function follows the non-mutating variant.\nThere should be no dangling = signs.\nAlways add a newline between things of different types (struct/method/const).\nAlways add a newline between methods for different functions (including mutating/nonmutating variants).\nPrefer to have no newline between methods for the same function; when reasonable, merge the documentation strings.\nAll import/using/include should be in the main module file.","category":"page"},{"location":"contributing/","page":"Contributing to Manopt.jl","title":"Contributing to Manopt.jl","text":"Concerning documentation","category":"page"},{"location":"contributing/","page":"Contributing to Manopt.jl","title":"Contributing to Manopt.jl","text":"If possible, provide both mathematical formulae and literature references using DocumenterCitations.jl and BibTeX.\nAlways document all input variables and keyword 
arguments","category":"page"},{"location":"contributing/","page":"Contributing to Manopt.jl","title":"Contributing to Manopt.jl","text":"If you implement an algorithm with a certain numerical example in mind, it would be great if this could be added to the ManoptExamples.jl package as well.","category":"page"},{"location":"helpers/checks/#Verifying-gradients-and-Hessians","page":"Checks","title":"Verifying gradients and Hessians","text":"","category":"section"},{"location":"helpers/checks/","page":"Checks","title":"Checks","text":"If you have computed a gradient or differential and you are not sure whether it is correct, the following functions allow you to verify it numerically.","category":"page"},{"location":"helpers/checks/","page":"Checks","title":"Checks","text":"Modules = [Manopt]\nPages = [\"checks.jl\"]","category":"page"},{"location":"helpers/checks/#Manopt.check_Hessian","page":"Checks","title":"Manopt.check_Hessian","text":"check_Hessian(M, f, grad_f, Hess_f, p=rand(M), X=rand(M; vector_at=p), Y=rand(M, vector_at=p); kwargs...)\n\nVerify numerically whether the Hessian Hess_f(M, p, X) of f(M,p) is correct.\n\nFor this either a second-order retraction or a critical point p of f is required. The approximation is then\n\nf(operatornameretr_p(tX)) = f(p) + toperatornamegrad f(p) X + fract^22operatornameHessf(p)X X + mathcal O(t^3)\n\nor in other words, that the error between the function f and its second-order Taylor expansion behaves like mathcal O(t^3), which indicates that the Hessian is correct, cf. 
also [Bou23, Section 6.8].\n\nNote that if the errors are below the given tolerance and the method is exact, no plot is generated.\n\nKeyword arguments\n\ncheck_grad=true: verify that operatornamegradf(p) T_pmathcal M.\ncheck_linearity=true: verify that the Hessian is linear, see is_Hessian_linear using a, b, X, and Y\ncheck_symmetry=true: verify that the Hessian is symmetric, see is_Hessian_symmetric\ncheck_vector=false: verify that operatornameHess f(p)X T_pmathcal M using is_vector.\nmode=:Default: specify the mode for the verification; the default assumption is that the retraction provided is of second order. Otherwise one can also verify the Hessian if the point p is a critical point. Then set the mode to :CriticalPoint to use gradient_descent to find a critical point. Note: this requires (and evaluates) new tangent vectors X and Y\natol, rtol: (same defaults as isapprox) tolerances that are passed down to all checks\na, b: two real values to verify linearity of the Hessian (if check_linearity=true)\nN=101: number of points to verify within the log_range default range 10^-810^0\nexactness_tol=1e-12: if all errors are below this tolerance, the verification is considered to be exact\nio=nothing: provide an IO to print the result to\ngradient=grad_f(M, p): instead of the gradient function you can also provide the gradient at p directly\nHessian=Hess_f(M, p, X): instead of the Hessian function you can provide the result of operatornameHess f(p)X directly. Note that evaluations of the Hessian might still be necessary for checking linearity and symmetry and/or when using :CriticalPoint mode.\nlimits=(1e-8,1): specify the limits in the log_range\nlog_range=range(limits[1], limits[2]; length=N): specify the range of points (in log scale) to sample the Hessian line\nplot=false: whether to plot the resulting verification (requires Plots.jl to be loaded). The plot is in log-log-scale. 
This is returned and can then also be saved.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nslope_tol=0.1: tolerance for the slope (global) of the approximation\nerror=:none: how to handle errors, possible values: :error, :info, :warn\nwindow=nothing: specify window sizes within the log_range that are used for the slope estimation. The default is to use all window sizes 2:N.\n\nThe kwargs... are also passed down to the check_vector and the check_gradient call, such that tolerances can easily be set.\n\nWhile check_vector and the retraction_method are also passed to the inner call to check_gradient, this inner check_gradient is meant just for verification, so it neither throws an error nor produces a plot itself.\n\n\n\n\n\n","category":"function"},{"location":"helpers/checks/#Manopt.check_differential","page":"Checks","title":"Manopt.check_differential","text":"check_differential(M, F, dF, p=rand(M), X=rand(M; vector_at=p); kwargs...)\n\nCheck numerically whether the differential dF(M,p,X) of F(M,p) is correct.\n\nThis implements the method described in [Bou23, Section 4.8].\n\nNote that if the errors are below the given tolerance and the method is exact, no plot is generated.\n\nKeyword arguments\n\nexactness_tol=1e-12: if all errors are below this tolerance, the differential is considered to be exact\nio=nothing: provide an IO to print the result to\nlimits=(1e-8,1): specify the limits in the log_range\nlog_range=range(limits[1], limits[2]; length=N): specify the range of points (in log scale) to sample the differential line\nN=101: number of points to verify within the log_range default range 10^-810^0\nname=\"differential\": name to display in the plot\nplot=false: whether to plot the result (if Plots.jl is loaded). The plot is in log-log-scale. 
This is returned and can then also be saved.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nslope_tol=0.1: tolerance for the slope (global) of the approximation\nthrow_error=false: throw an error message if the differential is wrong\nwindow=nothing: specify window sizes within the log_range that are used for the slope estimation. The default is to use all window sizes 2:N.\n\n\n\n\n\n","category":"function"},{"location":"helpers/checks/#Manopt.check_gradient","page":"Checks","title":"Manopt.check_gradient","text":"check_gradient(M, f, grad_f, p=rand(M), X=rand(M; vector_at=p); kwargs...)\n\nVerify numerically whether the gradient grad_f(M,p) of f(M,p) is correct, that is whether\n\nf(operatornameretr_p(tX)) = f(p) + toperatornamegrad f(p) X + mathcal O(t^2)\n\nor in other words, that the error between the function f and its first-order Taylor expansion behaves like mathcal O(t^2), which indicates that the gradient is correct, cf. also [Bou23, Section 4.8].\n\nNote that if the errors are below the given tolerance and the method is exact, no plot is generated.\n\nKeyword arguments\n\ncheck_vector=true: verify that operatornamegradf(p) T_pmathcal M using is_vector.\nexactness_tol=1e-12: if all errors are below this tolerance, the gradient is considered to be exact\nio=nothing: provide an IO to print the result to\ngradient=grad_f(M, p): instead of the gradient function you can also provide the gradient at p directly\nlimits=(1e-8,1): specify the limits in the log_range\nlog_range=range(limits[1], limits[2]; length=N): specify the range of points (in log scale) to sample the gradient line\nN=101: number of points to verify within the log_range default range 10^-810^0\nplot=false: whether to plot the result (if Plots.jl is loaded). The plot is in log-log-scale. 
This is returned and can then also be saved.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nslope_tol=0.1: tolerance for the slope (global) of the approximation\natol, rtol: (same defaults as isapprox) tolerances that are passed down to is_vector if check_vector is set to true\nerror=:none: how to handle errors, possible values: :error, :info, :warn\nwindow=nothing: specify window sizes within the log_range that are used for the slope estimation. The default is to use all window sizes 2:N.\n\nThe remaining keyword arguments are also passed down to the check_vector call, such that tolerances can easily be set.\n\n\n\n\n\n","category":"function"},{"location":"helpers/checks/#Manopt.is_Hessian_linear","page":"Checks","title":"Manopt.is_Hessian_linear","text":"is_Hessian_linear(M, Hess_f, p,\n X=rand(M; vector_at=p), Y=rand(M; vector_at=p), a=randn(), b=randn();\n error=:none, io=nothing, kwargs...\n)\n\nVerify whether the Hessian function Hess_f fulfills linearity,\n\noperatornameHess f(p)aX + bY = aoperatornameHess f(p)X\n + boperatornameHess f(p)Y\n\nwhich is checked using isapprox and the keyword arguments are passed to this function.\n\nOptional arguments\n\nerror=:none: how to handle errors, possible values: :error, :info, :warn\n\n\n\n\n\n","category":"function"},{"location":"helpers/checks/#Manopt.is_Hessian_symmetric","page":"Checks","title":"Manopt.is_Hessian_symmetric","text":"is_Hessian_symmetric(M, Hess_f, p=rand(M), X=rand(M; vector_at=p), Y=rand(M; vector_at=p);\nerror=:none, io=nothing, atol::Real=0, rtol::Real=atol>0 ? 0 : √eps\n\n)\n\nVerify whether the Hessian function Hess_f fulfills symmetry, which means that\n\noperatornameHess f(p)X Y = X operatornameHess f(p)Y\n\nwhich is checked using isapprox and the kwargs... 
are passed to this function.\n\nOptional arguments\n\natol, rtol with the same defaults as the usual isapprox\nerror=:none: how to handle errors, possible values: :error, :info, :warn\n\n\n\n\n\n","category":"function"},{"location":"helpers/checks/#Literature","page":"Checks","title":"Literature","text":"","category":"section"},{"location":"helpers/checks/","page":"Checks","title":"Checks","text":"N. Boumal. An Introduction to Optimization on Smooth Manifolds. First Edition (Cambridge University Press, 2023).\n\n\n\n","category":"page"},{"location":"solvers/difference_of_convex/#Difference-of-convex","page":"Difference of Convex","title":"Difference of convex","text":"","category":"section"},{"location":"solvers/difference_of_convex/","page":"Difference of Convex","title":"Difference of Convex","text":"CurrentModule = Manopt","category":"page"},{"location":"solvers/difference_of_convex/#solver-difference-of-convex","page":"Difference of Convex","title":"Difference of convex algorithm","text":"","category":"section"},{"location":"solvers/difference_of_convex/","page":"Difference of Convex","title":"Difference of Convex","text":"difference_of_convex_algorithm\ndifference_of_convex_algorithm!","category":"page"},{"location":"solvers/difference_of_convex/#Manopt.difference_of_convex_algorithm","page":"Difference of Convex","title":"Manopt.difference_of_convex_algorithm","text":"difference_of_convex_algorithm(M, f, g, ∂h, p=rand(M); kwargs...)\ndifference_of_convex_algorithm(M, mdco, p; kwargs...)\ndifference_of_convex_algorithm!(M, f, g, ∂h, p; kwargs...)\ndifference_of_convex_algorithm!(M, mdco, p; kwargs...)\n\nCompute the difference of convex algorithm [BFSS23] to minimize\n\n operatornameargmin_pmathcal M g(p) - h(p)\n\nwhere you need to provide f(p) = g(p) - h(p), g and the subdifferential ∂h of h.\n\nThis algorithm performs the following steps given a start point p = p^(0). 
Then repeat for k=01\n\nTake X^(k) ∂h(p^(k))\nSet the next iterate to the solution of the subproblem\n\n p^(k+1) operatornameargmin_q mathcal M g(q) - X^(k) log_p^(k)q\n\nuntil the stopping criterion (see the stopping_criterion keyword) is fulfilled.\n\nKeyword arguments\n\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\ngradient=nothing: specify operatornamegrad f, for debug / analysis or enhancing the stopping_criterion=\ngrad_g=nothing: specify the gradient of g. If specified, a subsolver is automatically set up.\nstopping_criterion=StopAfterIteration(200)|StopWhenChangeLess(1e-8): a functor indicating that the stopping criterion is fulfilled\ng=nothing: specify the function g. If specified, a subsolver is automatically set up.\nsub_cost=LinearizedDCCost(g, p, initial_vector): a cost to be used within the default sub_problem. This is used to define the sub_objective= keyword and has hence no effect, if you set sub_objective directly.\nsub_grad=LinearizedDCGrad(grad_g, p, initial_vector; evaluation=evaluation): gradient to be used within the default sub_problem. This is used to define the sub_objective= keyword and has hence no effect, if you set sub_objective directly.\nsub_hess: (a finite difference approximation using sub_grad by default): specify a Hessian of the sub_cost, which the default solver, see sub_state= needs. This is used to define the sub_objective= keyword and has hence no effect, if you set sub_objective directly.\nsub_kwargs=(;): a named tuple of keyword arguments that are passed to decorate_objective! of the sub solver's objective, the decorate_state! 
of the subsolver's state, and the sub state constructor itself.\nsub_objective: a gradient or Hessian objective based on sub_cost=, sub_grad=, and sub_hess if provided; the objective used within sub_problem. This is used to define the sub_problem= keyword and has hence no effect, if you set sub_problem directly.\nsub_state=(GradientDescentState or TrustRegionsState if sub_hessian is provided): a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.\nsub_problem=DefaultManoptProblem(M, sub_objective): specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_stopping_criterion=StopAfterIteration(300)|StopWhenStepsizeLess(1e-9)|StopWhenGradientNormLess(1e-9): a stopping criterion used within the default sub_state=. This is used to define the sub_state= keyword and has hence no effect, if you set sub_state directly.\nsub_stepsize=ArmijoLinesearch(M): specify a step size used within the sub_state. This is used to define the sub_state= keyword and has hence no effect, if you set sub_state directly.\nX=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal M to specify the representation of a tangent vector\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. 
To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/difference_of_convex/#Manopt.difference_of_convex_algorithm!","page":"Difference of Convex","title":"Manopt.difference_of_convex_algorithm!","text":"difference_of_convex_algorithm(M, f, g, ∂h, p=rand(M); kwargs...)\ndifference_of_convex_algorithm(M, mdco, p; kwargs...)\ndifference_of_convex_algorithm!(M, f, g, ∂h, p; kwargs...)\ndifference_of_convex_algorithm!(M, mdco, p; kwargs...)\n\nCompute the difference of convex algorithm [BFSS23] to minimize\n\n operatornameargmin_pmathcal M g(p) - h(p)\n\nwhere you need to provide f(p) = g(p) - h(p), g and the subdifferential ∂h of h.\n\nThis algorithm performs the following steps given a start point p = p^(0). Then repeat for k=01\n\nTake X^(k) ∂h(p^(k))\nSet the next iterate to the solution of the subproblem\n\n p^(k+1) operatornameargmin_q mathcal M g(q) - X^(k) log_p^(k)q\n\nuntil the stopping criterion (see the stopping_criterion keyword) is fulfilled.\n\nKeyword arguments\n\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\ngradient=nothing: specify operatornamegrad f, for debug / analysis or enhancing the stopping_criterion=\ngrad_g=nothing: specify the gradient of g. If specified, a subsolver is automatically set up.\nstopping_criterion=StopAfterIteration(200)|StopWhenChangeLess(1e-8): a functor indicating that the stopping criterion is fulfilled\ng=nothing: specify the function g. If specified, a subsolver is automatically set up.\nsub_cost=LinearizedDCCost(g, p, initial_vector): a cost to be used within the default sub_problem. 
This is used to define the sub_objective= keyword and has hence no effect, if you set sub_objective directly.\nsub_grad=LinearizedDCGrad(grad_g, p, initial_vector; evaluation=evaluation): gradient to be used within the default sub_problem. This is used to define the sub_objective= keyword and has hence no effect, if you set sub_objective directly.\nsub_hess: (a finite difference approximation using sub_grad by default): specify a Hessian of the sub_cost, which the default solver, see sub_state= needs. This is used to define the sub_objective= keyword and has hence no effect, if you set sub_objective directly.\nsub_kwargs=(;): a named tuple of keyword arguments that are passed to decorate_objective! of the sub solver's objective, the decorate_state! of the subsolver's state, and the sub state constructor itself.\nsub_objective: a gradient or Hessian objective based on sub_cost=, sub_grad=, and sub_hess if provided; the objective used within sub_problem. This is used to define the sub_problem= keyword and has hence no effect, if you set sub_problem directly.\nsub_state=(GradientDescentState or TrustRegionsState if sub_hessian is provided): a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.\nsub_problem=DefaultManoptProblem(M, sub_objective): specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_stopping_criterion=StopAfterIteration(300)|StopWhenStepsizeLess(1e-9)|StopWhenGradientNormLess(1e-9): a stopping criterion used within the default sub_state=. This is used to define the sub_state= keyword and has hence no effect, if you set sub_state directly.\nsub_stepsize=ArmijoLinesearch(M): specify a step size used within the sub_state. 
This is used to define the sub_state= keyword and has hence no effect, if you set sub_state directly.\nX=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal M to specify the representation of a tangent vector\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/difference_of_convex/#solver-difference-of-convex-proximal-point","page":"Difference of Convex","title":"Difference of convex proximal point","text":"","category":"section"},{"location":"solvers/difference_of_convex/","page":"Difference of Convex","title":"Difference of Convex","text":"difference_of_convex_proximal_point\ndifference_of_convex_proximal_point!","category":"page"},{"location":"solvers/difference_of_convex/#Manopt.difference_of_convex_proximal_point","page":"Difference of Convex","title":"Manopt.difference_of_convex_proximal_point","text":"difference_of_convex_proximal_point(M, grad_h, p=rand(M); kwargs...)\ndifference_of_convex_proximal_point(M, mdcpo, p=rand(M); kwargs...)\ndifference_of_convex_proximal_point!(M, grad_h, p; kwargs...)\ndifference_of_convex_proximal_point!(M, mdcpo, p; kwargs...)\n\nCompute the difference of convex proximal point algorithm [SO15] to minimize\n\n operatornameargmin_pmathcal M g(p) - h(p)\n\nwhere you have to provide the subgradient ∂h of h and either\n\nthe proximal map operatornameprox_λg of g as a function prox_g(M, λ, p) or prox_g(M, q, λ, p)\nthe functions g and grad_g to compute the proximal map using a sub solver\nyour own sub-solver, specified by sub_problem= and sub_state=\n\nThis algorithm performs the following steps given a start point p = p^(0). 
Then repeat for k=01\n\nX^(k) operatornamegrad h(p^(k))\nq^(k) = operatornameretr_p^(k)(λ_kX^(k))\nr^(k) = operatornameprox_λ_kg(q^(k))\nX^(k) = operatornameretr^-1_p^(k)(r^(k))\nCompute a stepsize s_k and\nset p^(k+1) = operatornameretr_p^(k)(s_kX^(k)).\n\nuntil the stopping_criterion is fulfilled.\n\nSee [ACOO20] for more details on the modified variant, where steps 4-6 are slightly changed, since here the classical proximal point method for DC functions is obtained for s_k = 1 and one can hence employ a usual line search method.\n\nKeyword arguments\n\nλ=(k -> 1/2): a function returning the sequence of prox parameters λ_k\ncost=nothing: provide the cost f, for debug reasons / analysis\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\ngradient=nothing: specify operatornamegrad f, for debug / analysis or enhancing the stopping_criterion\nprox_g=nothing: specify a proximal map for the sub problem or both of the following\ng=nothing: specify the function g.\ngrad_g=nothing: specify the gradient of g. 
If both g and grad_g are specified, a subsolver is automatically set up.\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstepsize=ConstantLength(): a functor inheriting from Stepsize to determine a step size\nstopping_criterion=StopAfterIteration(200)|StopWhenChangeLess(1e-8): a functor indicating that the stopping criterion is fulfilled. A StopWhenGradientNormLess(1e-8) is added with |, when a gradient is provided.\nsub_cost=ProximalDCCost(g, copy(M, p), λ(1)): cost to be used within the default sub_problem that is initialized as soon as g is provided. This is used to define the sub_objective= keyword and has hence no effect, if you set sub_objective directly.\nsub_grad=ProximalDCGrad(grad_g, copy(M, p), λ(1); evaluation=evaluation): gradient to be used within the default sub_problem, that is initialized as soon as grad_g is provided. This is used to define the sub_objective= keyword and has hence no effect, if you set sub_objective directly.\nsub_hess: (a finite difference approximation using sub_grad by default): specify a Hessian of the sub_cost, which the default solver, see sub_state= needs.\nsub_kwargs=(;): a named tuple of keyword arguments that are passed to decorate_objective! of the sub solver's objective, the decorate_state! of the subsolver's state, and the sub state constructor itself.\nsub_objective: a gradient or Hessian objective based on sub_cost=, sub_grad=, and sub_hess if provided; the objective used within sub_problem. 
This is used to define the sub_problem= keyword and has hence no effect, if you set sub_problem directly.\nsub_problem=DefaultManoptProblem(M, sub_objective): specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state=(GradientDescentState or TrustRegionsState if sub_hessian is provided): a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.\nsub_stopping_criterion=StopAfterIteration(300)|StopWhenGradientNormLess(1e-8): a functor indicating that the stopping criterion is fulfilled. This is used to define the sub_state= keyword and has hence no effect, if you set sub_state directly.\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/difference_of_convex/#Manopt.difference_of_convex_proximal_point!","page":"Difference of Convex","title":"Manopt.difference_of_convex_proximal_point!","text":"difference_of_convex_proximal_point(M, grad_h, p=rand(M); kwargs...)\ndifference_of_convex_proximal_point(M, mdcpo, p=rand(M); kwargs...)\ndifference_of_convex_proximal_point!(M, grad_h, p; kwargs...)\ndifference_of_convex_proximal_point!(M, mdcpo, p; kwargs...)\n\nCompute the difference of convex proximal point algorithm [SO15] to minimize\n\n operatornameargmin_pmathcal M g(p) - h(p)\n\nwhere you have to provide the subgradient ∂h of h and either\n\nthe proximal map operatornameprox_λg of g as a function prox_g(M, λ, p) or prox_g(M, q, λ, p)\nthe functions g and grad_g to compute the proximal map using a sub solver\nyour own sub-solver, specified by sub_problem= and sub_state=\n\nThis algorithm performs the following steps given a start point p = p^(0). 
Then repeat for k=01\n\nX^(k) operatornamegrad h(p^(k))\nq^(k) = operatornameretr_p^(k)(λ_kX^(k))\nr^(k) = operatornameprox_λ_kg(q^(k))\nX^(k) = operatornameretr^-1_p^(k)(r^(k))\nCompute a stepsize s_k and\nset p^(k+1) = operatornameretr_p^(k)(s_kX^(k)).\n\nuntil the stopping_criterion is fulfilled.\n\nSee [ACOO20] for more details on the modified variant, where steps 4-6 are slightly changed, since here the classical proximal point method for DC functions is obtained for s_k = 1 and one can hence employ a usual line search method.\n\nKeyword arguments\n\nλ=(k -> 1/2): a function returning the sequence of prox parameters λ_k\ncost=nothing: provide the cost f, for debug reasons / analysis\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\ngradient=nothing: specify operatornamegrad f, for debug / analysis or enhancing the stopping_criterion\nprox_g=nothing: specify a proximal map for the sub problem or both of the following\ng=nothing: specify the function g.\ngrad_g=nothing: specify the gradient of g. 
If both g and grad_g are specified, a subsolver is automatically set up.\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstepsize=ConstantLength(): a functor inheriting from Stepsize to determine a step size\nstopping_criterion=StopAfterIteration(200)|StopWhenChangeLess(1e-8): a functor indicating that the stopping criterion is fulfilled. A StopWhenGradientNormLess(1e-8) is added with |, when a gradient is provided.\nsub_cost=ProximalDCCost(g, copy(M, p), λ(1)): cost to be used within the default sub_problem that is initialized as soon as g is provided. This is used to define the sub_objective= keyword and has hence no effect, if you set sub_objective directly.\nsub_grad=ProximalDCGrad(grad_g, copy(M, p), λ(1); evaluation=evaluation): gradient to be used within the default sub_problem, that is initialized as soon as grad_g is provided. This is used to define the sub_objective= keyword and has hence no effect, if you set sub_objective directly.\nsub_hess: (a finite difference approximation using sub_grad by default): specify a Hessian of the sub_cost, which the default solver, see sub_state= needs.\nsub_kwargs=(;): a named tuple of keyword arguments that are passed to decorate_objective! of the sub solver's objective, the decorate_state! of the subsolver's state, and the sub state constructor itself.\nsub_objective: a gradient or Hessian objective based on sub_cost=, sub_grad=, and sub_hess if provided; the objective used within sub_problem. 
This is used to define the sub_problem= keyword and has hence no effect, if you set sub_problem directly.\nsub_problem=DefaultManoptProblem(M, sub_objective): specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state=(GradientDescentState or TrustRegionsState if sub_hessian is provided): a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.\nsub_stopping_criterion=StopAfterIteration(300)|StopWhenGradientNormLess(1e-8): a functor indicating that the stopping criterion is fulfilled. This is used to define the sub_state= keyword and has hence no effect, if you set sub_state directly.\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/difference_of_convex/#Solver-states","page":"Difference of Convex","title":"Solver states","text":"","category":"section"},{"location":"solvers/difference_of_convex/","page":"Difference of Convex","title":"Difference of Convex","text":"DifferenceOfConvexState\nDifferenceOfConvexProximalState","category":"page"},{"location":"solvers/difference_of_convex/#Manopt.DifferenceOfConvexState","page":"Difference of Convex","title":"Manopt.DifferenceOfConvexState","text":"DifferenceOfConvexState{Pr,St,P,T,SC<:StoppingCriterion} <:\n AbstractManoptSolverState\n\nA struct to store the current state of the difference_of_convex_algorithm. 
It comes in two forms, depending on the realisation of the subproblem.\n\nFields\n\np::P: a point on the manifold mathcal M storing the current iterate\nX::T: a tangent vector at the point p on the manifold mathcal M storing a subgradient at the current iterate\nsub_problem::Union{AbstractManoptProblem, F}: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state::Union{AbstractManoptSolverState, F}: a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.\nstop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled\n\nThe sub task requires a method to solve\n\n operatornameargmin_qmathcal M g(q) - X log_p q\n\nBesides a problem and a state, one can also provide a function and an AbstractEvaluationType, respectively, to indicate a closed form solution for the sub task.\n\nConstructors\n\nDifferenceOfConvexState(M, sub_problem, sub_state; kwargs...)\nDifferenceOfConvexState(M, sub_solver; evaluation=InplaceEvaluation(), kwargs...)\n\nGenerate the state either using a solver from Manopt, given by an AbstractManoptProblem sub_problem and an AbstractManoptSolverState sub_state, or a closed form solution sub_solver for the sub-problem; the function is expected to be of the form (M, p, X) -> q or (M, q, p, X) -> q, where by default its AbstractEvaluationType evaluation is in-place of q. 
Here the elements passed to that function are the current iterate p and the subgradient X of h.\n\nFurther keyword arguments\n\np=rand(M): a point on the manifold mathcal M to specify the initial value\nstopping_criterion=StopAfterIteration(200): a functor indicating that the stopping criterion is fulfilled\nX=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal M to specify the representation of a tangent vector\n\n\n\n\n\n","category":"type"},{"location":"solvers/difference_of_convex/#Manopt.DifferenceOfConvexProximalState","page":"Difference of Convex","title":"Manopt.DifferenceOfConvexProximalState","text":"DifferenceOfConvexProximalState{P, T, Pr, St, S<:Stepsize, SC<:StoppingCriterion, RTR<:AbstractRetractionMethod, ITR<:AbstractInverseRetractionMethod}\n <: AbstractSubProblemSolverState\n\nA struct to store the current state of the algorithm. It comes in two forms, depending on the realisation of the subproblem.\n\nFields\n\ninverse_retraction_method::AbstractInverseRetractionMethod: an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nretraction_method::AbstractRetractionMethod: a retraction operatornameretr to use, see the section on retractions\np::P: a point on the manifold mathcal M storing the current iterate\nq::P: a point on the manifold mathcal M storing the gradient step\nr::P: a point on the manifold mathcal M storing the result of the proximal map\nstepsize::Stepsize: a functor inheriting from Stepsize to determine a step size\nstop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled\nX, Y: the current gradient and descent direction, respectively; their common type is set by the keyword X\nsub_problem::Union{AbstractManoptProblem, F}: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state::Union{AbstractManoptSolverState, F}: a state to specify the sub solver to 
use. For a closed form solution, this indicates the type of function.\n\nConstructor\n\nDifferenceOfConvexProximalState(M::AbstractManifold, sub_problem, sub_state; kwargs...)\n\nconstruct a difference of convex proximal point state\n\nDifferenceOfConvexProximalState(M::AbstractManifold, sub_problem;\n evaluation=AllocatingEvaluation(), kwargs...\n\n)\n\nconstruct a difference of convex proximal point state, where sub_problem is a closed form solution with evaluation as type of evaluation.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nsub_problem: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state: a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.\n\nKeyword arguments\n\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\np=rand(M): a point on the manifold mathcal M to specify the initial value\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstepsize=ConstantLength(): a functor inheriting from Stepsize to determine a step size\nstopping_criterion=StopWhenChangeLess(1e-8): a functor indicating that the stopping criterion is fulfilled\nX=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal M to specify the representation of a tangent vector\n\n\n\n\n\n","category":"type"},{"location":"solvers/difference_of_convex/#The-difference-of-convex-objective","page":"Difference of Convex","title":"The difference of convex objective","text":"","category":"section"},{"location":"solvers/difference_of_convex/","page":"Difference of Convex","title":"Difference of 
Convex","text":"ManifoldDifferenceOfConvexObjective","category":"page"},{"location":"solvers/difference_of_convex/#Manopt.ManifoldDifferenceOfConvexObjective","page":"Difference of Convex","title":"Manopt.ManifoldDifferenceOfConvexObjective","text":"ManifoldDifferenceOfConvexObjective{E} <: AbstractManifoldCostObjective{E}\n\nSpecify an objective for a difference_of_convex_algorithm.\n\nThe objective f mathcal M ℝ is given as\n\n f(p) = g(p) - h(p)\n\nwhere both g and h are convex, lower semicontinuous and proper. Furthermore, the subdifferential ∂h of h is required.\n\nFields\n\ncost: an implementation of f(p) = g(p)-h(p) as a function f(M,p).\n∂h!!: a deterministic version of ∂h mathcal M Tmathcal M, in the sense that calling ∂h(M, p) returns a subgradient of h at p, and if there is more than one, it returns a deterministic choice.\n\nNote that the subdifferential might be given in two possible signatures\n\n∂h(M,p) which does an AllocatingEvaluation\n∂h!(M, X, p) which does an InplaceEvaluation in place of X.\n\n\n\n\n\n","category":"type"},{"location":"solvers/difference_of_convex/","page":"Difference of Convex","title":"Difference of Convex","text":"as well as for the corresponding sub problem","category":"page"},{"location":"solvers/difference_of_convex/","page":"Difference of Convex","title":"Difference of Convex","text":"LinearizedDCCost\nLinearizedDCGrad","category":"page"},{"location":"solvers/difference_of_convex/#Manopt.LinearizedDCCost","page":"Difference of Convex","title":"Manopt.LinearizedDCCost","text":"LinearizedDCCost\n\nA functor (M,q) → ℝ to represent the inner problem of a ManifoldDifferenceOfConvexObjective. 
This is a cost function of the form\n\n F_p_kX_k(p) = g(p) - X_k log_p_kp\n\nfor a point p_k and a tangent vector X_k at p_k (for example outer iterates) that are stored within this functor as well.\n\nFields\n\ng a function\npk a point on a manifold\nXk a tangent vector at pk\n\nBoth interim values can be set using set_parameter!(::LinearizedDCCost, ::Val{:p}, p) and set_parameter!(::LinearizedDCCost, ::Val{:X}, X), respectively.\n\nConstructor\n\nLinearizedDCCost(g, p, X)\n\n\n\n\n\n","category":"type"},{"location":"solvers/difference_of_convex/#Manopt.LinearizedDCGrad","page":"Difference of Convex","title":"Manopt.LinearizedDCGrad","text":"LinearizedDCGrad\n\nA functor (M,X,p) → ℝ to represent the gradient of the inner problem of a ManifoldDifferenceOfConvexObjective. This is a gradient function of the form\n\n F_p_kX_k(p) = g(p) - X_k log_p_kp\n\nIts gradient is obtained using F=F_1(F_2(p)), where F_1(X) = X_kX and F_2(p) = log_p_kp, together with the chain rule and the adjoint differential of the logarithmic map with respect to its argument for D^*F_2(p):\n\n operatornamegrad F(q) = operatornamegrad f(q) - DF_2^*(q)X\n\nfor a point pk and a tangent vector Xk at pk (the outer iterates) that are stored within this functor as well.\n\nFields\n\ngrad_g!! 
the gradient of g (see also LinearizedDCCost)\npk a point on a manifold\nXk a tangent vector at pk\n\nBoth interim values can be set using set_parameter!(::LinearizedDCGrad, ::Val{:p}, p) and set_parameter!(::LinearizedDCGrad, ::Val{:X}, X), respectively.\n\nConstructor\n\nLinearizedDCGrad(grad_g, p, X; evaluation=AllocatingEvaluation())\n\nHere you specify whether grad_g is an AllocatingEvaluation or an InplaceEvaluation, while this functor still provides both signatures.\n\n\n\n\n\n","category":"type"},{"location":"solvers/difference_of_convex/","page":"Difference of Convex","title":"Difference of Convex","text":"ManifoldDifferenceOfConvexProximalObjective","category":"page"},{"location":"solvers/difference_of_convex/#Manopt.ManifoldDifferenceOfConvexProximalObjective","page":"Difference of Convex","title":"Manopt.ManifoldDifferenceOfConvexProximalObjective","text":"ManifoldDifferenceOfConvexProximalObjective{E} <: Problem\n\nSpecify an objective for a difference_of_convex_proximal_point algorithm. 
The problem is of the form\n\n operatorname*argmin_pmathcal M g(p) - h(p)\n\nwhere both g and h are convex, lower semicontinuous and proper.\n\nFields\n\ncost: implementation of f(p) = g(p)-h(p)\ngradient: the gradient of the cost\ngrad_h!!: a function operatornamegradh mathcal M Tmathcal M.\n\nNote that both gradients might be given in two possible signatures, as allocating or in-place.\n\nConstructor\n\nManifoldDifferenceOfConvexProximalObjective(gradh; cost=nothing, gradient=nothing)\n\nand note that neither cost nor gradient is required for the algorithm itself, only for eventual debug output or stopping criteria.\n\n\n\n\n\n","category":"type"},{"location":"solvers/difference_of_convex/","page":"Difference of Convex","title":"Difference of Convex","text":"as well as for the corresponding sub problems","category":"page"},{"location":"solvers/difference_of_convex/","page":"Difference of Convex","title":"Difference of Convex","text":"ProximalDCCost\nProximalDCGrad","category":"page"},{"location":"solvers/difference_of_convex/#Manopt.ProximalDCCost","page":"Difference of Convex","title":"Manopt.ProximalDCCost","text":"ProximalDCCost\n\nA functor (M, p) → ℝ to represent the inner cost function of a ManifoldDifferenceOfConvexProximalObjective. This is the cost function of the proximal map of g.\n\n F_p_k(p) = frac12λd_mathcal M(p_kp)^2 + g(p)\n\nfor a point pk and a proximal parameter λ.\n\nFields\n\ng - a function\npk - a point on a manifold\nλ - the prox parameter\n\nBoth interim values can be set using set_parameter!(::ProximalDCCost, ::Val{:p}, p) and set_parameter!(::ProximalDCCost, ::Val{:λ}, λ), respectively.\n\nConstructor\n\nProximalDCCost(g, p, λ)\n\n\n\n\n\n","category":"type"},{"location":"solvers/difference_of_convex/#Manopt.ProximalDCGrad","page":"Difference of Convex","title":"Manopt.ProximalDCGrad","text":"ProximalDCGrad\n\nA functor (M,X,p) → ℝ to represent the gradient of the inner cost function of a ManifoldDifferenceOfConvexProximalObjective. 
This is the gradient of the proximal map cost function of g. Based on\n\n F_p_k(p) = frac12λd_mathcal M(p_kp)^2 + g(p)\n\nit reads\n\n operatornamegrad F_p_k(p) = operatornamegrad g(p) - frac1λlog_p p_k\n\nfor a point pk and a proximal parameter λ.\n\nFields\n\ngrad_g - a gradient function\npk - a point on a manifold\nλ - the prox parameter\n\nBoth interim values can be set using set_parameter!(::ProximalDCGrad, ::Val{:p}, p) and set_parameter!(::ProximalDCGrad, ::Val{:λ}, λ), respectively.\n\nConstructor\n\nProximalDCGrad(grad_g, pk, λ; evaluation=AllocatingEvaluation())\n\nHere you specify whether grad_g is an AllocatingEvaluation or an InplaceEvaluation, while this functor still always provides both signatures.\n\n\n\n\n\n","category":"type"},{"location":"solvers/difference_of_convex/#Helper-functions","page":"Difference of Convex","title":"Helper functions","text":"","category":"section"},{"location":"solvers/difference_of_convex/","page":"Difference of Convex","title":"Difference of Convex","text":"get_subtrahend_gradient","category":"page"},{"location":"solvers/difference_of_convex/#Manopt.get_subtrahend_gradient","page":"Difference of Convex","title":"Manopt.get_subtrahend_gradient","text":"X = get_subtrahend_gradient(amp, q)\nget_subtrahend_gradient!(amp, X, q)\n\nEvaluate the (sub)gradient of the subtrahend h from within a ManifoldDifferenceOfConvexObjective amp at the point q (in place of X).\n\nThe evaluation is done in place of X for the !-variant. The T=AllocatingEvaluation problem might still allocate memory within. 
When the non-mutating variant is called with a T=InplaceEvaluation, memory for the result is allocated.\n\n\n\n\n\nX = get_subtrahend_gradient(M::AbstractManifold, dcpo::ManifoldDifferenceOfConvexProximalObjective, p)\nget_subtrahend_gradient!(M::AbstractManifold, X, dcpo::ManifoldDifferenceOfConvexProximalObjective, p)\n\nEvaluate the gradient of the subtrahend h from within a ManifoldDifferenceOfConvexProximalObjective dcpo at the point p (in place of X).\n\n\n\n\n\n","category":"function"},{"location":"solvers/difference_of_convex/#sec-cp-technical-details","page":"Difference of Convex","title":"Technical details","text":"","category":"section"},{"location":"solvers/difference_of_convex/","page":"Difference of Convex","title":"Difference of Convex","text":"The difference_of_convex_algorithm and difference_of_convex_proximal_point solvers require the following functions of a manifold to be available","category":"page"},{"location":"solvers/difference_of_convex/","page":"Difference of Convex","title":"Difference of Convex","text":"A retract!(M, q, p, X); it is recommended to set the default_retraction_method to a favourite retraction. If this default is set, a retraction_method= or retraction_method_dual= (for mathcal N) does not have to be specified.\nAn inverse_retract!(M, X, p, q); it is recommended to set the default_inverse_retraction_method to a favourite inverse retraction. If this default is set, an inverse_retraction_method= or inverse_retraction_method_dual= (for mathcal N) does not have to be specified.","category":"page"},{"location":"solvers/difference_of_convex/","page":"Difference of Convex","title":"Difference of Convex","text":"By default, one of the stopping criteria is StopWhenChangeLess, which either requires","category":"page"},{"location":"solvers/difference_of_convex/","page":"Difference of Convex","title":"Difference of Convex","text":"A retract!(M, q, p, X); it is recommended to set the default_retraction_method to a favourite retraction. 
If this default is set, a retraction_method= or retraction_method_dual= (for mathcal N) does not have to be specified.\nAn inverse_retract!(M, X, p, q); it is recommended to set the default_inverse_retraction_method to a favourite inverse retraction. If this default is set, an inverse_retraction_method= or inverse_retraction_method_dual= (for mathcal N) does not have to be specified, or the distance(M, p, q) for said default inverse retraction.\nA copyto!(M, q, p) and copy(M,p) for points.\nBy default the tangent vector storing the gradient is initialized by calling zero_vector(M,p).\neverything the subsolver requires, which by default is trust_regions, or gradient_descent if you do not provide a Hessian.","category":"page"},{"location":"solvers/difference_of_convex/#Literature","page":"Difference of Convex","title":"Literature","text":"","category":"section"},{"location":"solvers/difference_of_convex/","page":"Difference of Convex","title":"Difference of Convex","text":"Y. T. Almeida, J. X. Cruz Neto, P. R. Oliveira and J. C. Oliveira Souza. A modified proximal point method for DC functions on Hadamard manifolds. Computational Optimization and Applications 76, 649–673 (2020).\n\n\n\nR. Bergmann, O. P. Ferreira, E. M. Santos and J. C. Souza. The difference of convex algorithm on Hadamard manifolds, arXiv preprint (2023).\n\n\n\nJ. C. Souza and P. R. Oliveira. A proximal point algorithm for DC functions on Hadamard manifolds. 
Journal of Global Optimization 63, 797–810 (2015).\n\n\n\n","category":"page"},{"location":"solvers/interior_point_Newton/#Interior-point-Newton-method","page":"Interior Point Newton","title":"Interior point Newton method","text":"","category":"section"},{"location":"solvers/interior_point_Newton/","page":"Interior Point Newton","title":"Interior Point Newton","text":"CurrentModule = Manopt","category":"page"},{"location":"solvers/interior_point_Newton/","page":"Interior Point Newton","title":"Interior Point Newton","text":"interior_point_Newton\ninterior_point_Newton!","category":"page"},{"location":"solvers/interior_point_Newton/#Manopt.interior_point_Newton","page":"Interior Point Newton","title":"Manopt.interior_point_Newton","text":"interior_point_Newton(M, f, grad_f, Hess_f, p=rand(M); kwargs...)\ninterior_point_Newton(M, cmo::ConstrainedManifoldObjective, p=rand(M); kwargs...)\ninterior_point_Newton!(M, f, grad_f, Hess_f, p; kwargs...)\ninterior_point_Newton!(M, cmo::ConstrainedManifoldObjective, p; kwargs...)\n\nperform the interior point Newton method following [LY24].\n\nIn order to solve the constrained problem\n\nbeginaligned\nmin_p mathcal M f(p)\ntextsubject toquadg_i(p) 0 quad text for i= 1 m\nquad h_j(p)=0 quad text for j=1n\nendaligned\n\nthis algorithm iteratively solves a linear system based on extending the KKT system by a slack variable s.\n\noperatornameJ F(p μ λ s)X Y Z W = -F(p μ λ s)\ntext where \nX T_pmathcal M YW ℝ^m Z ℝ^n\n\nsee CondensedKKTVectorFieldJacobian and CondensedKKTVectorField, respectively, for the reduced form this is usually solved in. 
From the resulting X and Z in the reduced form, the other two, Y, W, are then computed.\n\nFrom the gradient (XYZW) at the current iterate (p μ λ s), a line search is performed using the KKTVectorFieldNormSq norm of the KKT vector field (squared) and its gradient KKTVectorFieldNormSqGradient together with the InteriorPointCentralityCondition.\n\nNote that since the vector field F includes the gradients of the constraint functions g h, its gradient or Jacobian requires the Hessians of the constraints.\n\nFor that search direction, a line search is performed that additionally ensures that the constraints are further fulfilled.\n\nInput\n\nM: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\ngrad_f: the (Riemannian) gradient operatornamegradf: \mathcal M → T_{p}\mathcal M of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place\nHess_f: the (Riemannian) Hessian operatornameHessf: T_{p}\mathcal M → T_{p}\mathcal M of f as a function (M, p, X) -> Y or a function (M, Y, p, X) -> Y computing Y in-place\np: a point on the manifold mathcal M\n\nor a ConstrainedManifoldObjective cmo containing f, grad_f, Hess_f, and the constraints\n\nKeyword arguments\n\nThe keyword arguments related to the constraints (the first eleven) are ignored if you pass a ConstrainedManifoldObjective cmo\n\ncentrality_condition=missing; an additional condition when to accept a step size. This can be used to ensure that the resulting iterate is still an interior point if you provide a check (N,q) -> true/false, where N is the manifold of the step_problem.\nequality_constraints=nothing: the number n of equality constraints.\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating their result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). 
Since usually the first argument is the manifold, the modified argument is the second.\ng=nothing: the inequality constraints\ngrad_g=nothing: the gradient of the inequality constraints\ngrad_h=nothing: the gradient of the equality constraints\ngradient_range=nothing: specify how gradients are represented, where nothing is equivalent to NestedPowerRepresentation\ngradient_equality_range=gradient_range: specify how the gradients of the equality constraints are represented\ngradient_inequality_range=gradient_range: specify how the gradients of the inequality constraints are represented\nh=nothing: the equality constraints\nHess_g=nothing: the Hessian of the inequality constraints\nHess_h=nothing: the Hessian of the equality constraints\ninequality_constraints=nothing: the number m of inequality constraints.\nλ=ones(length(h(M, p))): the Lagrange multiplier with respect to the equality constraints h\nμ=ones(length(g(M, p))): the Lagrange multiplier with respect to the inequality constraints g\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nρ=μ's / length(μ): store the orthogonality μ's/m to compute the barrier parameter β in the sub problem.\ns=copy(μ): initial value for the slack variables\nσ=calculate_σ(M, cmo, p, μ, λ, s): scaling factor for the barrier parameter β in the sub problem, which is updated during the iterations\nstep_objective: a ManifoldGradientObjective of the norm of the KKT vector field KKTVectorFieldNormSq and its gradient KKTVectorFieldNormSqGradient\nstep_problem: the manifold mathcal M ℝ^m ℝ^n ℝ^m together with the step_objective as the problem the linesearch stepsize= employs for determining a step size\nstep_state: the StepsizeState with point and search direction\nstepsize=ArmijoLinesearch(): a functor inheriting from Stepsize to determine a step size with the centrality_condition keyword as additional criterion to accept a step, if this is 
provided\nstopping_criterion=StopAfterIteration(200)|StopWhenKKTResidualLess(1e-8): a functor indicating that the stopping criterion is fulfilled, by default depending on the residual of the KKT vector field or a maximal number of steps, whichever hits first.\nsub_kwargs=(;): keyword arguments to decorate the sub options, for example debug, that automatically respects the main solver's debug options (like sub-sampling) as well\nsub_objective: the SymmetricLinearSystemObjective modelling the system of equations to use in the sub solver; it includes the CondensedKKTVectorFieldJacobian mathcal A(X) and the CondensedKKTVectorField b in mathcal A(X) + b = 0 we aim to solve. This is used to define the sub_problem= keyword and hence has no effect if you set sub_problem directly.\nsub_stopping_criterion=StopAfterIteration(manifold_dimension(M))|StopWhenRelativeResidualLess(c,1e-8), where c = lVert b rVert from the system to solve. This is used to define the sub_state= keyword and hence has no effect if you set sub_state directly.\nsub_problem=DefaultManoptProblem(M, sub_objective): specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state=ConjugateResidualState: a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.\nvector_space=Rn: a function that, given an integer, returns the manifold to be used for the vector space components ℝ^mℝ^n\nX=zero_vector(M,p): the initial gradient with respect to p.\nY=zero(μ): the initial gradient with respect to μ\nZ=zero(λ): the initial gradient with respect to λ\nW=zero(s): the initial gradient with respect to s\n\nAs well as internal keywords used to set up these given keywords like _step_M, _step_p, _sub_M, _sub_p, and _sub_X, that should not be changed.\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! 
for objective decorators, respectively.\n\nnote: Note\nThe centrality_condition=missing disables the centrality check during the line search, but you can pass InteriorPointCentralityCondition(cmo, γ), where γ is a constant, to activate this check.\n\nOutput\n\nThe obtained approximate constrained minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/interior_point_Newton/#Manopt.interior_point_Newton!","page":"Interior Point Newton","title":"Manopt.interior_point_Newton!","text":"interior_point_Newton(M, f, grad_f, Hess_f, p=rand(M); kwargs...)\ninterior_point_Newton(M, cmo::ConstrainedManifoldObjective, p=rand(M); kwargs...)\ninterior_point_Newton!(M, f, grad_f, Hess_f, p; kwargs...)\ninterior_point_Newton!(M, cmo::ConstrainedManifoldObjective, p; kwargs...)\n\nperform the interior point Newton method following [LY24].\n\nIn order to solve the constrained problem\n\nbeginaligned\nmin_p mathcal M f(p)\ntextsubject toquadg_i(p) 0 quad text for i= 1 m\nquad h_j(p)=0 quad text for j=1n\nendaligned\n\nthis algorithm iteratively solves a linear system based on extending the KKT system by a slack variable s.\n\noperatornameJ F(p μ λ s)X Y Z W = -F(p μ λ s)\ntext where \nX T_pmathcal M YW ℝ^m Z ℝ^n\n\nsee CondensedKKTVectorFieldJacobian and CondensedKKTVectorField, respectively, for the reduced form this is usually solved in. 
From the resulting X and Z in the reduced form, the other two, Y, W, are then computed.\n\nFrom the gradient (XYZW) at the current iterate (p μ λ s), a line search is performed using the KKTVectorFieldNormSq norm of the KKT vector field (squared) and its gradient KKTVectorFieldNormSqGradient together with the InteriorPointCentralityCondition.\n\nNote that since the vector field F includes the gradients of the constraint functions g h, its gradient or Jacobian requires the Hessians of the constraints.\n\nFor that search direction, a line search is performed that additionally ensures that the constraints are further fulfilled.\n\nInput\n\nM: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\ngrad_f: the (Riemannian) gradient operatornamegradf: \mathcal M → T_{p}\mathcal M of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place\nHess_f: the (Riemannian) Hessian operatornameHessf: T_{p}\mathcal M → T_{p}\mathcal M of f as a function (M, p, X) -> Y or a function (M, Y, p, X) -> Y computing Y in-place\np: a point on the manifold mathcal M\n\nor a ConstrainedManifoldObjective cmo containing f, grad_f, Hess_f, and the constraints\n\nKeyword arguments\n\nThe keyword arguments related to the constraints (the first eleven) are ignored if you pass a ConstrainedManifoldObjective cmo\n\ncentrality_condition=missing; an additional condition when to accept a step size. This can be used to ensure that the resulting iterate is still an interior point if you provide a check (N,q) -> true/false, where N is the manifold of the step_problem.\nequality_constraints=nothing: the number n of equality constraints.\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating their result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). 
Since usually the first argument is the manifold, the modified argument is the second.\ng=nothing: the inequality constraints\ngrad_g=nothing: the gradient of the inequality constraints\ngrad_h=nothing: the gradient of the equality constraints\ngradient_range=nothing: specify how gradients are represented, where nothing is equivalent to NestedPowerRepresentation\ngradient_equality_range=gradient_range: specify how the gradients of the equality constraints are represented\ngradient_inequality_range=gradient_range: specify how the gradients of the inequality constraints are represented\nh=nothing: the equality constraints\nHess_g=nothing: the Hessian of the inequality constraints\nHess_h=nothing: the Hessian of the equality constraints\ninequality_constraints=nothing: the number m of inequality constraints.\nλ=ones(length(h(M, p))): the Lagrange multiplier with respect to the equality constraints h\nμ=ones(length(g(M, p))): the Lagrange multiplier with respect to the inequality constraints g\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nρ=μ's / length(μ): store the orthogonality μ's/m to compute the barrier parameter β in the sub problem.\ns=copy(μ): initial value for the slack variables\nσ=calculate_σ(M, cmo, p, μ, λ, s): scaling factor for the barrier parameter β in the sub problem, which is updated during the iterations\nstep_objective: a ManifoldGradientObjective of the norm of the KKT vector field KKTVectorFieldNormSq and its gradient KKTVectorFieldNormSqGradient\nstep_problem: the manifold mathcal M ℝ^m ℝ^n ℝ^m together with the step_objective as the problem the linesearch stepsize= employs for determining a step size\nstep_state: the StepsizeState with point and search direction\nstepsize=ArmijoLinesearch(): a functor inheriting from Stepsize to determine a step size with the centrality_condition keyword as additional criterion to accept a step, if this is 
provided\nstopping_criterion=StopAfterIteration(200)|StopWhenKKTResidualLess(1e-8): a functor indicating that the stopping criterion is fulfilled, by default depending on the residual of the KKT vector field or a maximal number of steps, whichever hits first.\nsub_kwargs=(;): keyword arguments to decorate the sub options, for example debug, that automatically respects the main solver's debug options (like sub-sampling) as well\nsub_objective: the SymmetricLinearSystemObjective modelling the system of equations to use in the sub solver; it includes the CondensedKKTVectorFieldJacobian mathcal A(X) and the CondensedKKTVectorField b in mathcal A(X) + b = 0 we aim to solve. This is used to define the sub_problem= keyword and hence has no effect if you set sub_problem directly.\nsub_stopping_criterion=StopAfterIteration(manifold_dimension(M))|StopWhenRelativeResidualLess(c,1e-8), where c = lVert b rVert from the system to solve. This is used to define the sub_state= keyword and hence has no effect if you set sub_state directly.\nsub_problem=DefaultManoptProblem(M, sub_objective): specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state=ConjugateResidualState: a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.\nvector_space=Rn: a function that, given an integer, returns the manifold to be used for the vector space components ℝ^mℝ^n\nX=zero_vector(M,p): the initial gradient with respect to p.\nY=zero(μ): the initial gradient with respect to μ\nZ=zero(λ): the initial gradient with respect to λ\nW=zero(s): the initial gradient with respect to s\n\nAs well as internal keywords used to set up these given keywords like _step_M, _step_p, _sub_M, _sub_p, and _sub_X, that should not be changed.\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! 
for objective decorators, respectively.\n\nnote: Note\nThe centrality_condition=missing disables the centrality check during the line search, but you can pass InteriorPointCentralityCondition(cmo, γ), where γ is a constant, to activate this check.\n\nOutput\n\nThe obtained approximate constrained minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/interior_point_Newton/#State","page":"Interior Point Newton","title":"State","text":"","category":"section"},{"location":"solvers/interior_point_Newton/","page":"Interior Point Newton","title":"Interior Point Newton","text":"InteriorPointNewtonState","category":"page"},{"location":"solvers/interior_point_Newton/#Manopt.InteriorPointNewtonState","page":"Interior Point Newton","title":"Manopt.InteriorPointNewtonState","text":"InteriorPointNewtonState{P,T} <: AbstractHessianSolverState\n\nFields\n\nλ: the Lagrange multiplier with respect to the equality constraints\nμ: the Lagrange multiplier with respect to the inequality constraints\np::P: a point on the manifold mathcal M storing the current iterate\ns: the current slack variable\nsub_problem::Union{AbstractManoptProblem, F}: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state::Union{AbstractManoptSolverState, F}: a state to specify the sub solver to use. 
For a closed form solution, this indicates the type of function.\nX: the current gradient with respect to p\nY: the current gradient with respect to μ\nZ: the current gradient with respect to λ\nW: the current gradient with respect to s\nρ: store the orthogonality μ's/m to compute the barrier parameter β in the sub problem\nσ: scaling factor for the barrier parameter β in the sub problem\nstop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled\nretraction_method::AbstractRetractionMethod: a retraction operatornameretr to use, see the section on retractions\nstepsize::Stepsize: a functor inheriting from Stepsize to determine a step size\nstep_problem: an AbstractManoptProblem storing the manifold and objective for the line search\nstep_state: storing iterate and search direction in a state for the line search, see StepsizeState\n\nConstructor\n\nInteriorPointNewtonState(\n M::AbstractManifold,\n cmo::ConstrainedManifoldObjective,\n sub_problem::Pr,\n sub_state::St;\n kwargs...\n)\n\nInitialize the state, where both the AbstractManifold and the ConstrainedManifoldObjective are used to fill in reasonable defaults for the keywords.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\ncmo: a ConstrainedManifoldObjective\nsub_problem: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state: a state to specify the sub solver to use. 
For a closed form solution, this indicates the type of function.\n\nKeyword arguments\n\nLet m and n denote the number of inequality and equality constraints, respectively\n\np=rand(M): a point on the manifold mathcal Mto specify the initial value\nμ=ones(m)\nX=zero_vector(M,p)\nY=zero(μ)\nλ=zeros(n)\nZ=zero(λ)\ns=ones(m)\nW=zero(s)\nρ=μ's/m\nσ=calculate_σ(M, cmo, p, μ, λ, s)\nstopping_criterion=StopAfterIteration(200)|StopWhenChangeLess(1e-8): a functor indicating that the stopping criterion is fulfilled\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstep_objective=ManifoldGradientObjective(KKTVectorFieldNormSq(cmo), KKTVectorFieldNormSqGradient(cmo); evaluation=InplaceEvaluation())\nvector_space=Rn: a function that, given an integer, returns the manifold to be used for the vector space components ℝ^mℝ^n\nstep_problem: wrap the manifold mathcal M ℝ^m ℝ^n ℝ^m\nstep_state: the StepsizeState with point and search direction\nstepsize=ArmijoLinesearch(): a functor inheriting from Stepsize to determine a step size with the InteriorPointCentralityCondition as additional condition to accept a step\n\nand internally _step_M and _step_p for the manifold and point in the stepsize.\n\n\n\n\n\n","category":"type"},{"location":"solvers/interior_point_Newton/#Subproblem-functions","page":"Interior Point Newton","title":"Subproblem functions","text":"","category":"section"},{"location":"solvers/interior_point_Newton/","page":"Interior Point Newton","title":"Interior Point Newton","text":"CondensedKKTVectorField\nCondensedKKTVectorFieldJacobian\nKKTVectorField\nKKTVectorFieldJacobian\nKKTVectorFieldAdjointJacobian\nKKTVectorFieldNormSq\nKKTVectorFieldNormSqGradient","category":"page"},{"location":"solvers/interior_point_Newton/#Manopt.CondensedKKTVectorField","page":"Interior Point Newton","title":"Manopt.CondensedKKTVectorField","text":"CondensedKKTVectorField{O<:ConstrainedManifoldObjective,T,R} 
<: AbstractConstrainedSlackFunctor{T,R}\n\nGiven the constrained optimization problem\n\nbeginaligned\nmin_p mathcalM f(p)\ntextsubject to g_i(p)leq 0 quad text for i= 1 m\nquad h_j(p)=0 quad text for j=1n\nendaligned\n\nwe reformulate the KKT conditions of the Lagrangian from the optimality conditions of the Lagrangian\n\nmathcal L(p μ λ) = f(p) + sum_j=1^n λ_jh_j(p) + sum_i=1^m μ_ig_i(p)\n\nin a perturbed / barrier method in a condensed form, using a slack variable s ℝ^m, a barrier parameter β, and the Riemannian gradient of the Lagrangian with respect to the first parameter, operatornamegrad_p L(p μ λ).\n\nLet mathcal N = mathcal M ℝ^n. We obtain the linear system\n\nmathcal A(pλ)XY = -b(pλ)qquad textwhere (XY) T_(pλ)mathcal N\n\nwhere mathcal A T_(pλ)mathcal N T_(pλ)mathcal N is a linear operator and this struct models the right hand side b(pλ) T_(pλ)mathcal M given by\n\nb(pλ) = beginpmatrix\noperatornamegrad f(p)\n+ displaystylesum_j=1^n λ_j operatornamegrad h_j(p)\n+ displaystylesum_i=1^m μ_i operatornamegrad g_i(p)\n+ displaystylesum_i=1^m fracμ_is_ibigl(\n μ_i(g_i(p)+s_i) + β - μ_is_i\nbigr)operatornamegrad g_i(p)\nh(p)\nendpmatrix\n\nFields\n\ncmo the ConstrainedManifoldObjective\nμ::T the vector in ℝ^m of coefficients for the inequality constraints\ns::T the vector in ℝ^m of slack variables\nβ::R the barrier parameter βℝ\n\nConstructor\n\nCondensedKKTVectorField(cmo, μ, s, β)\n\n\n\n\n\n","category":"type"},{"location":"solvers/interior_point_Newton/#Manopt.CondensedKKTVectorFieldJacobian","page":"Interior Point Newton","title":"Manopt.CondensedKKTVectorFieldJacobian","text":"CondensedKKTVectorFieldJacobian{O<:ConstrainedManifoldObjective,T,R} <: AbstractConstrainedSlackFunctor{T,R}\n\nGiven the constrained optimization problem\n\nbeginaligned\nmin_p mathcalM f(p)\ntextsubject to g_i(p)leq 0 quad text for i= 1 m\nquad h_j(p)=0 quad text for j=1n\nendaligned\n\nwe reformulate the KKT conditions of the Lagrangian from the optimality conditions of the 
Lagrangian\n\nmathcal L(p μ λ) = f(p) + sum_j=1^n λ_jh_j(p) + sum_i=1^m μ_ig_i(p)\n\nin a perturbed / barrier method in an enhanced as well as condensed form, using operatornamegrad_p L(p μ λ), the Riemannian gradient of the Lagrangian with respect to the first parameter.\n\nLet mathcal N = mathcal M ℝ^n. We obtain the linear system\n\nmathcal A(pλ)XY = -b(pλ)qquad textwhere X T_pmathcal M Y ℝ^n\n\nwhere mathcal A T_(pλ)mathcal N T_(pλ)mathcal N is a linear operator on T_(pλ)mathcal N = T_pmathcal M ℝ^n given by\n\nmathcal A(pλ)XY = beginpmatrix\noperatornameHess_pmathcal L(p μ λ)X\n+ displaystylesum_i=1^m fracμ_is_ioperatornamegrad g_i(p) Xoperatornamegrad g_i(p)\n+ displaystylesum_j=1^n Y_j operatornamegrad h_j(p)\n\nBigl( operatornamegrad h_j(p) X Bigr)_j=1^n\nendpmatrix\n\nFields\n\ncmo the ConstrainedManifoldObjective\nμ::T the vector in ℝ^m of coefficients for the inequality constraints\ns::T the vector in ℝ^m of slack variables\nβ::R the barrier parameter βℝ\n\nConstructor\n\nCondensedKKTVectorFieldJacobian(cmo, μ, s, β)\n\n\n\n\n\n","category":"type"},{"location":"solvers/interior_point_Newton/#Manopt.KKTVectorField","page":"Interior Point Newton","title":"Manopt.KKTVectorField","text":"KKTVectorField{O<:ConstrainedManifoldObjective}\n\nImplement the vector field F of the KKT-conditions, including a slack variable for the inequality constraints.\n\nGiven the LagrangianCost\n\nmathcal L(p μ λ) = f(p) + sum_i=1^m μ_ig_i(p) + sum_j=1^n λ_jh_j(p)\n\nthe LagrangianGradient\n\noperatornamegradmathcal L(p μ λ) = operatornamegradf(p) + sum_j=1^n λ_j operatornamegrad h_j(p) + sum_i=1^m μ_i operatornamegrad g_i(p)\n\nand introducing the slack variables s=-g(p) ℝ^m the vector field is given by\n\nF(p μ λ s) = beginpmatrix\noperatornamegrad_p mathcal L(p μ λ)\ng(p) + s\nh(p)\nμ s\nendpmatrix text where p in mathcal M μ s in ℝ^mtext and λ in ℝ^n\n\nwhere denotes the Hadamard (or elementwise) product\n\nFields\n\ncmo the ConstrainedManifoldObjective\n\nWhile the point p is arbitrary 
and usually not needed, it serves as internal memory in the computations. Furthermore, both fields together also clarify the product manifold structure to use.\n\nConstructor\n\nKKTVectorField(cmo::ConstrainedManifoldObjective)\n\nExample\n\nDefine F = KKTVectorField(cmo) for some ConstrainedManifoldObjective cmo and let N be the product manifold of mathcal Mℝ^mℝ^nℝ^m. Then, you can call this cost as F(N, q) or as the in-place variant F(N, Y, q), where q is a point on N and Y is a tangent vector at q for the result.\n\n\n\n\n\n","category":"type"},{"location":"solvers/interior_point_Newton/#Manopt.KKTVectorFieldJacobian","page":"Interior Point Newton","title":"Manopt.KKTVectorFieldJacobian","text":"KKTVectorFieldJacobian{O<:ConstrainedManifoldObjective}\n\nImplement the Jacobian of the vector field F of the KKT-conditions, including a slack variable for the inequality constraints, see KKTVectorField and KKTVectorFieldAdjointJacobian.\n\noperatornameJ F(p μ λ s)X Y Z W = beginpmatrix\n operatornameHess_p mathcal L(p μ λ)X + displaystylesum_i=1^m Y_i operatornamegrad g_i(p) + displaystylesum_j=1^n Z_j operatornamegrad h_j(p)\n Bigl( operatornamegrad g_i(p) X + W_iBigr)_i=1^m\n Bigl( operatornamegrad h_j(p) X Bigr)_j=1^n\n μ W + s Y\nendpmatrix\n\nwhere denotes the Hadamard (or elementwise) product\n\nSee also the LagrangianHessian operatornameHess_p mathcal L(p μ λ)X.\n\nFields\n\ncmo the ConstrainedManifoldObjective\n\nConstructor\n\nKKTVectorFieldJacobian(cmo::ConstrainedManifoldObjective)\n\nGenerate the Jacobian of the KKT vector field related to some ConstrainedManifoldObjective cmo.\n\nExample\n\nDefine JF = KKTVectorFieldJacobian(cmo) for some ConstrainedManifoldObjective cmo and let N be the product manifold of mathcal Mℝ^mℝ^nℝ^m. 
Then, you can call this cost as JF(N, q, Y) or as the in-place variant JF(N, Z, q, Y), where q is a point on N and Y and Z are tangent vectors at q.\n\n\n\n\n\n","category":"type"},{"location":"solvers/interior_point_Newton/#Manopt.KKTVectorFieldAdjointJacobian","page":"Interior Point Newton","title":"Manopt.KKTVectorFieldAdjointJacobian","text":"KKTVectorFieldAdjointJacobian{O<:ConstrainedManifoldObjective}\n\nImplement the Adjoint of the Jacobian of the vector field F of the KKT-conditions, including a slack variable for the inequality constraints, see KKTVectorField and KKTVectorFieldJacobian.\n\noperatornameJ^* F(p μ λ s)X Y Z W = beginpmatrix\n operatornameHess_p mathcal L(p μ λ)X + displaystylesum_i=1^m Y_i operatornamegrad g_i(p) + displaystylesum_j=1^n Z_j operatornamegrad h_j(p)\n Bigl( operatornamegrad g_i(p) X + s_iW_iBigr)_i=1^m\n Bigl( operatornamegrad h_j(p) X Bigr)_j=1^n\n μ W + Y\nendpmatrix\n\nwhere denotes the Hadamard (or elementwise) product\n\nSee also the LagrangianHessian operatornameHess_p mathcal L(p μ λ)X.\n\nFields\n\ncmo the ConstrainedManifoldObjective\n\nConstructor\n\nKKTVectorFieldAdjointJacobian(cmo::ConstrainedManifoldObjective)\n\nGenerate the Adjoint Jacobian of the KKT vector field related to some ConstrainedManifoldObjective cmo.\n\nExample\n\nDefine AdJF = KKTVectorFieldAdjointJacobian(cmo) for some ConstrainedManifoldObjective cmo and let N be the product manifold of mathcal Mℝ^mℝ^nℝ^m. 
Then, you can call this cost as AdJF(N, q, Y) or as the in-place variant AdJF(N, Z, q, Y), where q is a point on N and Y and Z are tangent vectors at q.\n\n\n\n\n\n","category":"type"},{"location":"solvers/interior_point_Newton/#Manopt.KKTVectorFieldNormSq","page":"Interior Point Newton","title":"Manopt.KKTVectorFieldNormSq","text":"KKTVectorFieldNormSq{O<:ConstrainedManifoldObjective}\n\nImplement the square of the norm of the vector field F of the KKT-conditions, including a slack variable for the inequality constraints, see KKTVectorField, where this functor applies the norm to. In [LY24] this is called the merit function.\n\nFields\n\ncmo the ConstrainedManifoldObjective\n\nConstructor\n\nKKTVectorFieldNormSq(cmo::ConstrainedManifoldObjective)\n\nExample\n\nDefine f = KKTVectorFieldNormSq(cmo) for some ConstrainedManifoldObjective cmo and let N be the product manifold of mathcal Mℝ^mℝ^nℝ^m. Then, you can call this cost as f(N, q), where q is a point on N.\n\n\n\n\n\n","category":"type"},{"location":"solvers/interior_point_Newton/#Manopt.KKTVectorFieldNormSqGradient","page":"Interior Point Newton","title":"Manopt.KKTVectorFieldNormSqGradient","text":"KKTVectorFieldNormSqGradient{O<:ConstrainedManifoldObjective}\n\nCompute the gradient of the KKTVectorFieldNormSq φ(pμλs) = lVert F(pμλs)rVert^2, that is of the norm squared of the KKTVectorField F.\n\nThis is given in [LY24] as the gradient of their merit function, which we can write with the adjoint J^* of the Jacobian\n\noperatornamegrad φ = 2operatornameJ^* F(p μ λ s)F(p μ λ s)\n\nand hence is computed with KKTVectorFieldAdjointJacobian and KKTVectorField.\n\nFor completeness, the gradient reads, using the LagrangianGradient L = operatornamegrad_p mathcal L(pμλ) T_pmathcal M, for a shorthand of the first component of F, as\n\noperatornamegrad φ\n=\n2 beginpmatrix\noperatornamegrad_p mathcal L(pμλ)L + (g_i(p) + s_i)operatornamegrad g_i(p) + h_j(p)operatornamegrad h_j(p)\n Bigl( operatornamegrad g_i(p) L + 
s_iBigr)_i=1^m + μ s s\n Bigl( operatornamegrad h_j(p) L Bigr)_j=1^n\n g + s + μ μ s\nendpmatrix\n\nwhere denotes the Hadamard (or elementwise) product.\n\nFields\n\ncmo the ConstrainedManifoldObjective\n\nConstructor\n\nKKTVectorFieldNormSqGradient(cmo::ConstrainedManifoldObjective)\n\nExample\n\nDefine grad_f = KKTVectorFieldNormSqGradient(cmo) for some ConstrainedManifoldObjective cmo and let N be the product manifold of mathcal Mℝ^mℝ^nℝ^m. Then, you can call this cost as grad_f(N, q) or as the in-place variant grad_f(N, Y, q), where q is a point on N and Y is a tangent vector at q in which the resulting gradient is returned.\n\n\n\n\n\n","category":"type"},{"location":"solvers/interior_point_Newton/#Helpers","page":"Interior Point Newton","title":"Helpers","text":"","category":"section"},{"location":"solvers/interior_point_Newton/","page":"Interior Point Newton","title":"Interior Point Newton","text":"InteriorPointCentralityCondition\nManopt.calculate_σ","category":"page"},{"location":"solvers/interior_point_Newton/#Manopt.InteriorPointCentralityCondition","page":"Interior Point Newton","title":"Manopt.InteriorPointCentralityCondition","text":"InteriorPointCentralityCondition{CO,R}\n\nA functor to check the centrality condition.\n\nIn order to obtain a step in the linesearch performed within the interior_point_Newton, Section 6 of [LY24] proposes the following additional conditions to hold, inspired by the Euclidean case described in Section 6 of [ETTZ96]:\n\nFor a given ConstrainedManifoldObjective consider the KKTVectorField F, that is, we are at a point q = (p λ μ s) on mathcal M ℝ^m ℝ^n ℝ^m and a search direction V = (X Y Z W).\n\nThen, let\n\nτ_1 = fracmmin μ sμ^mathrmTs\nquadtext and quad\nτ_2 = fracμ^mathrmTslVert F(q) rVert\n\nwhere denotes the Hadamard (or elementwise) product.\n\nFor a new candidate q(α) = bigl(p(α) λ(α) μ(α) s(α)bigr) = (operatornameretr_p(αX) λ+αY μ+αZ s+αW), we then define two functions\n\nc_1(α) = min μ(α) s(α) - fracγτ_1 
μ(α)^mathrmTs(α)m\nquadtext and quad\nc_2(α) = μ(α)^mathrmTs(α) γτ_2 lVert F(q(α)) rVert\n\nWhile the paper now states that the (Armijo) linesearch starts at a point tilde α, it is easier to include the condition that c_1(α) 0 and c_2(α) 0 into the linesearch as well.\n\nThe functor InteriorPointCentralityCondition(cmo, γ, μ, s, normKKT)(N, q, α) defined here evaluates this condition and returns true if both c_1 and c_2 are nonnegative.\n\nFields\n\ncmo: a ConstrainedManifoldObjective\nγ: a constant\nτ1, τ2: the constants given in the formula.\n\nConstructor\n\nInteriorPointCentralityCondition(cmo, γ)\nInteriorPointCentralityCondition(cmo, γ, τ1, τ2)\n\nInitialise the centrality conditions. The parameters τ1, τ2 are initialised to zero if not provided.\n\nnote: Note\nBesides get_parameter for all three constants, and set_parameter! for γ, to update τ_1 and τ_2, call set_parameter!(ipcc, :τ, N, q) to update both τ_1 and τ_2 according to the formulae above.\n\n\n\n\n\n","category":"type"},{"location":"solvers/interior_point_Newton/#Manopt.calculate_σ","page":"Interior Point Newton","title":"Manopt.calculate_σ","text":"calculate_σ(M, cmo, p, μ, λ, s; kwargs...)\n\nCompute the new σ factor for the barrier parameter in interior_point_Newton as\n\nminfrac12 lVert F(p μ λ s)rVert^frac12 \n\nwhere F is the KKT vector field, hence the KKTVectorFieldNormSq is used.\n\nKeyword arguments\n\nvector_space=Rn a function that, given an integer, returns the manifold to be used for the vector space components ℝ^mℝ^n\nN the manifold mathcal M ℝ^m ℝ^n ℝ^m the vector field lives on (generated using vector_space)\nq provide memory on N for interim evaluation of the vector field\n\n\n\n\n\n","category":"function"},{"location":"solvers/interior_point_Newton/#Additional-stopping-criteria","page":"Interior Point Newton","title":"Additional stopping criteria","text":"","category":"section"},{"location":"solvers/interior_point_Newton/","page":"Interior Point Newton","title":"Interior Point 
Newton","text":"StopWhenKKTResidualLess","category":"page"},{"location":"solvers/interior_point_Newton/#Manopt.StopWhenKKTResidualLess","page":"Interior Point Newton","title":"Manopt.StopWhenKKTResidualLess","text":"StopWhenKKTResidualLess <: StoppingCriterion\n\nStop when the KKT residual\n\nr^2\n= \lVert \operatorname{grad}_p \mathcal L(p, μ, λ) \rVert^2\n+ \sum_{i=1}^m [μ_i]_{-}^2 + [g_i(p)]_+^2 + \lvert μ_ig_i(p)\rvert^2\n+ \sum_{j=1}^n \lvert h_j(p)\rvert^2.\n\nis less than a given threshold, that is when r < ε. We use v_+ = max0v and v_- = min0v for the positive and negative part of v, respectively.\n\nFields\n\nε: a threshold\nresidual: store the last residual if the stopping criterion is hit.\nat_iteration: the iteration number at which the criterion indicated to stop.\n\n\n\n\n\n","category":"type"},{"location":"solvers/interior_point_Newton/#References","page":"Interior Point Newton","title":"References","text":"","category":"section"},{"location":"solvers/interior_point_Newton/","page":"Interior Point Newton","title":"Interior Point Newton","text":"A. S. El-Bakry, R. A. Tapia, T. Tsuchiya and Y. Zhang. On the formulation and theory of the Newton interior-point method for nonlinear programming. Journal of Optimization Theory and Applications 89, 507–541 (1996).\n\n\n\nZ. Lai and A. Yoshise. Riemannian Interior Point Methods for Constrained Optimization on Manifolds. 
Journal of Optimization Theory and Applications 201, 433–469 (2024), arXiv:2203.09762.\n\n\n\n","category":"page"},{"location":"solvers/primal_dual_semismooth_Newton/#solver-pdrssn","page":"Primal-dual Riemannian semismooth Newton","title":"Primal-dual Riemannian semismooth Newton algorithm","text":"","category":"section"},{"location":"solvers/primal_dual_semismooth_Newton/","page":"Primal-dual Riemannian semismooth Newton","title":"Primal-dual Riemannian semismooth Newton","text":"The Primal-dual Riemannian semismooth Newton Algorithm is a second-order method derived from the ChambollePock.","category":"page"},{"location":"solvers/primal_dual_semismooth_Newton/","page":"Primal-dual Riemannian semismooth Newton","title":"Primal-dual Riemannian semismooth Newton","text":"The aim is to solve an optimization problem on a manifold with a cost function of the form","category":"page"},{"location":"solvers/primal_dual_semismooth_Newton/","page":"Primal-dual Riemannian semismooth Newton","title":"Primal-dual Riemannian semismooth Newton","text":"F(p) + G(Λ(p))","category":"page"},{"location":"solvers/primal_dual_semismooth_Newton/","page":"Primal-dual Riemannian semismooth Newton","title":"Primal-dual Riemannian semismooth Newton","text":"where Fmathcal M overlineℝ, Gmathcal N overlineℝ, and Λmathcal M mathcal N. If the manifolds mathcal M or mathcal N are not Hadamard, it has to be considered locally only, that is on geodesically convex sets mathcal C subset mathcal M and mathcal D subsetmathcal N such that Λ(mathcal C) subset mathcal D.","category":"page"},{"location":"solvers/primal_dual_semismooth_Newton/","page":"Primal-dual Riemannian semismooth Newton","title":"Primal-dual Riemannian semismooth Newton","text":"The algorithm comes down to applying the Riemannian semismooth Newton method to the rewritten primal-dual optimality conditions. 
Define the vector field X mathcalM times mathcalT_n^* mathcalN rightarrow mathcalT mathcalM times mathcalT_n^* mathcalN as","category":"page"},{"location":"solvers/primal_dual_semismooth_Newton/","page":"Primal-dual Riemannian semismooth Newton","title":"Primal-dual Riemannian semismooth Newton","text":"Xleft(p xi_nright)=left(beginarrayc\n-log _p operatornameprox_sigma Fleft(exp _pleft(mathcalP_p leftarrow mleft(-sigmaleft(D_m Lambdaright)^*leftmathcalP_Lambda(m) leftarrow n xi_nrightright)^sharpright)right) \nxi_n-operatornameprox_tau G_n^*left(xi_n+tauleft(mathcalP_n leftarrow Lambda(m) D_m Lambdaleftlog _m prightright)^flatright)\nendarrayright)","category":"page"},{"location":"solvers/primal_dual_semismooth_Newton/","page":"Primal-dual Riemannian semismooth Newton","title":"Primal-dual Riemannian semismooth Newton","text":"and solve for X(pξ_n)=0.","category":"page"},{"location":"solvers/primal_dual_semismooth_Newton/","page":"Primal-dual Riemannian semismooth Newton","title":"Primal-dual Riemannian semismooth Newton","text":"Given base points mmathcal C, n=Λ(m)mathcal D, initial primal and dual values p^(0) mathcal C, ξ_n^(0) mathcal T_n^*mathcal N, and primal and dual step sizes sigma, tau.","category":"page"},{"location":"solvers/primal_dual_semismooth_Newton/","page":"Primal-dual Riemannian semismooth Newton","title":"Primal-dual Riemannian semismooth Newton","text":"The algorithm performs the following steps for k=1 (until a StoppingCriterion is reached)","category":"page"},{"location":"solvers/primal_dual_semismooth_Newton/","page":"Primal-dual Riemannian semismooth Newton","title":"Primal-dual Riemannian semismooth Newton","text":"Choose any element\nV^(k) _C X(p^(k)ξ_n^(k))\nof the Clarke generalized covariant derivative\nSolve\nV^(k) (d_p^(k) d_n^(k)) = - X(p^(k)ξ_n^(k))\nin the vector space mathcalT_p^(k) mathcalM times mathcalT_n^* mathcalN\nUpdate\np^(k+1) = exp_p^(k)(d_p^(k))\nand\nξ_n^(k+1) = ξ_n^(k) + 
d_n^(k)","category":"page"},{"location":"solvers/primal_dual_semismooth_Newton/","page":"Primal-dual Riemannian semismooth Newton","title":"Primal-dual Riemannian semismooth Newton","text":"Furthermore you can exchange the exponential map, the logarithmic map, and the parallel transport by a retraction, an inverse retraction and a vector transport.","category":"page"},{"location":"solvers/primal_dual_semismooth_Newton/","page":"Primal-dual Riemannian semismooth Newton","title":"Primal-dual Riemannian semismooth Newton","text":"Finally you can also update the base points m and n during the iterations. This introduces a few additional vector transports. The same holds for the case that Λ(m^(k))neq n^(k) at some point. All these cases are covered in the algorithm.","category":"page"},{"location":"solvers/primal_dual_semismooth_Newton/","page":"Primal-dual Riemannian semismooth Newton","title":"Primal-dual Riemannian semismooth Newton","text":"CurrentModule = Manopt","category":"page"},{"location":"solvers/primal_dual_semismooth_Newton/","page":"Primal-dual Riemannian semismooth Newton","title":"Primal-dual Riemannian semismooth Newton","text":"primal_dual_semismooth_Newton\nprimal_dual_semismooth_Newton!","category":"page"},{"location":"solvers/primal_dual_semismooth_Newton/#Manopt.primal_dual_semismooth_Newton","page":"Primal-dual Riemannian semismooth Newton","title":"Manopt.primal_dual_semismooth_Newton","text":"primal_dual_semismooth_Newton(M, N, cost, p, X, m, n, prox_F, diff_prox_F, prox_G_dual, diff_prox_dual_G, linearized_operator, adjoint_linearized_operator)\n\nPerform the Primal-Dual Riemannian semismooth Newton algorithm.\n\nGiven a cost function mathcal E mathcal M overlineℝ of the form\n\nmathcal E(p) = F(p) + G( Λ(p) )\n\nwhere F mathcal M overlineℝ, G mathcal N overlineℝ, and Λ mathcal M mathcal N. 
The remaining input parameters are\n\np, X: primal and dual start points pmathcal M and X T_nmathcal N\nm,n: base points on mathcal M and mathcal N, respectively.\nlinearized_forward_operator: the linearization DΛ() of the operator Λ().\nadjoint_linearized_operator: the adjoint DΛ^* of the linearized operator DΛ(m) T_mmathcal M T_Λ(m)mathcal N\nprox_F, prox_G_dual: the proximal maps of F and G^ast_n\ndiff_prox_F, diff_prox_dual_G: the (Clarke Generalized) differentials of the proximal maps of F and G^ast_n\n\nFor more details on the algorithm, see [DL21].\n\nKeyword arguments\n\ndual_stepsize=1/sqrt(8): proximal parameter of the dual prox\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nΛ=missing: the exact operator, that is required if Λ(m)=n does not hold; missing indicates that the forward operator is exact.\nprimal_stepsize=1/sqrt(8): proximal parameter of the primal prox\nreg_param=1e-5: regularisation parameter for the Newton matrix. Note that this changes the arguments with which the forward_operator is called.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstopping_criterion=StopAfterIteration(50): a functor indicating that the stopping criterion is fulfilled\nupdate_primal_base=missing: function to update m (identity by default/missing)\nupdate_dual_base=missing: function to update n (identity by default/missing)\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector 
transport mathcal T_ to use, see the section on vector transports\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/primal_dual_semismooth_Newton/#Manopt.primal_dual_semismooth_Newton!","page":"Primal-dual Riemannian semismooth Newton","title":"Manopt.primal_dual_semismooth_Newton!","text":"primal_dual_semismooth_Newton(M, N, cost, p, X, m, n, prox_F, diff_prox_F, prox_G_dual, diff_prox_dual_G, linearized_operator, adjoint_linearized_operator)\n\nPerform the Primal-Dual Riemannian semismooth Newton algorithm.\n\nGiven a cost function mathcal E mathcal M overlineℝ of the form\n\nmathcal E(p) = F(p) + G( Λ(p) )\n\nwhere F mathcal M overlineℝ, G mathcal N overlineℝ, and Λ mathcal M mathcal N. The remaining input parameters are\n\np, X: primal and dual start points pmathcal M and X T_nmathcal N\nm,n: base points on mathcal M and mathcal N, respectively.\nlinearized_forward_operator: the linearization DΛ() of the operator Λ().\nadjoint_linearized_operator: the adjoint DΛ^* of the linearized operator DΛ(m) T_mmathcal M T_Λ(m)mathcal N\nprox_F, prox_G_dual: the proximal maps of F and G^ast_n\ndiff_prox_F, diff_prox_dual_G: the (Clarke Generalized) differentials of the proximal maps of F and G^ast_n\n\nFor more details on the algorithm, see [DL21].\n\nKeyword arguments\n\ndual_stepsize=1/sqrt(8): proximal parameter of the dual prox\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). 
Since usually the first argument is the manifold, the modified argument is the second.\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nΛ=missing: the exact operator, that is required if Λ(m)=n does not hold; missing indicates that the forward operator is exact.\nprimal_stepsize=1/sqrt(8): proximal parameter of the primal prox\nreg_param=1e-5: regularisation parameter for the Newton matrix. Note that this changes the arguments with which the forward_operator is called.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstopping_criterion=StopAfterIteration(50): a functor indicating that the stopping criterion is fulfilled\nupdate_primal_base=missing: function to update m (identity by default/missing)\nupdate_dual_base=missing: function to update n (identity by default/missing)\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. 
To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/primal_dual_semismooth_Newton/#State","page":"Primal-dual Riemannian semismooth Newton","title":"State","text":"","category":"section"},{"location":"solvers/primal_dual_semismooth_Newton/","page":"Primal-dual Riemannian semismooth Newton","title":"Primal-dual Riemannian semismooth Newton","text":"PrimalDualSemismoothNewtonState","category":"page"},{"location":"solvers/primal_dual_semismooth_Newton/#Manopt.PrimalDualSemismoothNewtonState","page":"Primal-dual Riemannian semismooth Newton","title":"Manopt.PrimalDualSemismoothNewtonState","text":"PrimalDualSemismoothNewtonState <: AbstractPrimalDualSolverState\n\nFields\n\nm::P: a point on the manifold mathcal M\nn::Q: a point on the manifold mathcal N\np::P: a point on the manifold mathcal M storing the current iterate\nX::T: a tangent vector at the point p on the manifold mathcal M\nprimal_stepsize::Float64: proximal parameter of the primal prox\ndual_stepsize::Float64: proximal parameter of the dual prox\nreg_param::Float64: regularisation parameter for the Newton matrix\nstop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled\nupdate_primal_base: function to update the primal base\nupdate_dual_base: function to update the dual base\ninverse_retraction_method::AbstractInverseRetractionMethod: an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nretraction_method::AbstractRetractionMethod: a retraction operatornameretr to use, see the section on retractions\nvector_transport_method::AbstractVectorTransportMethodP: a vector transport mathcal T_ to use, see the section on vector transports\n\nwhere for the update functions an AbstractManoptProblem amp, an AbstractManoptSolverState ams and the current iteration k are the arguments. 
If you activate these to be different from the default identity, you have to provide p.Λ for the algorithm to work (which might be missing).\n\nConstructor\n\nPrimalDualSemismoothNewtonState(M::AbstractManifold; kwargs...)\n\nGenerate a state for the primal_dual_semismooth_Newton.\n\nKeyword arguments\n\nm=rand(M)\nn=rand(N)\np=rand(M)\nX=zero_vector(M, p)\nprimal_stepsize=1/sqrt(8)\ndual_stepsize=1/sqrt(8)\nreg_param=1e-5\nupdate_primal_base=(amp, ams, k) -> o.m\nupdate_dual_base=(amp, ams, k) -> o.n\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nstopping_criterion=StopAfterIteration(50): a functor indicating that the stopping criterion is fulfilled\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\n\n\n\n\n","category":"type"},{"location":"solvers/primal_dual_semismooth_Newton/#sec-ssn-technical-details","page":"Primal-dual Riemannian semismooth Newton","title":"Technical details","text":"","category":"section"},{"location":"solvers/primal_dual_semismooth_Newton/","page":"Primal-dual Riemannian semismooth Newton","title":"Primal-dual Riemannian semismooth Newton","text":"The primal_dual_semismooth_Newton solver requires the following functions of a manifold to be available for both the manifold mathcal M and mathcal N","category":"page"},{"location":"solvers/primal_dual_semismooth_Newton/","page":"Primal-dual Riemannian semismooth Newton","title":"Primal-dual Riemannian semismooth Newton","text":"A retract!(M, q, p, X); it is recommended to set the default_retraction_method to a favourite retraction. 
If this default is set, a retraction_method= does not have to be specified.\nAn inverse_retract!(M, X, p, q); it is recommended to set the default_inverse_retraction_method to a favourite inverse retraction. If this default is set, an inverse_retraction_method= does not have to be specified.\nA vector_transport_to!(M, Y, p, X, q); it is recommended to set the default_vector_transport_method to a favourite vector transport. If this default is set, a vector_transport_method= does not have to be specified.\nA copyto!(M, q, p) and copy(M,p) for points.\nA get_basis for the DefaultOrthonormalBasis on mathcal M\nexp and log (on mathcal M)\nA DiagonalizingOrthonormalBasis to compute the differentials of the exponential and logarithmic map\nTangent vectors are initialized calling zero_vector(M,p).","category":"page"},{"location":"solvers/primal_dual_semismooth_Newton/#Literature","page":"Primal-dual Riemannian semismooth Newton","title":"Literature","text":"","category":"section"},{"location":"solvers/primal_dual_semismooth_Newton/","page":"Primal-dual Riemannian semismooth Newton","title":"Primal-dual Riemannian semismooth Newton","text":"W. Diepeveen and J. Lellmann. An Inexact Semismooth Newton Method on Riemannian Manifolds with Application to Duality-Based Total Variation Denoising. 
SIAM Journal on Imaging Sciences 14, 1565–1600 (2021), arXiv:2102.10309.\n\n\n\n","category":"page"},{"location":"solvers/DouglasRachford/#Douglas—Rachford-algorithm","page":"Douglas—Rachford","title":"Douglas—Rachford algorithm","text":"","category":"section"},{"location":"solvers/DouglasRachford/","page":"Douglas—Rachford","title":"Douglas—Rachford","text":"The (Parallel) Douglas—Rachford ((P)DR) algorithm was generalized to Hadamard manifolds in [BPS16].","category":"page"},{"location":"solvers/DouglasRachford/","page":"Douglas—Rachford","title":"Douglas—Rachford","text":"The aim is to minimize the sum","category":"page"},{"location":"solvers/DouglasRachford/","page":"Douglas—Rachford","title":"Douglas—Rachford","text":"f(p) = g(p) + h(p)","category":"page"},{"location":"solvers/DouglasRachford/","page":"Douglas—Rachford","title":"Douglas—Rachford","text":"on a manifold, where the two summands have proximal maps operatornameprox_λ g operatornameprox_λ h that are easy to evaluate (maybe in closed form, or not too costly to approximate). 
Further, define the reflection operator at the proximal map as","category":"page"},{"location":"solvers/DouglasRachford/","page":"Douglas—Rachford","title":"Douglas—Rachford","text":"operatornamerefl_λ g(p) = operatornameretr_operatornameprox_λ g(p) bigl( -operatornameretr^-1_operatornameprox_λ g(p) p bigr)","category":"page"},{"location":"solvers/DouglasRachford/","page":"Douglas—Rachford","title":"Douglas—Rachford","text":"Let alpha_k 01 with sum_k ℕ alpha_k(1-alpha_k) = infty and λ 0 (which might depend on iteration k as well) be given.","category":"page"},{"location":"solvers/DouglasRachford/","page":"Douglas—Rachford","title":"Douglas—Rachford","text":"Then the (P)DRA algorithm for initial data p^(0) mathcal M is given as follows.","category":"page"},{"location":"solvers/DouglasRachford/#Initialization","page":"Douglas—Rachford","title":"Initialization","text":"","category":"section"},{"location":"solvers/DouglasRachford/","page":"Douglas—Rachford","title":"Douglas—Rachford","text":"Initialize q^(0) = p^(0) and k=0","category":"page"},{"location":"solvers/DouglasRachford/#Iteration","page":"Douglas—Rachford","title":"Iteration","text":"","category":"section"},{"location":"solvers/DouglasRachford/","page":"Douglas—Rachford","title":"Douglas—Rachford","text":"Repeat until a convergence criterion is reached","category":"page"},{"location":"solvers/DouglasRachford/","page":"Douglas—Rachford","title":"Douglas—Rachford","text":"Compute r^(k) = operatornamerefl_λ goperatornamerefl_λ h(q^(k))\nWithin that operation, store p^(k+1) = operatornameprox_λ h(q^(k)) which is the prox the inner reflection reflects at.\nCompute q^(k+1) = g(alpha_k q^(k) r^(k)), where g is a curve approximating the shortest geodesic, provided by a retraction and its inverse\nSet k = 
k+1","category":"page"},{"location":"solvers/DouglasRachford/#Result","page":"Douglas—Rachford","title":"Result","text":"","category":"section"},{"location":"solvers/DouglasRachford/","page":"Douglas—Rachford","title":"Douglas—Rachford","text":"The result is given by the last computed p^(K) at the last iterate K.","category":"page"},{"location":"solvers/DouglasRachford/","page":"Douglas—Rachford","title":"Douglas—Rachford","text":"For the parallel version, the first proximal map is a vectorial version where in each component one prox is applied to the corresponding copy of t_k, and the second proximal map corresponds to the indicator function of the set, where all copies are equal (in mathcal M^n, where n is the number of copies), leading to the second prox being the Riemannian mean.","category":"page"},{"location":"solvers/DouglasRachford/#Interface","page":"Douglas—Rachford","title":"Interface","text":"","category":"section"},{"location":"solvers/DouglasRachford/","page":"Douglas—Rachford","title":"Douglas—Rachford","text":" DouglasRachford\n DouglasRachford!","category":"page"},{"location":"solvers/DouglasRachford/#Manopt.DouglasRachford","page":"Douglas—Rachford","title":"Manopt.DouglasRachford","text":"DouglasRachford(M, f, proxes_f, p)\nDouglasRachford(M, mpo, p)\nDouglasRachford!(M, f, proxes_f, p)\nDouglasRachford!(M, mpo, p)\n\nCompute the Douglas-Rachford algorithm on the manifold mathcal M, starting from p, given the (two) proximal maps proxes_f, see [BPS16].\n\nFor k2 proximal maps, the problem is reformulated using the parallel Douglas-Rachford: a vectorial proximal map on the power manifold mathcal M^k is introduced as the first proximal map and the second proximal map is set to the mean (Riemannian center of mass). 
This hence also boils down to two proximal maps, though each evaluates proximal maps in parallel, that is, component-wise in a vector.\n\nnote: Note\n\n\nThe parallel Douglas-Rachford does not work in-place for now, since while creating the new starting point p' on the power manifold, a copy of p is created.\n\nIf you provide a ManifoldProximalMapObjective mpo instead, the proximal maps are kept unchanged.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\nproxes_f: functions of the form (M, λ, p) -> q, each performing a proximal map, where λ denotes the proximal parameter, for each of the summands of f. These can also be given in the InplaceEvaluation variants (M, q, λ, p) -> q computing in place of q.\np: a point on the manifold mathcal M\n\nKeyword arguments\n\nα= k -> 0.9: relaxation of the step from old to new iterate, to be precise p^(k+1) = g(α_k p^(k) q^(k)), where q^(k) is the result of the double reflection involved in the DR algorithm and g is a curve induced by the retraction and its inverse.\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses. This is used both in the relaxation step as well as in the reflection, unless you set R yourself.\nλ= k -> 1.0: function to provide the value for the proximal parameter λ_k\nR=reflect(!): method employed in the iteration to perform the reflection of p at the prox of p. This uses by default reflect or reflect! 
depending on reflection_evaluation and the retraction and inverse retraction specified by retraction_method and inverse_retraction_method, respectively.\nreflection_evaluation=AllocatingEvaluation(): specify whether R works in-place or allocating\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions. This is used both in the relaxation step as well as in the reflection, unless you set R yourself.\nstopping_criterion=StopAfterIteration(200)|StopWhenChangeLess(1e-5): a functor indicating that the stopping criterion is fulfilled\nparallel=false: indicate whether to use a parallel Douglas-Rachford or not.\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/DouglasRachford/#Manopt.DouglasRachford!","page":"Douglas—Rachford","title":"Manopt.DouglasRachford!","text":"DouglasRachford(M, f, proxes_f, p)\nDouglasRachford(M, mpo, p)\nDouglasRachford!(M, f, proxes_f, p)\nDouglasRachford!(M, mpo, p)\n\nCompute the Douglas-Rachford algorithm on the manifold mathcal M, starting from p, given the (two) proximal maps proxes_f, see [BPS16].\n\nFor k2 proximal maps, the problem is reformulated using the parallel Douglas-Rachford: a vectorial proximal map on the power manifold mathcal M^k is introduced as the first proximal map and the second proximal map is set to the mean (Riemannian center of mass). 
This hence also boils down to two proximal maps, though each evaluates proximal maps in parallel, that is, component-wise in a vector.\n\nnote: Note\n\n\nThe parallel Douglas-Rachford does not work in-place for now, since while creating the new starting point p' on the power manifold, a copy of p is created.\n\nIf you provide a ManifoldProximalMapObjective mpo instead, the proximal maps are kept unchanged.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\nproxes_f: functions of the form (M, λ, p) -> q, each performing a proximal map, where λ denotes the proximal parameter, for each of the summands of f. These can also be given in the InplaceEvaluation variants (M, q, λ, p) -> q computing in place of q.\np: a point on the manifold mathcal M\n\nKeyword arguments\n\nα= k -> 0.9: relaxation of the step from old to new iterate, to be precise p^(k+1) = g(α_k p^(k) q^(k)), where q^(k) is the result of the double reflection involved in the DR algorithm and g is a curve induced by the retraction and its inverse.\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses. This is used both in the relaxation step as well as in the reflection, unless you set R yourself.\nλ= k -> 1.0: function to provide the value for the proximal parameter λ_k\nR=reflect(!): method employed in the iteration to perform the reflection of p at the prox of p. This uses by default reflect or reflect! 
depending on reflection_evaluation and the retraction and inverse retraction specified by retraction_method and inverse_retraction_method, respectively.\nreflection_evaluation=AllocatingEvaluation(): specify whether R works in-place or allocating\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions. This is used both in the relaxation step as well as in the reflection, unless you set R yourself.\nstopping_criterion=StopAfterIteration(200)|StopWhenChangeLess(1e-5): a functor indicating that the stopping criterion is fulfilled\nparallel=false: indicate whether to use a parallel Douglas-Rachford or not.\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/DouglasRachford/#State","page":"Douglas—Rachford","title":"State","text":"","category":"section"},{"location":"solvers/DouglasRachford/","page":"Douglas—Rachford","title":"Douglas—Rachford","text":" DouglasRachfordState","category":"page"},{"location":"solvers/DouglasRachford/#Manopt.DouglasRachfordState","page":"Douglas—Rachford","title":"Manopt.DouglasRachfordState","text":"DouglasRachfordState <: AbstractManoptSolverState\n\nStore all options required for the DouglasRachford algorithm.\n\nFields\n\nα: relaxation of the step from old to new iterate, to be precise x^(k+1) = g(α(k) x^(k) t^(k)), where t^(k) is the result of the double reflection involved in the DR algorithm\ninverse_retraction_method::AbstractInverseRetractionMethod: an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nλ: function to provide the value for the proximal parameter during the calls\nparallel: indicate whether 
to use a parallel Douglas-Rachford or not.\nR: method employed in the iteration to perform the reflection of x at the prox p.\np::P: a point on the manifold mathcal M storing the current iterate. For the parallel Douglas-Rachford, this is not a value from the PowerManifold manifold but the mean.\nreflection_evaluation: whether R works in-place or allocating\nretraction_method::AbstractRetractionMethod: a retraction operatornameretr to use, see the section on retractions\ns: the last result of the double reflection at the proximal maps relaxed by α.\nstop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled\n\nConstructor\n\nDouglasRachfordState(M::AbstractManifold; kwargs...)\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\n\nKeyword arguments\n\nα= k -> 0.9: relaxation of the step from old to new iterate, to be precise x^(k+1) = g(α(k) x^(k) t^(k)), where t^(k) is the result of the double reflection involved in the DR algorithm\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nλ= k -> 1.0: function to provide the value for the proximal parameter during the calls\np=rand(M): a point on the manifold mathcal M to specify the initial value\nR=reflect(!): method employed in the iteration to perform the reflection of p at the prox of p; which function is used depends on reflection_evaluation.\nreflection_evaluation=AllocatingEvaluation(): specify whether the reflection works in-place or allocating (default)\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstopping_criterion=StopAfterIteration(300): a functor indicating that the stopping criterion is fulfilled\nparallel=false: indicate whether to use a parallel Douglas-Rachford or 
not.\n\n\n\n\n\n","category":"type"},{"location":"solvers/DouglasRachford/","page":"Douglas—Rachford","title":"Douglas—Rachford","text":"For specific DebugActions and RecordActions see also Cyclic Proximal Point.","category":"page"},{"location":"solvers/DouglasRachford/","page":"Douglas—Rachford","title":"Douglas—Rachford","text":"Furthermore, this solver has a shorthand notation for the involved reflection.","category":"page"},{"location":"solvers/DouglasRachford/","page":"Douglas—Rachford","title":"Douglas—Rachford","text":"reflect","category":"page"},{"location":"solvers/DouglasRachford/#Manopt.reflect","page":"Douglas—Rachford","title":"Manopt.reflect","text":"reflect(M, f, x; kwargs...)\nreflect!(M, q, f, x; kwargs...)\n\nReflect the point x from the manifold M at the point f(x) of the function f mathcal M mathcal M, given by\n\n operatornamerefl_f(x) = operatornamerefl_f(x)(x)\n\nCompute the result in q.\n\nSee also reflect(M, p, x), to which the keywords are also passed.\n\n\n\n\n\nreflect(M, p, x; kwargs...)\nreflect!(M, q, p, x; kwargs...)\n\nReflect the point x from the manifold M at point p, given by\n\noperatornamerefl_p(q) = operatornameretr_p(-operatornameretr^-1_p q)\n\nwhere operatornameretr and operatornameretr^-1 denote a retraction and an inverse retraction, respectively. This can also be done in place of q.\n\nKeyword arguments\n\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\n\nand for the reflect! additionally\n\nX=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal M as temporary memory to compute the inverse retraction in place; 
otherwise this is the memory that would be allocated anyways.\n\n\n\n\n\nreflect(M, f, x; kwargs...)\nreflect!(M, q, f, x; kwargs...)\n\nReflect the point x from the manifold M at the point f(x) of the function f mathcal M mathcal M, given by\n\n operatornamerefl_f(x) = operatornamerefl_f(x)(x)\n\nCompute the result in q.\n\nSee also reflect(M, p, x), to which the keywords are also passed.\n\n\n\n\n\nreflect(M, p, x; kwargs...)\nreflect!(M, q, p, x; kwargs...)\n\nReflect the point x from the manifold M at point p, given by\n\noperatornamerefl_p(q) = operatornameretr_p(-operatornameretr^-1_p q)\n\nwhere operatornameretr and operatornameretr^-1 denote a retraction and an inverse retraction, respectively.\n\nThis can also be done in place of q.\n\nKeyword arguments\n\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\n\nand for the reflect! additionally\n\nX=zero_vector(M, p): a temporary memory to compute the inverse retraction in place; otherwise this is the memory that would be allocated anyways.\n\n\n\n\n\n","category":"function"},{"location":"solvers/DouglasRachford/#sec-dr-technical-details","page":"Douglas—Rachford","title":"Technical details","text":"","category":"section"},{"location":"solvers/DouglasRachford/","page":"Douglas—Rachford","title":"Douglas—Rachford","text":"The DouglasRachford solver requires the following functions of a manifold to be available:","category":"page"},{"location":"solvers/DouglasRachford/","page":"Douglas—Rachford","title":"Douglas—Rachford","text":"A retract!(M, q, p, X); it is recommended to set the default_retraction_method to a favourite retraction. 
If this default is set, a retraction_method= does not have to be specified.\nAn inverse_retract!(M, X, p, q); it is recommended to set the default_inverse_retraction_method to a favourite inverse retraction. If this default is set, an inverse_retraction_method= does not have to be specified.\nA copyto!(M, q, p) and copy(M, p) for points.","category":"page"},{"location":"solvers/DouglasRachford/","page":"Douglas—Rachford","title":"Douglas—Rachford","text":"By default, one of the stopping criteria is StopWhenChangeLess, which requires","category":"page"},{"location":"solvers/DouglasRachford/","page":"Douglas—Rachford","title":"Douglas—Rachford","text":"An inverse_retract!(M, X, p, q); it is recommended to set the default_inverse_retraction_method to a favourite inverse retraction. If this default is set, an inverse_retraction_method= does not have to be specified; alternatively, the distance(M, p, q) for said default inverse retraction can be used.","category":"page"},{"location":"solvers/DouglasRachford/#Literature","page":"Douglas—Rachford","title":"Literature","text":"","category":"section"},{"location":"solvers/DouglasRachford/","page":"Douglas—Rachford","title":"Douglas—Rachford","text":"","category":"page"},{"location":"tutorials/CountAndCache/#How-to-count-and-cache-function-calls","page":"Count and use a cache","title":"How to count and cache function calls","text":"","category":"section"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"Ronny Bergmann","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"In this tutorial, we want to investigate the caching and counting (statistics) features of Manopt.jl. 
We reuse the optimization tasks from the introductory tutorial Get started: optimize!.","category":"page"},{"location":"tutorials/CountAndCache/#Introduction","page":"Count and use a cache","title":"Introduction","text":"","category":"section"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"There are surely many ways to keep track of, for example, how often the cost function is called, for instance with a functor, as we used in an example in How to Record Data","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"mutable struct MyCost{I<:Integer}\n count::I\nend\nMyCost() = MyCost{Int64}(0)\nfunction (c::MyCost)(M, x)\n c.count += 1\n # [ .. Actual implementation of the cost here ]\nend","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"This still leaves a bit of work to the user, especially for tracking more than just the number of cost function evaluations.","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"When a function like the objective or gradient is expensive to compute, it may make sense to cache its results. Manopt.jl tries to minimize the number of repeated calls, but sometimes they are necessary and harmless when the function is cheap to compute. Caching of expensive function calls can, for example, be added by the user using Memoize.jl. 
The approach in the solvers of Manopt.jl aims to simplify adding both these capabilities on the level of calling a solver.","category":"page"},{"location":"tutorials/CountAndCache/#Technical-background","page":"Count and use a cache","title":"Technical background","text":"","category":"section"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"The two ingredients for a solver in Manopt.jl are the AbstractManoptProblem and the AbstractManoptSolverState, where the former consists of the domain, that is, the AbstractManifold and the AbstractManifoldObjective.","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"Both recording and debug capabilities are implemented in a decorator pattern to the solver state. They can be easily added using the record= and debug= keywords in any solver call. This pattern was recently extended, such that also the objective can be decorated. This is how both caching and counting are implemented, as decorators of the AbstractManifoldObjective, and hence, for example, changing/extending the behaviour of a call to get_cost.","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"Let’s finish off the technical background by loading the necessary packages. 
Besides Manopt.jl and Manifolds.jl we also need LRUCache.jl, which is (since Julia 1.9) a weak dependency and provides the least recently used strategy for our caches.","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"using Manopt, Manifolds, Random, LRUCache, LinearAlgebra, ManifoldDiff\nusing ManifoldDiff: grad_distance","category":"page"},{"location":"tutorials/CountAndCache/#Counting","page":"Count and use a cache","title":"Counting","text":"","category":"section"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"We first define our task, the Riemannian Center of Mass from the Get started: optimize! tutorial.","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"n = 100\nσ = π / 8\nM = Sphere(2)\np = 1 / sqrt(2) * [1.0, 0.0, 1.0]\nRandom.seed!(42)\ndata = [exp(M, p, σ * rand(M; vector_at=p)) for i in 1:n];\nf(M, p) = sum(1 / (2 * n) * distance.(Ref(M), Ref(p), data) .^ 2)\ngrad_f(M, p) = sum(1 / n * grad_distance.(Ref(M), data, Ref(p)));","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"To now count how often the cost and the gradient are called, we use the count= keyword argument that works in any solver to specify the elements of the objective whose calls we want to count. A full list is available in the documentation of the AbstractManifoldObjective. To also see the result, we have to set return_objective=true. This returns (objective, p) instead of just the solver result p. 
We can further also set return_state=true to get even more information about the solver run.","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"gradient_descent(M, f, grad_f, data[1]; count=[:Cost, :Gradient], return_objective=true, return_state=true)","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"# Solver state for `Manopt.jl`s Gradient Descent\nAfter 66 iterations\n\n## Parameters\n* retraction method: ExponentialRetraction()\n\n## Stepsize\nArmijoLinesearch(;\n initial_stepsize=1.0\n retraction_method=ExponentialRetraction()\n contraction_factor=0.95\n sufficient_decrease=0.1\n)\n\n## Stopping criterion\n\nStop When _one_ of the following are fulfilled:\n Max Iteration 200: not reached\n |grad f| < 1.0e-8: reached\nOverall: reached\nThis indicates convergence: Yes\n\n## Statistics on function calls\n * :Gradient : 199\n * :Cost : 275","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"And we see that statistics are shown at the end.","category":"page"},{"location":"tutorials/CountAndCache/#Caching","page":"Count and use a cache","title":"Caching","text":"","category":"section"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"To now also cache these calls, we can use the cache= keyword argument. Since now both the cache and the count “extend” the capability of the objective, the order is important: on the high-level interface, the count is treated first, which means that only actual function calls and not cache look-ups are counted. With the proper initialisation, you can use any caches here that support the get!(function, cache, key) update. All parts of the objective that can currently be cached are listed at ManifoldCachedObjective. 
The solver call has a keyword cache that takes a tuple (c, vs, n) of three arguments, where c is a symbol for the type of cache, vs is a vector of symbols specifying which calls to cache, and n is the size of the cache. If the last element is not provided, a suitable default (currently n=10) is used.","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"Here we want to use c=:LRU caches for vs=[:Cost, :Gradient] with a size of n=25.","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"r = gradient_descent(M, f, grad_f, data[1];\n count=[:Cost, :Gradient],\n cache=(:LRU, [:Cost, :Gradient], 25),\n return_objective=true, return_state=true)","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"# Solver state for `Manopt.jl`s Gradient Descent\nAfter 66 iterations\n\n## Parameters\n* retraction method: ExponentialRetraction()\n\n## Stepsize\nArmijoLinesearch(;\n initial_stepsize=1.0\n retraction_method=ExponentialRetraction()\n contraction_factor=0.95\n sufficient_decrease=0.1\n)\n\n## Stopping criterion\n\nStop When _one_ of the following are fulfilled:\n Max Iteration 200: not reached\n |grad f| < 1.0e-8: reached\nOverall: reached\nThis indicates convergence: Yes\n\n## Cache\n * :Cost : 25/25 entries of type Float64 used\n * :Gradient : 25/25 entries of type Vector{Float64} used\n\n## Statistics on function calls\n * :Gradient : 66\n * :Cost : 149","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"Since the default setup with ArmijoLinesearch needs the gradient and the cost, and similarly the stopping criterion might (independently) evaluate the gradient, the caching is quite helpful here.","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a 
cache","title":"Count and use a cache","text":"And of course, also for this advanced return value of the solver, we can still access the result as usual:","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"get_solver_result(r)","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"3-element Vector{Float64}:\n 0.6868392807355564\n 0.006531599748261925\n 0.7267799809043942","category":"page"},{"location":"tutorials/CountAndCache/#Advanced-caching-examples","page":"Count and use a cache","title":"Advanced caching examples","text":"","category":"section"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"There are more options than caching single calls to specific parts of the objective. For example, you may want to cache intermediate results of computing the cost and share that with the gradient computation. We present three solutions to this:","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"An easy approach from within Manopt.jl: the ManifoldCostGradientObjective\nA shared storage approach using a functor\nA shared (internal) cache approach also using a functor","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"For that we switch to another example: the Rayleigh quotient. 
We aim to maximize the Rayleigh quotient displaystylefracx^mathrmTAxx^mathrmTx, for some Aℝ^m+1times m+1 and xℝ^m+1, but since we consider this on the sphere, and Manopt.jl (as many other optimization toolboxes) minimizes, we consider","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"g(p) = -p^mathrmTApqquad pmathbb S^m","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"The Euclidean gradient (that is, in ℝ^m+1) is actually just nabla g(p) = -2Ap, and the Riemannian gradient is the projection of nabla g(p) onto the tangent space T_pmathbb S^m.","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"m = 25\nRandom.seed!(42)\nA = randn(m + 1, m + 1)\nA = Symmetric(A)\np_star = eigvecs(A)[:, end] # minimizer (or similarly -p)\nf_star = -eigvals(A)[end] # cost (note that we get minus the largest eigenvalue)\n\nN = Sphere(m);\n\ng(M, p) = -p' * A * p\n∇g(p) = -2 * A * p\ngrad_g(M, p) = project(M, p, ∇g(p))\ngrad_g!(M, X, p) = project!(M, X, p, ∇g(p))","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"grad_g!
 (generic function with 1 method)","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"But since both the cost and the gradient require the computation of the matrix-vector product Ap, it might be beneficial to only compute this once.","category":"page"},{"location":"tutorials/CountAndCache/#The-[ManifoldCostGradientObjective](@ref)-approach","page":"Count and use a cache","title":"The ManifoldCostGradientObjective approach","text":"","category":"section"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"The ManifoldCostGradientObjective uses a combined function to compute both the gradient and the cost at the same time. We define the in-place variant as","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"function g_grad_g!(M::AbstractManifold, X, p)\n X .= -A*p\n c = p'*X\n X .*= 2\n project!(M, X, p, X)\n return (c, X)\nend","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"g_grad_g! (generic function with 1 method)","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"where we only compute the matrix-vector product once. The small disadvantage might be that we always compute both the gradient and the cost. 
Luckily, the cache we used before takes this into account and caches both results, such that we indeed end up computing A*p only once when asking for the cost and the gradient.","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"Let’s compare both methods.","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"p0 = [(1/5 .* ones(5))..., zeros(m-4)...];\n@time s1 = gradient_descent(N, g, grad_g!, p0;\n stopping_criterion = StopWhenGradientNormLess(1e-5),\n evaluation=InplaceEvaluation(),\n count=[:Cost, :Gradient],\n cache=(:LRU, [:Cost, :Gradient], 25),\n return_objective=true,\n)","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":" 1.343181 seconds (2.39 M allocations: 121.701 MiB, 1.51% gc time, 99.65% compilation time)\n\n## Cache\n * :Cost : 25/25 entries of type Float64 used\n * :Gradient : 25/25 entries of type Vector{Float64} used\n\n## Statistics on function calls\n * :Gradient : 602\n * :Cost : 1449\n\nTo access the solver result, call `get_solver_result` on this variable.","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"versus","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"obj = ManifoldCostGradientObjective(g_grad_g!; evaluation=InplaceEvaluation())\n@time s2 = gradient_descent(N, obj, p0;\n stopping_criterion=StopWhenGradientNormLess(1e-5),\n count=[:Cost, :Gradient],\n cache=(:LRU, [:Cost, :Gradient], 25),\n return_objective=true,\n)","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":" 0.790997 seconds (1.22 M allocations: 70.148 MiB, 2.43% gc time, 98.67% compilation time)\n\n## Cache\n * 
:Cost : 25/25 entries of type Float64 used\n * :Gradient : 25/25 entries of type Vector{Float64} used\n\n## Statistics on function calls\n * :Gradient : 1448\n * :Cost : 1448\n\nTo access the solver result, call `get_solver_result` on this variable.","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"First of all, both yield the same result","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"p1 = get_solver_result(s1)\np2 = get_solver_result(s2)\n[distance(N, p1, p2), g(N, p1), g(N, p2), f_star]","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"4-element Vector{Float64}:\n 0.0\n -7.8032957637779\n -7.8032957637779\n -7.803295763793949","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"and we can see that the combined number of evaluations is once 2051 and once just the number of cost evaluations, 1448, since the second variant always evaluates cost and gradient together. Note that the additional 846 gradient evaluations there amount to merely a multiplication by 2. On the other hand, the additional caching of the gradient in these cases might be less beneficial. It is beneficial when the gradient and the cost are very often required together.","category":"page"},{"location":"tutorials/CountAndCache/#A-shared-storage-approach-using-a-functor","page":"Count and use a cache","title":"A shared storage approach using a functor","text":"","category":"section"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"An alternative to the previous approach is the usage of a functor that introduces a “shared storage” of the result of computing A*p. 
We additionally have to store p though, since we have to make sure that we are still evaluating the cost and/or gradient at the same point at which the cached A*p was computed. We again consider the (more efficient) in-place variant. This can be done as follows","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"struct StorageG{T,M}\n A::M\n Ap::T\n p::T\nend\nfunction (g::StorageG)(::Val{:Cost}, M::AbstractManifold, p)\n if !(p==g.p) #We are at a new point -> Update\n g.Ap .= g.A*p\n g.p .= p\n end\n return -g.p'*g.Ap\nend\nfunction (g::StorageG)(::Val{:Gradient}, M::AbstractManifold, X, p)\n if !(p==g.p) #We are at a new point -> Update\n g.Ap .= g.A*p\n g.p .= p\n end\n X .= -2 .* g.Ap\n project!(M, X, p, X)\n return X\nend","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"Here we use the first parameter to distinguish both functions. 
For the mutating case, the signatures are different regardless of the additional argument, but for the allocating case, the signatures of the cost and the gradient function are the same.","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"#Define the new functor\nstorage_g = StorageG(A, zero(p0), zero(p0))\n# and cost and gradient that use this functor as\ng3(M,p) = storage_g(Val(:Cost), M, p)\ngrad_g3!(M, X, p) = storage_g(Val(:Gradient), M, X, p)\n@time s3 = gradient_descent(N, g3, grad_g3!, p0;\n stopping_criterion = StopWhenGradientNormLess(1e-5),\n evaluation=InplaceEvaluation(),\n count=[:Cost, :Gradient],\n cache=(:LRU, [:Cost, :Gradient], 2),\n return_objective=true#, return_state=true\n)","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":" 0.579056 seconds (559.15 k allocations: 29.645 MiB, 99.24% compilation time)\n\n## Cache\n * :Cost : 2/2 entries of type Float64 used\n * :Gradient : 2/2 entries of type Vector{Float64} used\n\n## Statistics on function calls\n * :Gradient : 602\n * :Cost : 1449\n\nTo access the solver result, call `get_solver_result` on this variable.","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"This of course still yields the same result","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"p3 = get_solver_result(s3)\ng(N, p3) - f_star","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"1.6049384043981263e-11","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"And while the cost and gradient evaluations are again counted separately, we can observe that the allocations are less 
than half of the previous approach.","category":"page"},{"location":"tutorials/CountAndCache/#A-local-cache-approach","page":"Count and use a cache","title":"A local cache approach","text":"","category":"section"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"This variant is very similar to the previous one, but uses a whole cache instead of just one place to store A*p. This makes the code a bit nicer, and it is possible to store more than just the last point p at which the cost or the gradient was called.","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"struct CacheG{C,M}\n A::M\n cache::C\nend\nfunction (g::CacheG)(::Val{:Cost}, M, p)\n Ap = get!(g.cache, copy(M,p)) do\n g.A*p\n end\n return -p'*Ap\nend\nfunction (g::CacheG)(::Val{:Gradient}, M, X, p)\n Ap = get!(g.cache, copy(M,p)) do\n g.A*p\n end\n X .= -2 .* Ap\n project!(M, X, p, X)\n return X\nend","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"However, the resulting solver run is not always faster, since maintaining the whole cache, instead of storing just Ap and p, is a bit more costly. 
The tradeoff is whether this pays off.","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"#Define the new functor\ncache_g = CacheG(A, LRU{typeof(p0),typeof(p0)}(; maxsize=25))\n# and cost and gradient that use this functor as\ng4(M,p) = cache_g(Val(:Cost), M, p)\ngrad_g4!(M, X, p) = cache_g(Val(:Gradient), M, X, p)\n@time s4 = gradient_descent(N, g4, grad_g4!, p0;\n stopping_criterion = StopWhenGradientNormLess(1e-5),\n evaluation=InplaceEvaluation(),\n count=[:Cost, :Gradient],\n cache=(:LRU, [:Cost, :Gradient], 25),\n return_objective=true,\n)","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":" 0.518644 seconds (519.16 k allocations: 27.893 MiB, 3.48% gc time, 99.00% compilation time)\n\n## Cache\n * :Cost : 25/25 entries of type Float64 used\n * :Gradient : 25/25 entries of type Vector{Float64} used\n\n## Statistics on function calls\n * :Gradient : 602\n * :Cost : 1449\n\nTo access the solver result, call `get_solver_result` on this variable.","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"and for safety let’s verify that we are reasonably close","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"p4 = get_solver_result(s4)\ng(N, p4) - f_star","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"1.6049384043981263e-11","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"For this example, and maybe even for gradient_descent in general, this additional (second, inner) cache does not improve the result further; it is about the same effort both time and 
allocation-wise.","category":"page"},{"location":"tutorials/CountAndCache/#Summary","page":"Count and use a cache","title":"Summary","text":"","category":"section"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"While the second approach of ManifoldCostGradientObjective is very easy to implement, both the storage and the (local) cache approach are more efficient. All three are an improvement over the first implementation without sharing interim results. The results with storage or cache have further advantage of being more flexible, since the stored information could also be reused in a third function, for example when also computing the Hessian.","category":"page"},{"location":"tutorials/CountAndCache/#Technical-details","page":"Count and use a cache","title":"Technical details","text":"","category":"section"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"This tutorial is cached. It was last run on the following package versions.","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"using Pkg\nPkg.status()","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"Status `~/work/Manopt.jl/Manopt.jl/tutorials/Project.toml`\n [6e4b80f9] BenchmarkTools v1.5.0\n⌅ [5ae59095] Colors v0.12.11\n [31c24e10] Distributions v0.25.113\n [26cc04aa] FiniteDifferences v0.12.32\n [7073ff75] IJulia v1.26.0\n [8ac3fa9e] LRUCache v1.6.1\n [af67fdf4] ManifoldDiff v0.3.13\n [1cead3c2] Manifolds v0.10.7\n [3362f125] ManifoldsBase v0.15.22\n [0fc0a36d] Manopt v0.5.3 `~/work/Manopt.jl/Manopt.jl`\n [91a5bcdd] Plots v1.40.9\n [731186ca] RecursiveArrayTools v3.27.4\nInfo Packages marked with ⌅ have new versions available but compatibility constraints restrict them from upgrading. 
To see why use `status --outdated`","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"using Dates\nnow()","category":"page"},{"location":"tutorials/CountAndCache/","page":"Count and use a cache","title":"Count and use a cache","text":"2024-11-21T20:36:59.676","category":"page"},{"location":"tutorials/InplaceGradient/#Speedup-using-in-place-evaluation","page":"Speedup using in-place computations","title":"Speedup using in-place evaluation","text":"","category":"section"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"Ronny Bergmann","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"When it comes to time critical operations, a main ingredient in Julia is given by mutating functions, that is those that compute in place without additional memory allocations. In the following, we illustrate how to do this with Manopt.jl.","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"Let’s start with the same function as in Get started: optimize! 
and compute the mean of some points, except that here we use the sphere mathbb S^30 and n=800 points.","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"From the aforementioned example.","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"We first load all necessary packages.","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"using Manopt, Manifolds, Random, BenchmarkTools\nusing ManifoldDiff: grad_distance, grad_distance!\nRandom.seed!(42);","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"And set up our data","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"Random.seed!(42)\nm = 30\nM = Sphere(m)\nn = 800\nσ = π / 8\np = zeros(Float64, m + 1)\np[2] = 1.0\ndata = [exp(M, p, σ * rand(M; vector_at=p)) for i in 1:n];","category":"page"},{"location":"tutorials/InplaceGradient/#Classical-Definition","page":"Speedup using in-place computations","title":"Classical Definition","text":"","category":"section"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"The variant from the previous tutorial defines a cost f(p) and its gradient grad f(p) as","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"f(M, p) = sum(1 / (2 * n) * distance.(Ref(M), Ref(p), data) .^ 2)\ngrad_f(M, p) = sum(1 / n * grad_distance.(Ref(M), data, 
Ref(p)))","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"grad_f (generic function with 1 method)","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"We further set the stopping criterion to be a little more strict. Then we obtain","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"sc = StopWhenGradientNormLess(3e-10)\np0 = zeros(Float64, m + 1); p0[1] = 1/sqrt(2); p0[2] = 1/sqrt(2)\nm1 = gradient_descent(M, f, grad_f, p0; stopping_criterion=sc);","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"We can also benchmark this as","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"@benchmark gradient_descent($M, $f, $grad_f, $p0; stopping_criterion=$sc)","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"BenchmarkTools.Trial: 106 samples with 1 evaluation.\n Range (min … max): 46.774 ms … 50.326 ms ┊ GC (min … max): 2.31% … 2.47%\n Time (median): 47.207 ms ┊ GC (median): 2.45%\n Time (mean ± σ): 47.364 ms ± 608.514 μs ┊ GC (mean ± σ): 2.53% ± 0.25%\n\n ▄▇▅▇█▄▇ \n ▅▇▆████████▇▇▅▅▃▁▆▁▁▁▅▁▁▅▁▃▃▁▁▁▁▁▁▁▁▁▁▁▁▃▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▅ ▃\n 46.8 ms Histogram: frequency by time 50.2 ms <\n\n Memory estimate: 182.50 MiB, allocs estimate: 615822.","category":"page"},{"location":"tutorials/InplaceGradient/#In-place-Computation-of-the-Gradient","page":"Speedup using in-place computations","title":"In-place Computation of the 
Gradient","text":"","category":"section"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"We can reduce the memory allocations by implementing the gradient to be evaluated in-place. We do this by using a functor. The motivation is twofold: on one hand, we want to avoid variables from the global scope, for example the manifold M or the data, being used within the function. Doing the same for more complicated cost functions might also be worth considering.","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"Here, we store the data (as a reference) and introduce temporary memory in order to avoid reallocating memory in every grad_distance computation. We get","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"struct GradF!{TD,TTMP}\n data::TD\n tmp::TTMP\nend\nfunction (grad_f!::GradF!)(M, X, p)\n fill!(X, 0)\n for di in grad_f!.data\n grad_distance!(M, grad_f!.tmp, di, p)\n X .+= grad_f!.tmp\n end\n X ./= length(grad_f!.data)\n return X\nend","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"For the actual call to the solver, we first have to generate an instance of GradF! and tell the solver that the gradient is provided in an InplaceEvaluation. We can further also use gradient_descent! to work in place of the initial point we pass.","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"grad_f2! 
= GradF!(data, similar(data[1]))\nm2 = deepcopy(p0)\ngradient_descent!(\n M, f, grad_f2!, m2; evaluation=InplaceEvaluation(), stopping_criterion=sc\n);","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"We can again benchmark this","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"@benchmark gradient_descent!(\n $M, $f, $grad_f2!, m2; evaluation=$(InplaceEvaluation()), stopping_criterion=$sc\n) setup = (m2 = deepcopy($p0))","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"BenchmarkTools.Trial: 176 samples with 1 evaluation.\n Range (min … max): 27.358 ms … 84.206 ms ┊ GC (min … max): 0.00% … 0.00%\n Time (median): 27.768 ms ┊ GC (median): 0.00%\n Time (mean ± σ): 28.504 ms ± 4.338 ms ┊ GC (mean ± σ): 0.60% ± 1.96%\n\n ▂█▇▂ ▂ \n ▆▇████▆█▆▆▄▄▃▄▄▃▃▃▁▃▃▃▃▃▃▃▃▃▄▃▃▃▃▃▃▁▃▁▁▃▁▁▁▁▁▁▃▃▁▁▃▃▁▁▁▁▃▃▃ ▃\n 27.4 ms Histogram: frequency by time 31.4 ms <\n\n Memory estimate: 3.83 MiB, allocs estimate: 5797.","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"which is faster by about a factor of 2 compared to the first solver-call. 
Note that the results m1 and m2 are of course the same.","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"distance(M, m1, m2)","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"2.4669338186126805e-17","category":"page"},{"location":"tutorials/InplaceGradient/#Technical-details","page":"Speedup using in-place computations","title":"Technical details","text":"","category":"section"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"This tutorial is cached. It was last run on the following package versions.","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"using Pkg\nPkg.status()","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"Status `~/Repositories/Julia/Manopt.jl/tutorials/Project.toml`\n [6e4b80f9] BenchmarkTools v1.5.0\n [5ae59095] Colors v0.12.11\n [31c24e10] Distributions v0.25.108\n [26cc04aa] FiniteDifferences v0.12.31\n [7073ff75] IJulia v1.24.2\n [8ac3fa9e] LRUCache v1.6.1\n [af67fdf4] ManifoldDiff v0.3.10\n [1cead3c2] Manifolds v0.9.18\n [3362f125] ManifoldsBase v0.15.10\n [0fc0a36d] Manopt v0.4.63 `..`\n [91a5bcdd] Plots v1.40.4","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place computations","text":"using Dates\nnow()","category":"page"},{"location":"tutorials/InplaceGradient/","page":"Speedup using in-place computations","title":"Speedup using in-place 
computations","text":"2024-05-26T13:52:05.613","category":"page"},{"location":"plans/state/#sec-solver-state","page":"Solver State","title":"Solver state","text":"","category":"section"},{"location":"plans/state/","page":"Solver State","title":"Solver State","text":"CurrentModule = Manopt","category":"page"},{"location":"plans/state/","page":"Solver State","title":"Solver State","text":"Given an AbstractManoptProblem, that is a certain optimisation task, the state specifies the solver to use. It contains the parameters of a solver and all fields necessary during the algorithm, for example the current iterate, a StoppingCriterion or a Stepsize.","category":"page"},{"location":"plans/state/","page":"Solver State","title":"Solver State","text":"AbstractManoptSolverState\nget_state\nManopt.get_count","category":"page"},{"location":"plans/state/#Manopt.AbstractManoptSolverState","page":"Solver State","title":"Manopt.AbstractManoptSolverState","text":"AbstractManoptSolverState\n\nA general super type for all solver states.\n\nFields\n\nThe following fields are assumed to be default. If you use different ones, adapt the access functions get_iterate and get_stopping_criterion accordingly\n\np::P: a point on the manifold mathcal M storing the current iterate\nstop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled\n\n\n\n\n\n","category":"type"},{"location":"plans/state/#Manopt.get_state","page":"Solver State","title":"Manopt.get_state","text":"get_state(s::AbstractManoptSolverState, recursive::Bool=true)\n\nreturn the (one step) undecorated AbstractManoptSolverState of the (possibly) decorated s. As long as your decorated state stores the state within s.state and the dispatch_objective_decorator is set to Val{true}, the internal state is extracted automatically.\n\nBy default the state that is stored within a decorated state is assumed to be at s.state. 
Overwrite _get_state(s, ::Val{true}, recursive) to change this behaviour for your states, for both the recursive and the direct case.\n\nIf recursive is set to false, only the outermost decorator is taken away instead of all of them.\n\n\n\n\n\n","category":"function"},{"location":"plans/state/#Manopt.get_count","page":"Solver State","title":"Manopt.get_count","text":"get_count(ams::AbstractManoptSolverState, ::Symbol)\n\nObtain the count for a certain countable size, for example the :Iterations. This function returns 0 if there was nothing to count\n\nAvailable symbols from within the solver state\n\n:Iterations is passed on to the stop field to obtain the iteration at which the solver stopped.\n\n\n\n\n\nget_count(co::ManifoldCountObjective, s::Symbol, mode::Symbol=:None)\n\nGet the number of counts for a certain symbol s.\n\nDepending on the mode different results appear if the symbol does not exist in the dictionary\n\n:None: (default) silent mode, returns -1 for non-existing entries\n:warn: issues a warning if a field does not exist\n:error: issues an error if a field does not exist\n\n\n\n\n\n","category":"function"},{"location":"plans/state/","page":"Solver State","title":"Solver State","text":"Since every subtype of an AbstractManoptSolverState directly relates to a solver, the concrete states are documented together with the corresponding solvers. This page documents the general features available for every state.","category":"page"},{"location":"plans/state/","page":"Solver State","title":"Solver State","text":"A first example is to obtain or set the current iterate. 
This might be useful to continue investigation at the current iterate, or to set up a solver for a next experiment, respectively.","category":"page"},{"location":"plans/state/","page":"Solver State","title":"Solver State","text":"get_iterate\nset_iterate!\nget_gradient(s::AbstractManoptSolverState)\nset_gradient!","category":"page"},{"location":"plans/state/#Manopt.get_iterate","page":"Solver State","title":"Manopt.get_iterate","text":"get_iterate(O::AbstractManoptSolverState)\n\nreturn the (last stored) iterate within an AbstractManoptSolverState. This should usually refer to a single point on the manifold the solver is working on.\n\nBy default this also removes all decorators of the state beforehand.\n\n\n\n\n\nget_iterate(agst::AbstractGradientSolverState)\n\nreturn the iterate stored within gradient options. The default returns agst.p.\n\n\n\n\n\n","category":"function"},{"location":"plans/state/#Manopt.set_iterate!","page":"Solver State","title":"Manopt.set_iterate!","text":"set_iterate!(s::AbstractManoptSolverState, M::AbstractManifold, p)\n\nset the iterate within an AbstractManoptSolverState to some (start) value p.\n\n\n\n\n\nset_iterate!(agst::AbstractGradientSolverState, M, p)\n\nset the (current) iterate stored within an AbstractGradientSolverState to p. The default function modifies s.p.\n\n\n\n\n\n","category":"function"},{"location":"plans/state/#Manopt.get_gradient-Tuple{AbstractManoptSolverState}","page":"Solver State","title":"Manopt.get_gradient","text":"get_gradient(s::AbstractManoptSolverState)\n\nreturn the (last stored) gradient within an AbstractManoptSolverState. 
By default this also undecorates the state beforehand.\n\n\n\n\n\n","category":"method"},{"location":"plans/state/#Manopt.set_gradient!","page":"Solver State","title":"Manopt.set_gradient!","text":"set_gradient!(s::AbstractManoptSolverState, M::AbstractManifold, p, X)\n\nset the gradient within a (possibly decorated) AbstractManoptSolverState to some (start) value X in the tangent space at p.\n\n\n\n\n\nset_gradient!(agst::AbstractGradientSolverState, M, p, X)\n\nset the (current) gradient stored within an AbstractGradientSolverState to X. The default function modifies s.X.\n\n\n\n\n\n","category":"function"},{"location":"plans/state/","page":"Solver State","title":"Solver State","text":"An internal function working on the state and elements within a state is used to pass messages from (sub) activities of a state to the corresponding DebugMessages","category":"page"},{"location":"plans/state/","page":"Solver State","title":"Solver State","text":"get_message","category":"page"},{"location":"plans/state/#Manopt.get_message","page":"Solver State","title":"Manopt.get_message","text":"get_message(du::AbstractManoptSolverState)\n\nget a message (String) from internal functors, in a summary. This should return any message a sub-step might have issued as well.\n\n\n\n\n\n","category":"function"},{"location":"plans/state/","page":"Solver State","title":"Solver State","text":"Furthermore, to access the stopping criterion use","category":"page"},{"location":"plans/state/","page":"Solver State","title":"Solver State","text":"get_stopping_criterion","category":"page"},{"location":"plans/state/#Manopt.get_stopping_criterion","page":"Solver State","title":"Manopt.get_stopping_criterion","text":"get_stopping_criterion(ams::AbstractManoptSolverState)\n\nReturn the StoppingCriterion stored within the AbstractManoptSolverState ams.\n\nFor an undecorated state, this is assumed to be in ams.stop. 
Overwrite _get_stopping_criterion(yms::YMS) to change this for your manopt solver (yms), assuming it has type YMS.\n\n\n\n\n\n","category":"function"},{"location":"plans/state/#Decorators-for-AbstractManoptSolverStates","page":"Solver State","title":"Decorators for AbstractManoptSolverStates","text":"","category":"section"},{"location":"plans/state/","page":"Solver State","title":"Solver State","text":"A solver state can be decorated using the following trait and function to initialize","category":"page"},{"location":"plans/state/","page":"Solver State","title":"Solver State","text":"dispatch_state_decorator\nis_state_decorator\ndecorate_state!","category":"page"},{"location":"plans/state/#Manopt.dispatch_state_decorator","page":"Solver State","title":"Manopt.dispatch_state_decorator","text":"dispatch_state_decorator(s::AbstractManoptSolverState)\n\nIndicate internally whether an AbstractManoptSolverState s is of decorating type, that is, whether it stores (encapsulates) a state in itself, by default in the field s.state.\n\nDecorators indicate this by returning Val{true} for further dispatch.\n\nThe default is Val{false}, so by default a state is not decorated.\n\n\n\n\n\n","category":"function"},{"location":"plans/state/#Manopt.is_state_decorator","page":"Solver State","title":"Manopt.is_state_decorator","text":"is_state_decorator(s::AbstractManoptSolverState)\n\nIndicate whether the AbstractManoptSolverState s is of decorator type.\n\n\n\n\n\n","category":"function"},{"location":"plans/state/#Manopt.decorate_state!","page":"Solver State","title":"Manopt.decorate_state!","text":"decorate_state!(s::AbstractManoptSolverState)\n\ndecorate the AbstractManoptSolverState s with specific decorators.\n\nOptional arguments\n\noptional arguments provide necessary details on the decorators.\n\ndebug=Array{Union{Symbol,DebugAction,String,Int},1}(): a set of symbols representing DebugActions, Strings used as dividers and a sub-sampling integer. 
These are passed as a DebugGroup within :Iteration to the DebugSolverState decorator dictionary. Only exception is :Stop that is passed to :Stop.\nrecord=Array{Union{Symbol,RecordAction,Int},1}(): specify recordings by using Symbols or RecordActions directly. An integer can again be used for only recording every ith iteration.\nreturn_state=false: indicate whether to wrap the options in a ReturnSolverState, indicating that the solver should return options and not (only) the minimizer.\n\nother keywords are ignored.\n\nSee also\n\nDebugSolverState, RecordSolverState, ReturnSolverState\n\n\n\n\n\n","category":"function"},{"location":"plans/state/","page":"Solver State","title":"Solver State","text":"A simple example is the","category":"page"},{"location":"plans/state/","page":"Solver State","title":"Solver State","text":"ReturnSolverState","category":"page"},{"location":"plans/state/#Manopt.ReturnSolverState","page":"Solver State","title":"Manopt.ReturnSolverState","text":"ReturnSolverState{O<:AbstractManoptSolverState} <: AbstractManoptSolverState\n\nThis internal type is used to indicate that the contained AbstractManoptSolverState state should be returned at the end of a solver instead of the usual minimizer.\n\nSee also\n\nget_solver_result\n\n\n\n\n\n","category":"type"},{"location":"plans/state/","page":"Solver State","title":"Solver State","text":"as well as DebugSolverState and RecordSolverState.","category":"page"},{"location":"plans/state/#State-actions","page":"Solver State","title":"State actions","text":"","category":"section"},{"location":"plans/state/","page":"Solver State","title":"Solver State","text":"A state action is a struct for callback functions that can be attached within for example the just mentioned debug decorator or the record decorator.","category":"page"},{"location":"plans/state/","page":"Solver State","title":"Solver 
State","text":"AbstractStateAction","category":"page"},{"location":"plans/state/#Manopt.AbstractStateAction","page":"Solver State","title":"Manopt.AbstractStateAction","text":"AbstractStateAction\n\na common Type for AbstractStateActions that might be triggered in decorators, for example within the DebugSolverState or within the RecordSolverState.\n\n\n\n\n\n","category":"type"},{"location":"plans/state/","page":"Solver State","title":"Solver State","text":"Several state decorators or actions might store intermediate values like the (last) iterate to compute some change or the last gradient. In order to minimise the storage of these, there is a generic StoreStateAction that acts as generic common storage that can be shared among different actions.","category":"page"},{"location":"plans/state/","page":"Solver State","title":"Solver State","text":"StoreStateAction\nget_storage\nhas_storage\nupdate_storage!\nPointStorageKey\nVectorStorageKey","category":"page"},{"location":"plans/state/#Manopt.StoreStateAction","page":"Solver State","title":"Manopt.StoreStateAction","text":"StoreStateAction <: AbstractStateAction\n\ninternal storage for AbstractStateActions to store a tuple of fields from an AbstractManoptSolverStates\n\nThis functor possesses the usual interface of functions called during an iteration and acts on (p, s, k), where p is a AbstractManoptProblem, s is an AbstractManoptSolverState and k is the current iteration.\n\nFields\n\nvalues: a dictionary to store interim values based on certain Symbols\nkeys: a Vector of Symbols to refer to fields of AbstractManoptSolverState\npoint_values: a NamedTuple of mutable values of points on a manifold to be stored in StoreStateAction. Manifold is later determined by AbstractManoptProblem passed to update_storage!.\npoint_init: a NamedTuple of boolean values indicating whether a point in point_values with matching key has been already initialized to a value. 
When it is false, it corresponds to a general value not being stored for the key present in the vector keys.\nvector_values: a NamedTuple of mutable values of tangent vectors on a manifold to be stored in StoreStateAction. Manifold is later determined by AbstractManoptProblem passed to update_storage!. It is not specified at which point the vectors are tangent but for storage it should not matter.\nvector_init: a NamedTuple of boolean values indicating whether a tangent vector in vector_values: with matching key has been already initialized to a value. When it is false, it corresponds to a general value not being stored for the key present in the vector keys.\nonce: whether to update the internal values only once per iteration\nlastStored: last iterate, where this AbstractStateAction was called (to determine once)\n\nTo handle the general storage, use get_storage and has_storage with keys as Symbols. For the point storage use PointStorageKey. For tangent vector storage use VectorStorageKey. 
Point and tangent storage have been optimized to be more efficient.\n\nConstructors\n\nStoreStateAction(s::Vector{Symbol})\n\nThis is equivalent to providing s to the keyword store_fields, just that here, no manifold is necessary for the construction.\n\nStoreStateAction(M)\n\nKeyword arguments\n\nstore_fields (Symbol[])\nstore_points (Symbol[])\nstore_vectors (Symbol[])\n\nas vectors of symbols each referring to fields of the state (lower case symbols) or semantic ones (upper case).\n\np_init (rand(M)) but making sure this is not a number but a (mutable) array\nX_init (zero_vector(M, p_init))\n\nare used to initialize the point and vector storage; change these if you use other types (than the default) for your points/vectors on M.\n\nonce (true) whether to update internal storage only once per iteration or on every update call\n\n\n\n\n\n","category":"type"},{"location":"plans/state/#Manopt.get_storage","page":"Solver State","title":"Manopt.get_storage","text":"get_storage(a::AbstractStateAction, key::Symbol)\n\nReturn the internal value of the AbstractStateAction a at the Symbol key.\n\n\n\n\n\nget_storage(a::AbstractStateAction, ::PointStorageKey{key}) where {key}\n\nReturn the internal value of the AbstractStateAction a at the Symbol key that represents a point.\n\n\n\n\n\nget_storage(a::AbstractStateAction, ::VectorStorageKey{key}) where {key}\n\nReturn the internal value of the AbstractStateAction a at the Symbol key that represents a vector.\n\n\n\n\n\n","category":"function"},{"location":"plans/state/#Manopt.has_storage","page":"Solver State","title":"Manopt.has_storage","text":"has_storage(a::AbstractStateAction, key::Symbol)\n\nReturn whether the AbstractStateAction a has a value stored at the Symbol key.\n\n\n\n\n\nhas_storage(a::AbstractStateAction, ::PointStorageKey{key}) where {key}\n\nReturn whether the AbstractStateAction a has a point value stored at the Symbol key.\n\n\n\n\n\nhas_storage(a::AbstractStateAction, ::VectorStorageKey{key}) where 
{key}\n\nReturn whether the AbstractStateAction a has a tangent vector value stored at the Symbol key.\n\n\n\n\n\n","category":"function"},{"location":"plans/state/#Manopt.update_storage!","page":"Solver State","title":"Manopt.update_storage!","text":"update_storage!(a::AbstractStateAction, amp::AbstractManoptProblem, s::AbstractManoptSolverState)\n\nUpdate the internal values of the AbstractStateAction a to the ones given on the AbstractManoptSolverState s, optimized using the information from amp.\n\n\n\n\n\nupdate_storage!(a::AbstractStateAction, d::Dict{Symbol,<:Any})\n\nUpdate the internal values of the AbstractStateAction a to the ones given in the dictionary d. The values are merged, where the values from d are preferred.\n\n\n\n\n\n","category":"function"},{"location":"plans/state/#Manopt.PointStorageKey","page":"Solver State","title":"Manopt.PointStorageKey","text":"struct PointStorageKey{key} end\n\nRefer to the point storage of StoreStateAction in the get_storage and has_storage functions\n\n\n\n\n\n","category":"type"},{"location":"plans/state/#Manopt.VectorStorageKey","page":"Solver State","title":"Manopt.VectorStorageKey","text":"struct VectorStorageKey{key} end\n\nRefer to the tangent vector storage of StoreStateAction in the get_storage and has_storage functions\n\n\n\n\n\n","category":"type"},{"location":"plans/state/","page":"Solver State","title":"Solver State","text":"as well as two internal functions","category":"page"},{"location":"plans/state/","page":"Solver State","title":"Solver State","text":"_storage_copy_vector\n_storage_copy_point","category":"page"},{"location":"plans/state/#Manopt._storage_copy_vector","page":"Solver State","title":"Manopt._storage_copy_vector","text":"_storage_copy_vector(M::AbstractManifold, X)\n\nMake a copy of tangent vector X from manifold M for storage in StoreStateAction.\n\n\n\n\n\n","category":"function"},{"location":"plans/state/#Manopt._storage_copy_point","page":"Solver 
State","title":"Manopt._storage_copy_point","text":"_storage_copy_point(M::AbstractManifold, p)\n\nMake a copy of point p from manifold M for storage in StoreStateAction.\n\n\n\n\n\n","category":"function"},{"location":"plans/state/#Abstract-states","page":"Solver State","title":"Abstract states","text":"","category":"section"},{"location":"plans/state/","page":"Solver State","title":"Solver State","text":"In a few cases it is useful to have a hierarchy of types. These are","category":"page"},{"location":"plans/state/","page":"Solver State","title":"Solver State","text":"AbstractSubProblemSolverState\nAbstractGradientSolverState\nAbstractHessianSolverState\nAbstractPrimalDualSolverState","category":"page"},{"location":"plans/state/#Manopt.AbstractSubProblemSolverState","page":"Solver State","title":"Manopt.AbstractSubProblemSolverState","text":"AbstractSubProblemSolverState <: AbstractManoptSolverState\n\nAn abstract type for solvers that involve a subsolver.\n\n\n\n\n\n","category":"type"},{"location":"plans/state/#Manopt.AbstractGradientSolverState","page":"Solver State","title":"Manopt.AbstractGradientSolverState","text":"AbstractGradientSolverState <: AbstractManoptSolverState\n\nA generic AbstractManoptSolverState type for gradient based options data.\n\nIt assumes that\n\nthe iterate is stored in the field p\nthe gradient at p is stored in X.\n\nSee also\n\nGradientDescentState, StochasticGradientDescentState, SubGradientMethodState, QuasiNewtonState.\n\n\n\n\n\n","category":"type"},{"location":"plans/state/#Manopt.AbstractHessianSolverState","page":"Solver State","title":"Manopt.AbstractHessianSolverState","text":"AbstractHessianSolverState <: AbstractGradientSolverState\n\nAn AbstractManoptSolverState type to represent algorithms that employ the Hessian. 
These options are assumed to have a field (gradient) to store the current gradient operatornamegradf(x)\n\n\n\n\n\n","category":"type"},{"location":"plans/state/#Manopt.AbstractPrimalDualSolverState","page":"Solver State","title":"Manopt.AbstractPrimalDualSolverState","text":"AbstractPrimalDualSolverState\n\nA general type for all primal dual based options to be used within primal dual based algorithms\n\n\n\n\n\n","category":"type"},{"location":"plans/state/","page":"Solver State","title":"Solver State","text":"For the sub problem state, there are two access functions","category":"page"},{"location":"plans/state/","page":"Solver State","title":"Solver State","text":"get_sub_problem\nget_sub_state","category":"page"},{"location":"plans/state/#Manopt.get_sub_problem","page":"Solver State","title":"Manopt.get_sub_problem","text":"get_sub_problem(ams::AbstractSubProblemSolverState)\n\nAccess the sub problem of a solver state that involves a sub optimisation task. By default this returns ams.sub_problem.\n\n\n\n\n\n","category":"function"},{"location":"plans/state/#Manopt.get_sub_state","page":"Solver State","title":"Manopt.get_sub_state","text":"get_sub_state(ams::AbstractSubProblemSolverState)\n\nAccess the sub state of a solver state that involves a sub optimisation task. By default this returns ams.sub_state.\n\n\n\n\n\n","category":"function"},{"location":"about/#About","page":"About","title":"About","text":"","category":"section"},{"location":"about/","page":"About","title":"About","text":"Manopt.jl inherited its name from Manopt, a Matlab toolbox for optimization on manifolds. 
This Julia package was started and is currently maintained by Ronny Bergmann.","category":"page"},{"location":"about/#Contributors","page":"About","title":"Contributors","text":"","category":"section"},{"location":"about/","page":"About","title":"About","text":"Thanks to the following contributors to Manopt.jl:","category":"page"},{"location":"about/","page":"About","title":"About","text":"Constantin Ahlmann-Eltze implemented the gradient and differential check functions.\nRenée Dornig implemented the particle swarm, the Riemannian Augmented Lagrangian Method, the Exact Penalty Method, as well as the NonmonotoneLinesearch. These solvers are also the first ones with modular/exchangeable sub solvers.\nWillem Diepeveen implemented the primal-dual Riemannian semismooth Newton solver.\nHajg Jasa implemented the convex bundle method and the proximal bundle method, as well as a default subsolver for each of them.\nEven Stephansen Kjemsås contributed to the implementation of the Frank Wolfe Method solver.\nMathias Ravn Munkvold contributed most of the implementation of the Adaptive Regularization with Cubics solver as well as its Lanczos subsolver.\nTom-Christian Riemer implemented the trust regions and quasi Newton solvers as well as the truncated conjugate gradient descent subsolver.\nMarkus A. Stokkenes contributed most of the implementation of the Interior Point Newton Method as well as its default Conjugate Residual subsolver.\nManuel Weiss implemented most of the conjugate gradient update rules.","category":"page"},{"location":"about/","page":"About","title":"About","text":"as well as various contributors providing small extensions, finding small bugs and mistakes and fixing them by opening PRs. 
Thanks to all of you.","category":"page"},{"location":"about/","page":"About","title":"About","text":"If you want to contribute a manifold or algorithm or have any questions, visit the GitHub repository to clone/fork the repository or open an issue.","category":"page"},{"location":"about/#Work-using-Manopt.jl","page":"About","title":"Work using Manopt.jl","text":"","category":"section"},{"location":"about/","page":"About","title":"About","text":"ExponentialFamilyProjection.jl package uses Manopt.jl to project arbitrary functions onto the closest exponential family distributions. The package also integrates with RxInfer.jl to enable Bayesian inference in a larger set of probabilistic models.\nCaesar.jl within non-Gaussian factor graph inference algorithms","category":"page"},{"location":"about/","page":"About","title":"About","text":"Is a package missing? Open an issue! It would be great to collect anything and anyone using Manopt.jl","category":"page"},{"location":"about/#Further-packages","page":"About","title":"Further packages","text":"","category":"section"},{"location":"about/","page":"About","title":"About","text":"Manopt.jl belongs to the Manopt family:","category":"page"},{"location":"about/","page":"About","title":"About","text":"manopt.org The Matlab version of Manopt, see also their :octocat: GitHub repository\npymanopt.org The Python version of Manopt providing also several AD backends, see also their :octocat: GitHub repository","category":"page"},{"location":"about/","page":"About","title":"About","text":"but there are also more packages providing tools on manifolds in other languages","category":"page"},{"location":"about/","page":"About","title":"About","text":"Jax Geometry (Python/Jax) for differential geometry and stochastic dynamics with deep learning\nGeomstats (Python with several backends) focusing on statistics and machine learning :octocat: GitHub repository\nGeoopt (Python & PyTorch) Riemannian ADAM & SGD. 
:octocat: GitHub repository\nMcTorch (Python & PyTorch) Riemannian SGD, Adagrad, ASA & CG.\nROPTLIB (C++) a Riemannian OPTimization LIBrary :octocat: GitHub repository\nTF Riemopt (Python & TensorFlow) Riemannian optimization using TensorFlow","category":"page"},{"location":"tutorials/GeodesicRegression/#How-to-perform-Geodesic-Regression","page":"Do geodesic regression","title":"How to perform Geodesic Regression","text":"","category":"section"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"Ronny Bergmann","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"Geodesic regression generalizes linear regression to Riemannian manifolds. Let’s first phrase it informally as follows:","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"For given data points d_1ldotsd_n on a Riemannian manifold mathcal M, find the geodesic that “best explains” the data.","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"The meaning of “best explain” still has to be clarified. 
We distinguish two cases: time labelled data and unlabelled data.","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":" using Manopt, ManifoldDiff, Manifolds, Random, Colors\n using LinearAlgebra: svd\n Random.seed!(42);","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"We use the following data, where we want to highlight one of the points.","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"n = 7\nσ = π / 8\nS = Sphere(2)\nbase = 1 / sqrt(2) * [1.0, 0.0, 1.0]\ndir = [-0.75, 0.5, 0.75]\ndata_orig = [exp(S, base, dir, t) for t in range(-0.5, 0.5; length=n)]\n# add noise to the points on the geodesic\ndata = map(p -> exp(S, p, rand(S; vector_at=p, σ=σ)), data_orig)\nhighlighted = 4;","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"(Image: The given data)","category":"page"},{"location":"tutorials/GeodesicRegression/#Time-Labeled-Data","page":"Do geodesic regression","title":"Time Labeled Data","text":"","category":"section"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"If for each data item d_i we are also given a time point t_iinmathbb R, and these are pairwise different, then we can use the least squares error to state the objective function as [Fle13]","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"F(pX) = frac12sum_i=1^n d_mathcal M^2(γ_pX(t_i) d_i)","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"where d_mathcal M is the Riemannian distance and γ_pX is the geodesic with γ(0) = p and 
dotgamma(0) = X.","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"For the real-valued case mathcal M = mathbb R^m the solution (p^* X^*) is given in closed form as follows: with d^* = frac1ndisplaystylesum_i=1^nd_i and t^* = frac1ndisplaystylesum_i=1^n t_i we get","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":" X^* = fracsum_i=1^n (d_i-d^*)(t-t^*)sum_i=1^n (t_i-t^*)^2\nquadtext and quad\np^* = d^* - t^*X^*","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"and hence the linear regression result is the line γ_p^*X^*(t) = p^* + tX^*.","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"On a Riemannian manifold we can phrase this as an optimization problem on the tangent bundle, which is the disjoint union of all tangent spaces, as","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"operatorname*argmin_(pX) in mathrmTmathcal M F(pX)","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"Due to linearity, the gradient of F(pX) is the sum of the single gradients of","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":" frac12d_mathcal M^2bigl(γ_pX(t_i)d_ibigr)\n = frac12d_mathcal M^2bigl(exp_p(t_iX)d_ibigr)\n quad i1ldotsn","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"which can be computed using a chain rule of the squared distance and the exponential map, see for example [BG18] for details or 
Equations (7) and (8) of [Fle13]:","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"M = TangentBundle(S)\nstruct RegressionCost{T,S}\n data::T\n times::S\nend\nRegressionCost(data::T, times::S) where {T,S} = RegressionCost{T,S}(data, times)\nfunction (a::RegressionCost)(M, x)\n pts = [geodesic(M.manifold, x[M, :point], x[M, :vector], ti) for ti in a.times]\n return 1 / 2 * sum(distance.(Ref(M.manifold), pts, a.data) .^ 2)\nend\nstruct RegressionGradient!{T,S}\n data::T\n times::S\nend\nfunction RegressionGradient!(data::T, times::S) where {T,S}\n return RegressionGradient!{T,S}(data, times)\nend\nfunction (a::RegressionGradient!)(M, Y, x)\n pts = [geodesic(M.manifold, x[M, :point], x[M, :vector], ti) for ti in a.times]\n gradients = grad_distance.(Ref(M.manifold), a.data, pts)\n Y[M, :point] .= sum(\n ManifoldDiff.adjoint_differential_exp_basepoint.(\n Ref(M.manifold),\n Ref(x[M, :point]),\n [ti * x[M, :vector] for ti in a.times],\n gradients,\n ),\n )\n Y[M, :vector] .= sum(\n ManifoldDiff.adjoint_differential_exp_argument.(\n Ref(M.manifold),\n Ref(x[M, :point]),\n [ti * x[M, :vector] for ti in a.times],\n gradients,\n ),\n )\n return Y\nend","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"For the Euclidean case, the result is given by the first principal component of a principal component analysis, see PCR which is given by p^* = frac1ndisplaystylesum_i=1^n d_i and the direction X^* is obtained by defining the zero mean data matrix","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"D = bigl(d_1-p^* ldots d_n-p^*bigr) in mathbb R^mn","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"and taking X^* as an eigenvector to the 
largest eigenvalue of D^mathrmTD.","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"We can do something similar, when considering the tangent space at the (Riemannian) mean of the data and then do a PCA on the coordinate coefficients with respect to a basis.","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"m = mean(S, data)\nA = hcat(\n map(x -> get_coordinates(S, m, log(S, m, x), DefaultOrthonormalBasis()), data)...\n)\npca1 = get_vector(S, m, svd(A).U[:, 1], DefaultOrthonormalBasis())\nx0 = ArrayPartition(m, pca1)","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"([0.6998621681746481, -0.013681674945026638, 0.7141468737791822], [0.5931302057517893, -0.5459465115717783, -0.5917254139611094])","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"The optimal “time labels” are then just the projections t_i = d_iX^*, i=1ldotsn.","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"t = map(d -> inner(S, m, pca1, log(S, m, d)), data)","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"7-element Vector{Float64}:\n 1.0763904949888323\n 0.4594060193318443\n -0.5030195874833682\n 0.02135686940521725\n -0.6158692507563633\n -0.24431652575028764\n -0.2259012492666664","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"And we can call the gradient descent. Note that since gradF! 
works in place of Y, we have to set the evaluation type accordingly.","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"y = gradient_descent(\n M,\n RegressionCost(data, t),\n RegressionGradient!(data, t),\n x0;\n evaluation=InplaceEvaluation(),\n stepsize=ArmijoLinesearch(\n M;\n initial_stepsize=1.0,\n contraction_factor=0.990,\n sufficient_decrease=0.05,\n stop_when_stepsize_less=1e-9,\n ),\n stopping_criterion=StopAfterIteration(200) |\n StopWhenGradientNormLess(1e-8) |\n StopWhenStepsizeLess(1e-9),\n debug=[:Iteration, \" | \", :Cost, \"\\n\", :Stop, 50],\n)","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"Initial | f(x): 0.142862\n# 50 | f(x): 0.141113\n# 100 | f(x): 0.141113\n# 150 | f(x): 0.141113\n# 200 | f(x): 0.141113\nThe algorithm reached its maximal number of iterations (200).\n\n([0.7119768725361988, 0.009463059143003981, 0.7021391482357537], [0.590008151835008, -0.5543272518659472, -0.5908038715512287])","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"For the result, we can generate and plot all involved geodesics","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"dense_t = range(-0.5, 0.5; length=100)\ngeo = geodesic(S, y[M, :point], y[M, :vector], dense_t)\ninit_geo = geodesic(S, x0[M, :point], x0[M, :vector], dense_t)\ngeo_pts = geodesic(S, y[M, :point], y[M, :vector], t)\ngeo_conn_highlighted = shortest_geodesic(\n S, data[highlighted], geo_pts[highlighted], 0.5 .+ dense_t\n);","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"(Image: Result of Geodesic 
Regression)","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"In this image, together with the blue data points, you see the geodesic of the initialization in black (evaluated on -frac12frac12), the final point on the tangent bundle in orange, as well as the resulting regression geodesic in teal (on the same interval as the start), as well as small teal points indicating the time points on the geodesic corresponding to the data. Additionally, a thin blue line indicates the geodesic between a data point and its corresponding point on the geodesic. While in Euclidean space this would be the closest point, and hence the two directions (along the geodesic vs. towards the data point) would be orthogonal, here we have","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"inner(\n S,\n geo_pts[highlighted],\n log(S, geo_pts[highlighted], geo_pts[highlighted + 1]),\n log(S, geo_pts[highlighted], data[highlighted]),\n)","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"0.002487393068917863","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"But we also started with one of the best scenarios of equally spaced points on a geodesic obstructed by noise.","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"This gets worse if you start with less evenly distributed data","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"data2 = [exp(S, base, dir, t) for t in [-0.5, -0.49, -0.48, 0.1, 0.48, 0.49, 0.5]]\ndata2 = map(p -> exp(S, p, rand(S; vector_at=p, σ=σ / 2)), data2)\nm2 = mean(S, 
data2)\nA2 = hcat(\n map(x -> get_coordinates(S, m, log(S, m, x), DefaultOrthonormalBasis()), data2)...\n)\npca2 = get_vector(S, m, svd(A2).U[:, 1], DefaultOrthonormalBasis())\nx1 = ArrayPartition(m, pca2)\nt2 = map(d -> inner(S, m2, pca2, log(S, m2, d)), data2)","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"7-element Vector{Float64}:\n 0.8226008307680276\n 0.470952643700004\n 0.7974195537403082\n 0.01533949241264346\n -0.6546705405852389\n -0.8913273825362389\n -0.5775954445730889","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"then we run again","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"y2 = gradient_descent(\n M,\n RegressionCost(data2, t2),\n RegressionGradient!(data2, t2),\n x1;\n evaluation=InplaceEvaluation(),\n stepsize=ArmijoLinesearch(\n M;\n initial_stepsize=1.0,\n contraction_factor=0.990,\n sufficient_decrease=0.05,\n stop_when_stepsize_less=1e-9,\n ),\n stopping_criterion=StopAfterIteration(200) |\n StopWhenGradientNormLess(1e-8) |\n StopWhenStepsizeLess(1e-9),\n debug=[:Iteration, \" | \", :Cost, \"\\n\", :Stop, 3],\n);","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"Initial | f(x): 0.089844\n# 3 | f(x): 0.085364\n# 6 | f(x): 0.085364\n# 9 | f(x): 0.085364\n# 12 | f(x): 0.085364\n# 15 | f(x): 0.085364\n# 18 | f(x): 0.085364\n# 21 | f(x): 0.085364\n# 24 | f(x): 0.085364\n# 27 | f(x): 0.085364\n# 30 | f(x): 0.085364\n# 33 | f(x): 0.085364\n# 36 | f(x): 0.085364\n# 39 | f(x): 0.085364\n# 42 | f(x): 0.085364\n# 45 | f(x): 0.085364\n# 48 | f(x): 0.085364\n# 51 | f(x): 0.085364\n# 54 | f(x): 0.085364\n# 57 | f(x): 0.085364\n# 60 | f(x): 0.085364\n# 63 | f(x): 0.085364\n# 66 | f(x): 0.085364\n# 69 | 
f(x): 0.085364\n# 72 | f(x): 0.085364\n# 75 | f(x): 0.085364\n# 78 | f(x): 0.085364\n# 81 | f(x): 0.085364\n# 84 | f(x): 0.085364\n# 87 | f(x): 0.085364\n# 90 | f(x): 0.085364\n# 93 | f(x): 0.085364\n# 96 | f(x): 0.085364\n# 99 | f(x): 0.085364\n# 102 | f(x): 0.085364\n# 105 | f(x): 0.085364\n# 108 | f(x): 0.085364\n# 111 | f(x): 0.085364\n# 114 | f(x): 0.085364\n# 117 | f(x): 0.085364\n# 120 | f(x): 0.085364\n# 123 | f(x): 0.085364\n# 126 | f(x): 0.085364\n# 129 | f(x): 0.085364\n# 132 | f(x): 0.085364\n# 135 | f(x): 0.085364\n# 138 | f(x): 0.085364\n# 141 | f(x): 0.085364\n# 144 | f(x): 0.085364\n# 147 | f(x): 0.085364\n# 150 | f(x): 0.085364\n# 153 | f(x): 0.085364\n# 156 | f(x): 0.085364\n# 159 | f(x): 0.085364\n# 162 | f(x): 0.085364\n# 165 | f(x): 0.085364\n# 168 | f(x): 0.085364\n# 171 | f(x): 0.085364\n# 174 | f(x): 0.085364\n# 177 | f(x): 0.085364\n# 180 | f(x): 0.085364\n# 183 | f(x): 0.085364\n# 186 | f(x): 0.085364\n# 189 | f(x): 0.085364\n# 192 | f(x): 0.085364\n# 195 | f(x): 0.085364\n# 198 | f(x): 0.085364\nThe algorithm reached its maximal number of iterations (200).","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"For plotting we again generate all data","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"geo2 = geodesic(S, y2[M, :point], y2[M, :vector], dense_t)\ninit_geo2 = geodesic(S, x1[M, :point], x1[M, :vector], dense_t)\ngeo_pts2 = geodesic(S, y2[M, :point], y2[M, :vector], t2)\ngeo_conn_highlighted2 = shortest_geodesic(\n S, data2[highlighted], geo_pts2[highlighted], 0.5 .+ dense_t\n);","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"(Image: A second result with different time points)","category":"page"},{"location":"tutorials/GeodesicRegression/#Unlabeled-Data","page":"Do geodesic 
regression","title":"Unlabeled Data","text":"","category":"section"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"If we are not given time points t_i, then the optimization problem extends, informally speaking, to also finding the “best fitting” time points (in the sense of smallest error). To formalize, the objective function here reads","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"F(p X t) = frac12sum_i=1^n d_mathcal M^2(γ_pX(t_i) d_i)","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"where t = (t_1ldotst_n) in mathbb R^n is now an additional parameter of the objective function. We write F_1(p X) to refer to the function on the tangent bundle for fixed values of t (as the one in the last part) and F_2(t) for the function F(p X t) as a function in t with fixed values (p X).","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"For the Euclidean case, there is no necessity to optimize with respect to t, as we saw above for the initialization of the fixed time points.","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"On a Riemannian manifold this can be stated as a problem on the product manifold mathcal N = mathrmTmathcal M times mathbb R^n, i.e.","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"N = M × Euclidean(length(t2))","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"ProductManifold with 2 submanifolds:\n TangentBundle(Sphere(2, ℝ))\n Euclidean(7; 
field=ℝ)","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":" operatorname*argmin_bigl((pX)tbigr)inmathcal N F(p X t)","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"In this tutorial we present an approach to solve this using an alternating gradient descent scheme. To be precise, we define the cost function now on the product manifold","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"struct RegressionCost2{T}\n data::T\nend\nRegressionCost2(data::T) where {T} = RegressionCost2{T}(data)\nfunction (a::RegressionCost2)(N, x)\n TM = N[1]\n pts = [\n geodesic(TM.manifold, x[N, 1][TM, :point], x[N, 1][TM, :vector], ti) for\n ti in x[N, 2]\n ]\n return 1 / 2 * sum(distance.(Ref(TM.manifold), pts, a.data) .^ 2)\nend","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"The gradient is given in two parts, namely (a) the same gradient as before w.r.t. 
(pX) Tmathcal M, just now with a fixed t in mind for the second component of the product manifold mathcal N","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"struct RegressionGradient2a!{T}\n data::T\nend\nRegressionGradient2a!(data::T) where {T} = RegressionGradient2a!{T}(data)\nfunction (a::RegressionGradient2a!)(N, Y, x)\n TM = N[1]\n p = x[N, 1]\n pts = [geodesic(TM.manifold, p[TM, :point], p[TM, :vector], ti) for ti in x[N, 2]]\n gradients = Manopt.grad_distance.(Ref(TM.manifold), a.data, pts)\n Y[TM, :point] .= sum(\n ManifoldDiff.adjoint_differential_exp_basepoint.(\n Ref(TM.manifold),\n Ref(p[TM, :point]),\n [ti * p[TM, :vector] for ti in x[N, 2]],\n gradients,\n ),\n )\n Y[TM, :vector] .= sum(\n ManifoldDiff.adjoint_differential_exp_argument.(\n Ref(TM.manifold),\n Ref(p[TM, :point]),\n [ti * p[TM, :vector] for ti in x[N, 2]],\n gradients,\n ),\n )\n return Y\nend","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"Finally, for a fixed x=(pX) mathrmTmathcal M we additionally look at the gradient with respect to tmathbb R^n, the second component, which is given by","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":" (operatornamegradF_2(t))_i\n = - dot γ_pX(t_i) log_γ_pX(t_i)d_i_γ_pX(t_i) i = 1 ldots n","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"struct RegressionGradient2b!{T}\n data::T\nend\nRegressionGradient2b!(data::T) where {T} = RegressionGradient2b!{T}(data)\nfunction (a::RegressionGradient2b!)(N, Y, x)\n TM = N[1]\n p = x[N, 1]\n pts = [geodesic(TM.manifold, p[TM, :point], p[TM, :vector], ti) for ti in x[N, 2]]\n logs = log.(Ref(TM.manifold), pts, a.data)\n pt = map(\n d -> 
vector_transport_to(TM.manifold, p[TM, :point], p[TM, :vector], d), pts\n )\n Y .= -inner.(Ref(TM.manifold), pts, logs, pt)\n return Y\nend","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"We can reuse the computed initial values from before, just that now we are on a product manifold","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"x2 = ArrayPartition(x1, t2)\nF3 = RegressionCost2(data2)\ngradF3_vector = [RegressionGradient2a!(data2), RegressionGradient2b!(data2)];","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"and we run the algorithm","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"y3 = alternating_gradient_descent(\n N,\n F3,\n gradF3_vector,\n x2;\n evaluation=InplaceEvaluation(),\n debug=[:Iteration, \" | \", :Cost, \"\\n\", :Stop, 50],\n stepsize=ArmijoLinesearch(\n M;\n contraction_factor=0.999,\n sufficient_decrease=0.066,\n stop_when_stepsize_less=1e-11,\n retraction_method=ProductRetraction(SasakiRetraction(2), ExponentialRetraction()),\n ),\n inner_iterations=1,\n)","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"Initial | f(x): 0.089844\n# 50 | f(x): 0.091097\n# 100 | f(x): 0.091097\nThe algorithm reached its maximal number of iterations (100).\n\n(ArrayPartition{Float64, Tuple{Vector{Float64}, Vector{Float64}}}(([0.750222090700214, 0.031464227399200885, 0.6604368380243274], [0.6636489079535082, -0.3497538263293046, -0.737208025444054])), [0.7965909273713889, 0.43402264218923514, 0.755822122896529, 0.001059348203453764, -0.6421135044471217, -0.8635572995105818, 
-0.5546338813212247])","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"which we can render into an image, creating the geodesics again","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"geo3 = geodesic(S, y3[N, 1][M, :point], y3[N, 1][M, :vector], dense_t)\ninit_geo3 = geodesic(S, x1[M, :point], x1[M, :vector], dense_t)\ngeo_pts3 = geodesic(S, y3[N, 1][M, :point], y3[N, 1][M, :vector], y3[N, 2])\nt3 = y3[N, 2]\ngeo_conns = shortest_geodesic.(Ref(S), data2, geo_pts3, Ref(0.5 .+ 4*dense_t));","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"which yields","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"(Image: The third result)","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"Note that the geodesics from the data to the regression geodesic meet at a nearly orthogonal angle.","category":"page"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"Acknowledgement. Parts of this tutorial are based on the bachelor thesis of Jeremias Arf.","category":"page"},{"location":"tutorials/GeodesicRegression/#Literature","page":"Do geodesic regression","title":"Literature","text":"","category":"section"},{"location":"tutorials/GeodesicRegression/","page":"Do geodesic regression","title":"Do geodesic regression","text":"R. Bergmann and P.-Y. Gousenbourger. A variational model for data fitting on manifolds by minimizing the acceleration of a Bézier curve. Frontiers in Applied Mathematics and Statistics 4 (2018), arXiv:1807.10090.\n\n\n\nP. T. Fletcher. 
Geodesic regression and the theory of least squares on Riemannian manifolds. International Journal of Computer Vision 105, 171–185 (2013).\n\n\n\n","category":"page"},{"location":"solvers/FrankWolfe/#Frank—Wolfe-method","page":"Frank-Wolfe","title":"Frank—Wolfe method","text":"","category":"section"},{"location":"solvers/FrankWolfe/","page":"Frank-Wolfe","title":"Frank-Wolfe","text":"CurrentModule = Manopt","category":"page"},{"location":"solvers/FrankWolfe/","page":"Frank-Wolfe","title":"Frank-Wolfe","text":"Frank_Wolfe_method\nFrank_Wolfe_method!","category":"page"},{"location":"solvers/FrankWolfe/#Manopt.Frank_Wolfe_method","page":"Frank-Wolfe","title":"Manopt.Frank_Wolfe_method","text":"Frank_Wolfe_method(M, f, grad_f, p=rand(M))\nFrank_Wolfe_method(M, gradient_objective, p=rand(M); kwargs...)\nFrank_Wolfe_method!(M, f, grad_f, p; kwargs...)\nFrank_Wolfe_method!(M, gradient_objective, p; kwargs...)\n\nPerform the Frank-Wolfe algorithm to compute for mathcal C mathcal M the constrained problem\n\n operatorname*argmin_pmathcal C f(p)\n\nwhere the main step is a constrained optimisation within the algorithm, that is the sub problem (Oracle)\n\n operatorname*argmin_q C operatornamegrad f(p_k) log_p_kq\n\nfor every iterate p_k together with a stepsize s_k1. 
The algorithm can be performed in-place of p.\n\nThis algorithm is inspired by but slightly more general than [WS22].\n\nThe next iterate is then given by p_k+1 = γ_p_kq_k(s_k), where by default γ is the shortest geodesic between the two points but can also be changed to use a retraction and its inverse.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\ngrad_f: the (Riemannian) gradient operatornamegradf: \\mathcal M → T_{p}\\mathcal M of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place\np: a point on the manifold mathcal M\n\nAlternatively to f and grad_f you can provide the corresponding AbstractManifoldGradientObjective gradient_objective directly.\n\nKeyword arguments\n\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstepsize=DecreasingStepsize(; length=2.0, shift=2): a functor inheriting from Stepsize to determine a step size\nstopping_criterion=StopAfterIteration(500)|StopWhenGradientNormLess(1.0e-6): a functor indicating that the stopping criterion is fulfilled\nsub_cost=FrankWolfeCost(p, X): the cost of the Frank-Wolfe sub problem. This is used to define the sub_objective= keyword and has hence no effect, if you set sub_objective directly.\nsub_grad=FrankWolfeGradient(p, X): the gradient of the Frank-Wolfe sub problem. This is used to define the sub_objective= keyword and has hence no effect, if you set sub_objective directly.\nsub_kwargs=(;): a named tuple of keyword arguments that are passed to decorate_objective! 
of the sub solver's objective, the decorate_state! of the sub solver's state, and the sub state constructor itself.\nsub_objective=ManifoldGradientObjective(sub_cost, sub_gradient): the objective for the Frank-Wolfe sub problem. This is used to define the sub_problem= keyword and has hence no effect, if you set sub_problem directly.\nsub_problem=DefaultManoptProblem(M, sub_objective): specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state=GradientDescentState(M, copy(M,p)): a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.\nX=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal M storing the gradient at the current iterate\nsub_stopping_criterion=StopAfterIteration(300)|StopWhenStepsizeLess(1e-8): a functor indicating that the stopping criterion is fulfilled. This is used to define the sub_state= keyword and has hence no effect, if you set sub_state directly.\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nIf you provide the ManifoldGradientObjective directly, the evaluation= keyword is ignored. 
The decorations are still applied to the objective.\n\nOutput\n\nthe obtained (approximate) minimizer p^*, see get_solver_return for details\n\n\n\n\n\n","category":"function"},{"location":"solvers/FrankWolfe/#Manopt.Frank_Wolfe_method!","page":"Frank-Wolfe","title":"Manopt.Frank_Wolfe_method!","text":"Frank_Wolfe_method(M, f, grad_f, p=rand(M))\nFrank_Wolfe_method(M, gradient_objective, p=rand(M); kwargs...)\nFrank_Wolfe_method!(M, f, grad_f, p; kwargs...)\nFrank_Wolfe_method!(M, gradient_objective, p; kwargs...)\n\nPerform the Frank-Wolfe algorithm to compute for mathcal C mathcal M the constrained problem\n\n operatorname*argmin_pmathcal C f(p)\n\nwhere the main step is a constrained optimisation within the algorithm, that is the sub problem (Oracle)\n\n operatorname*argmin_q C operatornamegrad f(p_k) log_p_kq\n\nfor every iterate p_k together with a stepsize s_k1. The algorithm can be performed in-place of p.\n\nThis algorithm is inspired by but slightly more general than [WS22].\n\nThe next iterate is then given by p_k+1 = γ_p_kq_k(s_k), where by default γ is the shortest geodesic between the two points but can also be changed to use a retraction and its inverse.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\ngrad_f: the (Riemannian) gradient operatornamegradf: \\mathcal M → T_{p}\\mathcal M of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place\np: a point on the manifold mathcal M\n\nAlternatively to f and grad_f you can provide the corresponding AbstractManifoldGradientObjective gradient_objective directly.\n\nKeyword arguments\n\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). 
Since usually the first argument is the manifold, the modified argument is the second.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstepsize=DecreasingStepsize(; length=2.0, shift=2): a functor inheriting from Stepsize to determine a step size\nstopping_criterion=StopAfterIteration(500)|StopWhenGradientNormLess(1.0e-6): a functor indicating that the stopping criterion is fulfilled\nsub_cost=FrankWolfeCost(p, X): the cost of the Frank-Wolfe sub problem. This is used to define the sub_objective= keyword and has hence no effect, if you set sub_objective directly.\nsub_grad=FrankWolfeGradient(p, X): the gradient of the Frank-Wolfe sub problem. This is used to define the sub_objective= keyword and has hence no effect, if you set sub_objective directly.\nsub_kwargs=(;): a named tuple of keyword arguments that are passed to decorate_objective! of the sub solver's objective, the decorate_state! of the sub solver's state, and the sub state constructor itself.\nsub_objective=ManifoldGradientObjective(sub_cost, sub_gradient): the objective for the Frank-Wolfe sub problem. This is used to define the sub_problem= keyword and has hence no effect, if you set sub_problem directly.\nsub_problem=DefaultManoptProblem(M, sub_objective): specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state=GradientDescentState(M, copy(M,p)): a state to specify the sub solver to use. 
For a closed form solution, this indicates the type of function.\nX=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal M storing the gradient at the current iterate\nsub_stopping_criterion=StopAfterIteration(300)|StopWhenStepsizeLess(1e-8): a functor indicating that the stopping criterion is fulfilled. This is used to define the sub_state= keyword and has hence no effect, if you set sub_state directly.\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nIf you provide the ManifoldGradientObjective directly, the evaluation= keyword is ignored. The decorations are still applied to the objective.\n\nOutput\n\nthe obtained (approximate) minimizer p^*, see get_solver_return for details\n\n\n\n\n\n","category":"function"},{"location":"solvers/FrankWolfe/#State","page":"Frank-Wolfe","title":"State","text":"","category":"section"},{"location":"solvers/FrankWolfe/","page":"Frank-Wolfe","title":"Frank-Wolfe","text":"FrankWolfeState","category":"page"},{"location":"solvers/FrankWolfe/#Manopt.FrankWolfeState","page":"Frank-Wolfe","title":"Manopt.FrankWolfeState","text":"FrankWolfeState <: AbstractManoptSolverState\n\nA struct to store the current state of the Frank_Wolfe_method\n\nIt comes in two forms, depending on the realisation of the subproblem.\n\nFields\n\np::P: a point on the manifold mathcal M storing the current iterate\nX::T: a tangent vector at the point p on the manifold mathcal M storing the gradient at the current iterate\ninverse_retraction_method::AbstractInverseRetractionMethod: an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nvector_transport_method::AbstractVectorTransportMethodP: a vector transport mathcal T_ to use, see the section on 
vector transports\nsub_problem::Union{AbstractManoptProblem, F}: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state::Union{AbstractManoptSolverState, F}: a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.\nstop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled\nstepsize::Stepsize: a functor inheriting from Stepsize to determine a step size\nretraction_method::AbstractRetractionMethod: a retraction operatornameretr to use, see the section on retractions\n\nThe sub task requires a method to solve\n\n operatorname*argmin_q C operatornamegrad f(p_k) log_p_kq\n\nConstructor\n\nFrankWolfeState(M, sub_problem, sub_state; kwargs...)\n\nInitialise the Frank Wolfe method state.\n\nFrankWolfeState(M, sub_problem; evaluation=AllocatingEvaluation(), kwargs...)\n\nInitialise the Frank Wolfe method state, where sub_problem is a closed form solution with evaluation as type of evaluation.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nsub_problem: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state: a state to specify the sub solver to use. 
For a closed form solution, this indicates the type of function.\n\nKeyword arguments\n\np=rand(M): a point on the manifold mathcal Mto specify the initial value\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstopping_criterion=StopAfterIteration(200)|StopWhenGradientNormLess(1e-6): a functor indicating that the stopping criterion is fulfilled\nstepsize=default_stepsize(M, FrankWolfeState): a functor inheriting from Stepsize to determine a step size\nX=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal Mto specify the representation of a tangent vector\n\nwhere the remaining fields from before are keyword arguments.\n\n\n\n\n\n","category":"type"},{"location":"solvers/FrankWolfe/#Helpers","page":"Frank-Wolfe","title":"Helpers","text":"","category":"section"},{"location":"solvers/FrankWolfe/","page":"Frank-Wolfe","title":"Frank-Wolfe","text":"For the inner sub-problem you can easily create the corresponding cost and gradient using","category":"page"},{"location":"solvers/FrankWolfe/","page":"Frank-Wolfe","title":"Frank-Wolfe","text":"FrankWolfeCost\nFrankWolfeGradient","category":"page"},{"location":"solvers/FrankWolfe/#Manopt.FrankWolfeCost","page":"Frank-Wolfe","title":"Manopt.FrankWolfeCost","text":"FrankWolfeCost{P,T}\n\nA structure to represent the oracle sub problem in the Frank_Wolfe_method. 
The cost function reads\n\nF(q) = X log_p q\n\nThe values p and X are stored within this functor and should be references to the iterate and gradient from within FrankWolfeState.\n\n\n\n\n\n","category":"type"},{"location":"solvers/FrankWolfe/#Manopt.FrankWolfeGradient","page":"Frank-Wolfe","title":"Manopt.FrankWolfeGradient","text":"FrankWolfeGradient{P,T}\n\nA structure to represent the gradient of the oracle sub problem in the Frank_Wolfe_method, that is for a given point p and a tangent vector X the function reads\n\nF(q) = X log_p q\n\nIts gradient can be computed easily using adjoint_differential_log_argument.\n\nThe values p and X are stored within this functor and should be references to the iterate and gradient from within FrankWolfeState.\n\n\n\n\n\n","category":"type"},{"location":"solvers/FrankWolfe/","page":"Frank-Wolfe","title":"Frank-Wolfe","text":"M. Weber and S. Sra. Riemannian Optimization via Frank-Wolfe Methods. Mathematical Programming 199, 525–556 (2022).\n\n\n\n","category":"page"},{"location":"tutorials/ImplementASolver/#How-to-implementing-your-own-solver","page":"Implement a solver","title":"How to implement your own solver","text":"","category":"section"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"Ronny Bergmann","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"When you have used a few solvers from Manopt.jl for example like in the opening tutorial Get started: optimize! you might come to the idea of implementing a solver yourself.","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"After a short introduction of the algorithm we aim to implement, this tutorial first discusses the structural details, for example what a solver consists of and “works with”. Afterwards, we show how to implement the algorithm. 
Finally, we discuss how to make the algorithm both nice for the user as well as initialized in a way that it can benefit from features already available in Manopt.jl.","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"note: Note\nIf you have implemented your own solver, we would be very happy to have that within Manopt.jl as well, so maybe consider opening a Pull Request","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"using Manopt, Manifolds, Random","category":"page"},{"location":"tutorials/ImplementASolver/#Our-guiding-example:-a-random-walk-minimization","page":"Implement a solver","title":"Our guiding example: a random walk minimization","text":"","category":"section"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"Since most serious algorithms should be implemented in Manopt.jl themselves directly, we implement a solver that randomly walks on the manifold and keeps track of the lowest point visited. 
As for algorithms in Manopt.jl we aim to implement this generically for any manifold that is implemented using ManifoldsBase.jl.","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"The random walk minimization","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"Given:","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"a manifold mathcal M\na starting point p=p^(0)\na cost function f mathcal M ℝ.\na parameter sigma 0.\na retraction operatornameretr_p(X) that maps X T_pmathcal M to the manifold.","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"We can run the following steps of the algorithm","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"set k=0\nset our best point q = p^(0)\nRepeat until a stopping criterion is fulfilled\nChoose a random tangent vector X^(k) T_p^(k)mathcal M of length lVert X^(k) rVert = sigma\n“Walk” along this direction, that is p^(k+1) = operatornameretr_p^(k)(X^(k))\nIf f(p^(k+1)) f(q) set q = p^{(k+1)}$ as our new best visited point\nReturn q as the resulting best point we visited","category":"page"},{"location":"tutorials/ImplementASolver/#Preliminaries:-elements-a-solver-works-on","page":"Implement a solver","title":"Preliminaries: elements a solver works on","text":"","category":"section"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"There are two main ingredients a solver needs: a problem to work on and the state of a solver, which “identifies” the solver and stores intermediate 
results.","category":"page"},{"location":"tutorials/ImplementASolver/#Specifying-the-task:-an-AbstractManoptProblem","page":"Implement a solver","title":"Specifying the task: an AbstractManoptProblem","text":"","category":"section"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"A problem in Manopt.jl usually consists of a manifold (an AbstractManifold) and an AbstractManifoldObjective describing the function we have and its features. In our case the objective is (just) a ManifoldCostObjective that stores cost function f(M,p) -> R. More generally, it might for example store a gradient function or the Hessian or any other information we have about our task.","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"This is something independent of the solver itself, since it only identifies the problem we want to solve independent of how we want to solve it, or in other words, this type contains all information that is static and independent of the specific solver at hand.","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"Usually the problems variable is called mp.","category":"page"},{"location":"tutorials/ImplementASolver/#Specifying-a-solver:-an-AbstractManoptSolverState","page":"Implement a solver","title":"Specifying a solver: an AbstractManoptSolverState","text":"","category":"section"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"Everything that is needed by a solver during the iterations, all its parameters, interim values that are needed beyond just one iteration, is stored in a subtype of the AbstractManoptSolverState. 
This identifies the solver uniquely.","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"In our case we want to store five things","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"the current iterate p=p^(k)\nthe best visited point q\nthe variable sigma 0\nthe retraction operatornameretr to use (cf. retractions and inverse retractions)\na criterion when to stop: a StoppingCriterion","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"We can define this as","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"mutable struct RandomWalkState{\n P,\n R<:AbstractRetractionMethod,\n S<:StoppingCriterion,\n} <: AbstractManoptSolverState\n p::P\n q::P\n σ::Float64\n retraction_method::R\n stop::S\nend","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"The stopping criterion is usually stored in the state’s stop field. If you have a reason to do otherwise, you have one more function to implement (see next section). 
For ease of use, a constructor can be provided that, for example, chooses a good default for the retraction based on a given manifold.","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"function RandomWalkState(M::AbstractManifold, p::P=rand(M);\n σ = 0.1,\n retraction_method::R=default_retraction_method(M, typeof(p)),\n stopping_criterion::S=StopAfterIteration(200)\n) where {P, R<:AbstractRetractionMethod, S<:StoppingCriterion}\n return RandomWalkState{P,R,S}(p, copy(M, p), σ, retraction_method, stopping_criterion)\nend","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"Parametrising the state avoids abstractly typed fields. The keyword arguments for the retraction and stopping criterion are the ones usually used in Manopt.jl and provide an easy way to construct this state now.","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"States usually have a shortened name as their variable, we use rws for our state here.","category":"page"},{"location":"tutorials/ImplementASolver/#Implementing-your-solver","page":"Implement a solver","title":"Implementing your solver","text":"","category":"section"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"There are basically only a few methods we need to implement for our solver","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"initialize_solver!(mp, rws) which initialises the solver before the first iteration\nstep_solver!(mp, rws, i) which implements the ith iteration, where i is given to you as the third parameter\nget_iterate(rws) which accesses the iterate from other places in the solver\nget_solver_result(rws) returning the solver's final (best) point we reached. 
By default this would return the last iterate rws.p (or more precisely calls get_iterate), but since we randomly walk and remember our best point in q, this has to return rws.q.","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"The first two functions are in-place functions, that is they modify our solver state rws. You implement these by multiple dispatch on the types after importing said functions from Manopt:","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"import Manopt: initialize_solver!, step_solver!, get_iterate, get_solver_result","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"The state we defined before has two fields where we use the common names used in Manopt.jl, that is the StoppingCriterion is usually in stop and the iterate in p. If your choice is different, you need to reimplement","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"stop_solver!(mp, rws, i) to determine whether or not to stop after the ith iteration.\nget_iterate(rws) to access the current iterate","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"We recommend to follow the general scheme with the stop field. 
If you have specific criteria when to stop, consider implementing your own stopping criterion instead.","category":"page"},{"location":"tutorials/ImplementASolver/#Initialization-and-iterate-access","page":"Implement a solver","title":"Initialization and iterate access","text":"","category":"section"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"For our solver, there is not so much to initialize, just to be safe we should copy over the initial value in p we start with, to q. We do not have to care about remembering the iterate, that is done by Manopt.jl. For the iterate access we just have to pass p.","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"function initialize_solver!(mp::AbstractManoptProblem, rws::RandomWalkState)\n M = get_manifold(mp) # the manifold is stored within the problem\n copyto!(M, rws.q, rws.p) # initialize the best point q = p^{(0)}\n return rws\nend\nget_iterate(rws::RandomWalkState) = rws.p\nget_solver_result(rws::RandomWalkState) = rws.q","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"and similarly we implement the step. Here we make use of the fact that the problem (and also the objective in fact) have access functions for their elements, the one we need is get_cost.","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"function step_solver!(mp::AbstractManoptProblem, rws::RandomWalkState, i)\n M = get_manifold(mp) # for ease of use get the manifold from the problem\n X = rand(M; vector_at=rws.p) # generate a direction\n X .*= rws.σ/norm(M, rws.p, X)\n # Walk\n retract!(M, rws.p, rws.p, X, rws.retraction_method)\n # is the new point better? 
Then store it\n if get_cost(mp, rws.p) < get_cost(mp, rws.q)\n copyto!(M, rws.q, rws.p)\n end\n return rws\nend","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"Performance-wise, we could improve the number of allocations by making X also a field of our rws but let’s keep it simple here. We could also store the cost of q in the state, but we shall see how to easily also enable this solver to allow for caching. In practice, however, it is preferable to cache intermediate values like the cost of q in the state when it can be easily achieved. This way we do not have to deal with overheads of an external cache.","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"Now we can just run the solver already. We take the same example as for the other tutorials","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"We first define our task, the Riemannian Center of Mass from the Get started: optimize! 
tutorial.","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"Random.seed!(23)\nn = 100\nσ = π / 8\nM = Sphere(2)\np = 1 / sqrt(2) * [1.0, 0.0, 1.0]\ndata = [exp(M, p, σ * rand(M; vector_at=p)) for i in 1:n];\nf(M, p) = sum(1 / (2 * n) * distance.(Ref(M), Ref(p), data) .^ 2)","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"We can now generate the problem with its objective and the state","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"mp = DefaultManoptProblem(M, ManifoldCostObjective(f))\ns = RandomWalkState(M; σ = 0.2)\n\nsolve!(mp, s)\nget_solver_result(s)","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"3-element Vector{Float64}:\n -0.2412674850987521\n 0.8608618657176527\n -0.44800317943876844","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"The function solve! 
works in place of s, but the last line illustrates how to access the result in general; we could also just look at s.p, but the function get_iterate is also used in several other places.","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"We could for example easily set up a second solver to work from a specified starting point with a different σ like","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"s2 = RandomWalkState(M, [1.0, 0.0, 0.0]; σ = 0.1)\nsolve!(mp, s2)\nget_solver_result(s2)","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"3-element Vector{Float64}:\n 1.0\n 0.0\n 0.0","category":"page"},{"location":"tutorials/ImplementASolver/#Ease-of-use-I:-a-high-level-interface","page":"Implement a solver","title":"Ease of use I: a high level interface","text":"","category":"section"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"Manopt.jl offers a few additional features for solvers in their high level interfaces, for example the debug= and record= keywords for debug output and recording within solver states, or the count= and cache= keywords for the objective.","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"We can introduce these here as well with just a few lines of code. There are usually two steps. 
We further need three internal functions from Manopt.jl","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"using Manopt: get_solver_return, indicates_convergence, status_summary","category":"page"},{"location":"tutorials/ImplementASolver/#A-high-level-interface-using-the-objective","page":"Implement a solver","title":"A high level interface using the objective","text":"","category":"section"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"This could be considered as an interim step to the high-level interface: if the objective, a ManifoldCostObjective, is already initialized, the high level interface consists of the steps","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"possibly decorate the objective\ngenerate the problem\ngenerate and possibly decorate the state\ncall the solver\ndetermine the return value","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"We illustrate these steps with an in-place variant here. A variant that keeps the given start point unchanged would just add a copy(M, p) upfront. 
Manopt.jl provides both variants.","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"function random_walk_algorithm!(\n M::AbstractManifold,\n mgo::ManifoldCostObjective,\n p;\n σ = 0.1,\n retraction_method::AbstractRetractionMethod=default_retraction_method(M, typeof(p)),\n stopping_criterion::StoppingCriterion=StopAfterIteration(200),\n kwargs...,\n)\n dmgo = decorate_objective!(M, mgo; kwargs...)\n dmp = DefaultManoptProblem(M, dmgo)\n s = RandomWalkState(M, p;\n σ=σ,\n retraction_method=retraction_method, stopping_criterion=stopping_criterion,\n )\n ds = decorate_state!(s; kwargs...)\n solve!(dmp, ds)\n return get_solver_return(get_objective(dmp), ds)\nend","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"random_walk_algorithm! (generic function with 1 method)","category":"page"},{"location":"tutorials/ImplementASolver/#The-high-level-interface","page":"Implement a solver","title":"The high level interface","text":"","category":"section"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"Starting from the last section, the usual call a user would prefer is just passing a manifold M, the cost f, and maybe a start point p.","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"function random_walk_algorithm!(M::AbstractManifold, f, p=rand(M); kwargs...)\n mgo = ManifoldCostObjective(f)\n return random_walk_algorithm!(M, mgo, p; kwargs...)\nend","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"random_walk_algorithm! 
(generic function with 3 methods)","category":"page"},{"location":"tutorials/ImplementASolver/#Ease-of-Use-II:-the-state-summary","page":"Implement a solver","title":"Ease of Use II: the state summary","text":"","category":"section"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"For the case that you set return_state=true the solver should return a summary of the run. When a show method is provided, users can easily read such a summary in a terminal. It should reflect its main parameters, if they are not too verbose, and provide information about the reason it stopped and whether this indicates convergence.","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"Here it would for example look like","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"import Base: show\nfunction show(io::IO, rws::RandomWalkState)\n i = get_count(rws, :Iterations)\n Iter = (i > 0) ? \"After $i iterations\\n\" : \"\"\n Conv = indicates_convergence(rws.stop) ? \"Yes\" : \"No\"\n s = \"\"\"\n # Solver state for `Manopt.jl`s Tutorial Random Walk\n $Iter\n ## Parameters\n * retraction method: $(rws.retraction_method)\n * σ : $(rws.σ)\n\n ## Stopping criterion\n\n $(status_summary(rws.stop))\n This indicates convergence: $Conv\"\"\"\n return print(io, s)\nend","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"Now the algorithm can be easily called and provides all features of a Manopt.jl algorithm. 
For example to see the summary, we could now just call","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"q = random_walk_algorithm!(M, f; return_state=true)","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"# Solver state for `Manopt.jl`s Tutorial Random Walk\nAfter 200 iterations\n\n## Parameters\n* retraction method: ExponentialRetraction()\n* σ : 0.1\n\n## Stopping criterion\n\nMax Iteration 200: reached\nThis indicates convergence: No","category":"page"},{"location":"tutorials/ImplementASolver/#Conclusion-and-beyond","page":"Implement a solver","title":"Conclusion & beyond","text":"","category":"section"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"We saw in this tutorial how to implement a simple cost-based algorithm, to illustrate how optimization algorithms are covered in Manopt.jl.","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"One feature we did not cover is that most algorithms allow for in-place and allocation functions, as soon as they work on more than just the cost, for example use gradients, proximal maps or Hessians. This is usually a keyword argument of the objective and hence also part of the high-level interfaces.","category":"page"},{"location":"tutorials/ImplementASolver/#Technical-details","page":"Implement a solver","title":"Technical details","text":"","category":"section"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"This tutorial is cached. 
It was last run on the following package versions.","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"using Pkg\nPkg.status()","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"Status `~/work/Manopt.jl/Manopt.jl/tutorials/Project.toml`\n [6e4b80f9] BenchmarkTools v1.5.0\n⌅ [5ae59095] Colors v0.12.11\n [31c24e10] Distributions v0.25.113\n [26cc04aa] FiniteDifferences v0.12.32\n [7073ff75] IJulia v1.26.0\n [8ac3fa9e] LRUCache v1.6.1\n [af67fdf4] ManifoldDiff v0.3.13\n [1cead3c2] Manifolds v0.10.7\n [3362f125] ManifoldsBase v0.15.22\n [0fc0a36d] Manopt v0.5.3 `~/work/Manopt.jl/Manopt.jl`\n [91a5bcdd] Plots v1.40.9\n [731186ca] RecursiveArrayTools v3.27.4\nInfo Packages marked with ⌅ have new versions available but compatibility constraints restrict them from upgrading. To see why use `status --outdated`","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"using Dates\nnow()","category":"page"},{"location":"tutorials/ImplementASolver/","page":"Implement a solver","title":"Implement a solver","text":"2024-11-21T20:38:57.087","category":"page"},{"location":"tutorials/HowToDebug/#How-to-print-debug-output","page":"Print debug output","title":"How to print debug output","text":"","category":"section"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"Ronny Bergmann","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"This tutorial aims to illustrate how to perform debug output. 
For that we consider an example that includes a subsolver, to also consider their debug capabilities.","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"The problem itself is hence not the main focus. We consider a nonnegative PCA which we can write as a constraint problem on the Sphere","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"Let’s first load the necessary packages.","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"using Manopt, Manifolds, Random, LinearAlgebra\nRandom.seed!(42);","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"d = 4\nM = Sphere(d - 1)\nv0 = project(M, [ones(2)..., zeros(d - 2)...])\nZ = v0 * v0'\n#Cost and gradient\nf(M, p) = -tr(transpose(p) * Z * p) / 2\ngrad_f(M, p) = project(M, p, -transpose.(Z) * p / 2 - Z * p / 2)\n# Constraints\ng(M, p) = -p # now p ≥ 0\nmI = -Matrix{Float64}(I, d, d)\n# Vector of gradients of the constraint components\ngrad_g(M, p) = [project(M, p, mI[:, i]) for i in 1:d]","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"Then we can take a starting point","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"p0 = project(M, [ones(2)..., zeros(d - 3)..., 0.1])","category":"page"},{"location":"tutorials/HowToDebug/#Simple-debug-output","page":"Print debug output","title":"Simple debug output","text":"","category":"section"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"Any solver accepts the keyword debug=, which in the simplest case can be set to an array of strings, symbols and a 
number.","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"Strings are printed in every iteration as is (cf. DebugDivider) and should be used to finish the array with a line break.\nthe last number in the array is used with DebugEvery to print the debug only every ith iteration.\nAny Symbol is converted into certain debug prints","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"Certain symbols starting with a capital letter are mapped to certain prints, for example :Cost is mapped to DebugCost() to print the current cost function value. A full list is provided in the DebugActionFactory. A special keyword is :Stop, which is only added to the final debug hook to print the stopping criterion.","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"Any symbol with a small letter is mapped to fields of the AbstractManoptSolverState which is used. This way you can easily print internal data, if you know their names.","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"Let’s look at an example first: if we want to print the current iteration number, the current cost function value as well as the value ϵ from the ExactPenaltyMethodState. 
To keep the amount of print at a reasonable level, we want to only print the debug every twenty-fifth iteration.","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"Then we can write","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"p1 = exact_penalty_method(\n M, f, grad_f, p0; g=g, grad_g=grad_g,\n debug = [:Iteration, :Cost, \" | \", (:ϵ,\"ϵ: %.8f\"), 25, \"\\n\", :Stop]\n);","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"Initial f(x): -0.497512 | ϵ: 0.00100000\n# 25 f(x): -0.499449 | ϵ: 0.00017783\n# 50 f(x): -0.499996 | ϵ: 0.00003162\n# 75 f(x): -0.500000 | ϵ: 0.00000562\n# 100 f(x): -0.500000 | ϵ: 0.00000100\nThe value of the variable (ϵ) is smaller than or equal to its threshold (1.0e-6).\nAt iteration 102 the algorithm performed a step with a change (4.2533629774851707e-7) less than 1.0e-6.","category":"page"},{"location":"tutorials/HowToDebug/#Specifying-when-to-print-something","page":"Print debug output","title":"Specifying when to print something","text":"","category":"section"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"While in the last step we specified what to print, this can be extended to even specify when to print it. Currently the following four “places” are available, ordered by when they appear in an algorithm run.","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":":Start to print something at the start of the algorithm. At this place all other (the following) places are “reset”, by triggering each of them with an iteration number 0\n:BeforeIteration to print something before an iteration starts\n:Iteration to print something after an iteration. 
For example the group of prints from the last code block [:Iteration, :Cost, \" | \", :ϵ, 25,] is added to this entry.\n:Stop to print something when the algorithm stops. In the example, the :Stop adds the DebugStoppingCriterion to this place.","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"Specifying something especially for one of these places is done by specifying a Pair, so for example :BeforeIteration => :Iteration would add the display of the iteration number to be printed before the iteration is performed.","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"Changing this in the run does not change the output. Being more precise for the other entries, we could also write","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"p1 = exact_penalty_method(\n M, f, grad_f, p0; g=g, grad_g=grad_g,\n debug = [\n :BeforeIteration => [:Iteration],\n :Iteration => [:Cost, \" | \", :ϵ, \"\\n\"],\n :Stop => DebugStoppingCriterion(),\n 25,\n ],\n);","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"Initial f(x): -0.497512 | ϵ: 0.001\n# 25 f(x): -0.499449 | ϵ: 0.0001778279410038921\n# 50 f(x): -0.499996 | ϵ: 3.1622776601683734e-5\n# 75 f(x): -0.500000 | ϵ: 5.623413251903474e-6\n# 100 f(x): -0.500000 | ϵ: 1.0e-6\nThe value of the variable (ϵ) is smaller than or equal to its threshold (1.0e-6).\nAt iteration 102 the algorithm performed a step with a change (4.2533629774851707e-7) less than 1.0e-6.","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"This also illustrates that, instead of symbols, we can also always pass down a DebugAction directly, for example when there is a reason to create or configure the action more 
individually than the default from the symbol. Note that the number (25) yields that all but :Start and :Stop are only displayed every twenty-fifth iteration.","category":"page"},{"location":"tutorials/HowToDebug/#Subsolver-debug","page":"Print debug output","title":"Subsolver debug","text":"","category":"section"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"Subsolvers have a sub_kwargs keyword, such that you can pass keywords to the sub solver as well. This works well if you do not plan to change the subsolver. If you do, you can wrap your own solver_state= argument in a decorate_state! and pass a debug= keyword to this function call. Keywords within a keyword like sub_kwargs have to be passed as pairs (:debug => [...]).","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"For most debugs, there further exists a longer form to specify the format to print. We want to use this to specify the format to print ϵ. This is done by putting the corresponding symbol together with the string to use in formatting into a tuple like (:ϵ,\" | ϵ: %.8f\"), where we can already include the divider as well.","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"A main problem now is that this debug is issued every sub solver call or initialisation, as the following print of just a . 
per sub solver test/call illustrates","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"p3 = exact_penalty_method(\n M, f, grad_f, p0; g=g, grad_g=grad_g,\n debug = [\"\\n\",:Iteration, DebugCost(), (:ϵ,\" | ϵ: %.8f\"), 25, \"\\n\", :Stop],\n sub_kwargs = [:debug => [\".\"]]\n);","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"Initial f(x): -0.497512 | ϵ: 0.00100000\n....................................................................................\n# 25 f(x): -0.499449 | ϵ: 0.00017783\n.......................................................................\n# 50 f(x): -0.499996 | ϵ: 0.00003162\n..................................................\n# 75 f(x): -0.500000 | ϵ: 0.00000562\n..................................................\n# 100 f(x): -0.500000 | ϵ: 0.00000100\n....The value of the variable (ϵ) is smaller than or equal to its threshold (1.0e-6).\nAt iteration 102 the algorithm performed a step with a change (4.2533629774851707e-7) less than 1.0e-6.","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"The different lengths of the dotted lines come from the fact that —at least in the beginning— the subsolver performs a few steps and each subsolvers step prints a dot.","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"For this issue, there is the next symbol (similar to the :Stop) to indicate that a debug set is a subsolver set :WhenActive, which introduces a DebugWhenActive that is only activated when the outer debug is actually active, or inother words DebugEvery is active itself. Furthermore, we want to print the iteration number before printing the subsolvers steps, so we put this into a Pair, but we can leave the remaining ones as single entries. 
Finally we also prefix :Stop with \" | \" and print the iteration number at the time we stop. We get","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"p4 = exact_penalty_method(\n M,\n f,\n grad_f,\n p0;\n g=g,\n grad_g=grad_g,\n debug=[\n :BeforeIteration => [:Iteration, \"\\n\"],\n :Iteration => [DebugCost(), (:ϵ, \" | ϵ: %.8f\"), \"\\n\"],\n :Stop,\n 25,\n ],\n sub_kwargs=[\n :debug => [\n \" | \",\n :Iteration,\n :Cost,\n \"\\n\",\n :WhenActive,\n :Stop => [(:Stop, \" | \"), \" | stopped after iteration \", :Iteration, \"\\n\"],\n ],\n ],\n);","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"Initial \nf(x): -0.497512 | ϵ: 0.00100000\n | Initial f(x): -0.498944\n | # 1 f(x): -0.498969\n | The algorithm reached approximately critical point after 1 iterations; the gradient norm (3.4995246389869776e-5) is less than 0.001.\n | stopped after iteration # 1 \n# 25 \nf(x): -0.499449 | ϵ: 0.00017783\n | Initial f(x): -0.499992\n | # 1 f(x): -0.499992\n | # 2 f(x): -0.499992\n | The algorithm reached approximately critical point after 2 iterations; the gradient norm (0.00027436723916614346) is less than 0.001.\n | stopped after iteration # 2 \n# 50 \nf(x): -0.499996 | ϵ: 0.00003162\n | Initial f(x): -0.500000\n | # 1 f(x): -0.500000\n | The algorithm reached approximately critical point after 1 iterations; the gradient norm (5.000404403277298e-6) is less than 0.001.\n | stopped after iteration # 1 \n# 75 \nf(x): -0.500000 | ϵ: 0.00000562\n | Initial f(x): -0.500000\n | # 1 f(x): -0.500000\n | The algorithm reached approximately critical point after 1 iterations; the gradient norm (4.202215558182483e-6) is less than 0.001.\n | stopped after iteration # 1 \n# 100 \nf(x): -0.500000 | ϵ: 0.00000100\nThe value of the variable (ϵ) is smaller than or equal to its threshold (1.0e-6).\nAt iteration 102 the algorithm performed a step with a 
change (4.2533629774851707e-7) less than 1.0e-6.","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"where we now see that the subsolver always only requires one step. Note that since debug of an iteration is happening after a step, we see the sub solver run before the debug for an iteration number.","category":"page"},{"location":"tutorials/HowToDebug/#Advanced-debug-output","page":"Print debug output","title":"Advanced debug output","text":"","category":"section"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"There are two more advanced variants that can be used. The first is a tuple of a symbol and a string, where the string is used as the format print that most DebugActions have. The second is to directly provide a DebugAction.","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"We can for example change the way the :ϵ is printed by adding a format string and use DebugCost() which is equivalent to using :Cost. 
Especially with the format change, the lines are more consistent in length.","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"p2 = exact_penalty_method(\n M, f, grad_f, p0; g=g, grad_g=grad_g,\n debug = [:Iteration, DebugCost(), (:ϵ,\" | ϵ: %.8f\"), 25, \"\\n\", :Stop]\n);","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"Initial f(x): -0.497512 | ϵ: 0.00100000\n# 25 f(x): -0.499449 | ϵ: 0.00017783\n# 50 f(x): -0.499996 | ϵ: 0.00003162\n# 75 f(x): -0.500000 | ϵ: 0.00000562\n# 100 f(x): -0.500000 | ϵ: 0.00000100\nThe value of the variable (ϵ) is smaller than or equal to its threshold (1.0e-6).\nAt iteration 102 the algorithm performed a step with a change (4.2533629774851707e-7) less than 1.0e-6.","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"You can also write your own DebugAction functor, where the function to implement has the same signature as the step function, that is an AbstractManoptProblem, an AbstractManoptSolverState, as well as the current iterate. 
For example the already mentioned DebugDivider(s) is given as","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"mutable struct DebugDivider{TIO<:IO} <: DebugAction\n io::TIO\n divider::String\n DebugDivider(divider=\" | \"; io::IO=stdout) = new{typeof(io)}(io, divider)\nend\nfunction (d::DebugDivider)(::AbstractManoptProblem, ::AbstractManoptSolverState, k::Int)\n (k >= 0) && (!isempty(d.divider)) && (print(d.io, d.divider))\n return nothing\nend","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"or you could implement that of course just for your specific problem or state.","category":"page"},{"location":"tutorials/HowToDebug/#Technical-details","page":"Print debug output","title":"Technical details","text":"","category":"section"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"This tutorial is cached. It was last run on the following package versions.","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"using Pkg\nPkg.status()","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"Status `~/work/Manopt.jl/Manopt.jl/tutorials/Project.toml`\n [6e4b80f9] BenchmarkTools v1.5.0\n⌅ [5ae59095] Colors v0.12.11\n [31c24e10] Distributions v0.25.113\n [26cc04aa] FiniteDifferences v0.12.32\n [7073ff75] IJulia v1.26.0\n [8ac3fa9e] LRUCache v1.6.1\n [af67fdf4] ManifoldDiff v0.3.13\n [1cead3c2] Manifolds v0.10.7\n [3362f125] ManifoldsBase v0.15.22\n [0fc0a36d] Manopt v0.5.3 `~/work/Manopt.jl/Manopt.jl`\n [91a5bcdd] Plots v1.40.9\n [731186ca] RecursiveArrayTools v3.27.4\nInfo Packages marked with ⌅ have new versions available but compatibility constraints restrict them from upgrading. 
To see why use `status --outdated`","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"using Dates\nnow()","category":"page"},{"location":"tutorials/HowToDebug/","page":"Print debug output","title":"Print debug output","text":"2024-11-21T20:38:05.714","category":"page"},{"location":"solvers/particle_swarm/#Particle-swarm-optimization","page":"Particle Swarm Optimization","title":"Particle swarm optimization","text":"","category":"section"},{"location":"solvers/particle_swarm/","page":"Particle Swarm Optimization","title":"Particle Swarm Optimization","text":"CurrentModule = Manopt","category":"page"},{"location":"solvers/particle_swarm/","page":"Particle Swarm Optimization","title":"Particle Swarm Optimization","text":" particle_swarm\n particle_swarm!","category":"page"},{"location":"solvers/particle_swarm/#Manopt.particle_swarm","page":"Particle Swarm Optimization","title":"Manopt.particle_swarm","text":"particle_swarm(M, f; kwargs...)\nparticle_swarm(M, f, swarm; kwargs...)\nparticle_swarm(M, mco::AbstractManifoldCostObjective; kwargs...)\nparticle_swarm(M, mco::AbstractManifoldCostObjective, swarm; kwargs...)\nparticle_swarm!(M, f, swarm; kwargs...)\nparticle_swarm!(M, mco::AbstractManifoldCostObjective, swarm; kwargs...)\n\nperform the particle swarm optimization algorithm (PSO) to solve\n\noperatornameargmin_p mathcal M f(p)\n\nPSO starts with an initial swarm [BIA10] of points on the manifold. If no swarm is provided, the swarm_size keyword is used to generate random points. The computation can be performed in-place of swarm.\n\nTo this end, a swarm S = s_1 ldots s_n of particles is moved around the manifold M in the following manner. 
For every particle s_k^(i) the new particle velocities X_k^(i) are computed in every step i of the algorithm by\n\nX_k^(i) = ω mathcal T_s_k^(i)s_k^(i-1) X_k^(i-1) + c r_1 operatornameretr^-1_s_k^(i)(p_k^(i)) + s r_2 operatornameretr^-1_s_k^(i)(p)\n\nwhere\n\ns_k^(i) is the current particle position,\nω denotes the inertia,\nc and s are a cognitive and a social weight, respectively,\nr_j, j=12 are random factors which are computed new for each particle and step\n\\mathcal T_{⋅←⋅} is a vector transport, and\n\\operatorname{retr}^{-1} is an inverse retraction\n\nThen the position of the particle is updated as\n\ns_k^(i+1) = operatornameretr_s_k^(i)(X_k^(i))\n\nThen the single particles best entries p_k^(i) are updated as\n\np_k^(i+1) = begincases\ns_k^(i+1) textif F(s_k^(i+1))F(p_k^(i))\np_k^(i) textelse\nendcases\n\nand the global best position\n\ng^(i+1) = begincases\np_k^(i+1) textif F(p_k^(i+1))F(g_k^(i))\ng_k^(i) textelse\nendcases\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\nswarm = [rand(M) for _ in 1:swarm_size]: an initial swarm of points.\n\nInstead of a cost function f you can also provide an AbstractManifoldCostObjective mco.\n\nKeyword Arguments\n\ncognitive_weight=1.4: a cognitive weight factor\ninertia=0.65: the inertia of the particles\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nsocial_weight=1.4: a social weight factor\nswarm_size=100: swarm size, if it should be generated randomly\nstopping_criterion=StopAfterIteration(500)|StopWhenChangeLess(1e-4): a functor indicating that the stopping criterion is fulfilled\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ 
to use, see the section on vector transports\nvelocity: a set of tangent vectors (of type AbstractVector{T}) representing the velocities of the particles, per default a random tangent vector per initial position\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively. If you provide the objective directly, these decorations can still be specified\n\nOutput\n\nThe obtained approximate minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/particle_swarm/#Manopt.particle_swarm!","page":"Particle Swarm Optimization","title":"Manopt.particle_swarm!","text":"particle_swarm(M, f; kwargs...)\nparticle_swarm(M, f, swarm; kwargs...)\nparticle_swarm(M, mco::AbstractManifoldCostObjective; kwargs...)\nparticle_swarm(M, mco::AbstractManifoldCostObjective, swarm; kwargs...)\nparticle_swarm!(M, f, swarm; kwargs...)\nparticle_swarm!(M, mco::AbstractManifoldCostObjective, swarm; kwargs...)\n\nperform the particle swarm optimization algorithm (PSO) to solve\n\noperatornameargmin_p mathcal M f(p)\n\nPSO starts with an initial swarm [BIA10] of points on the manifold. If no swarm is provided, the swarm_size keyword is used to generate random points. The computation can be performed in-place of swarm.\n\nTo this end, a swarm S = s_1 ldots s_n of particles is moved around the manifold M in the following manner. 
For every particle s_k^(i) the new particle velocities X_k^(i) are computed in every step i of the algorithm by\n\nX_k^(i) = ω mathcal T_s_k^(i)s_k^(i-1) X_k^(i-1) + c r_1 operatornameretr^-1_s_k^(i)(p_k^(i)) + s r_2 operatornameretr^-1_s_k^(i)(p)\n\nwhere\n\ns_k^(i) is the current particle position,\nω denotes the inertia,\nc and s are a cognitive and a social weight, respectively,\nr_j, j=12 are random factors which are computed anew for each particle and step,\n\\mathcal T_{⋅←⋅} is a vector transport, and\n\\operatorname{retr}^{-1} is an inverse retraction\n\nThen the position of the particle is updated as\n\ns_k^(i+1) = operatornameretr_s_k^(i)(X_k^(i))\n\nThen the single particles' best entries p_k^(i) are updated as\n\np_k^(i+1) = begincases\ns_k^(i+1) textif F(s_k^(i+1))F(p_k^(i))\np_k^(i) textelse\nendcases\n\nand the global best position\n\ng^(i+1) = begincases\np_k^(i+1) textif F(p_k^(i+1))F(g_k^(i))\ng_k^(i) textelse\nendcases\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\nswarm = [rand(M) for _ in 1:swarm_size]: an initial swarm of points.\n\nInstead of a cost function f you can also provide an AbstractManifoldCostObjective mco.\n\nKeyword Arguments\n\ncognitive_weight=1.4: a cognitive weight factor\ninertia=0.65: the inertia of the particles\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nsocial_weight=1.4: a social weight factor\nswarm_size=100: swarm size, if it should be generated randomly\nstopping_criterion=StopAfterIteration(500)|StopWhenChangeLess(1e-4): a functor indicating that the stopping criterion is fulfilled\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ 
to use, see the section on vector transports\nvelocity: a set of tangent vectors (of type AbstractVector{T}) representing the velocities of the particles, per default a random tangent vector per initial position\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively. If you provide the objective directly, these decorations can still be specified.\n\nOutput\n\nThe obtained approximate minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/particle_swarm/#State","page":"Particle Swarm Optimization","title":"State","text":"","category":"section"},{"location":"solvers/particle_swarm/","page":"Particle Swarm Optimization","title":"Particle Swarm Optimization","text":"ParticleSwarmState","category":"page"},{"location":"solvers/particle_swarm/#Manopt.ParticleSwarmState","page":"Particle Swarm Optimization","title":"Manopt.ParticleSwarmState","text":"ParticleSwarmState{P,T} <: AbstractManoptSolverState\n\nDescribes a particle swarm optimizing algorithm, with\n\nFields\n\ncognitive_weight: a cognitive weight factor\ninertia: the inertia of the particles\ninverse_retraction_method::AbstractInverseRetractionMethod: an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nretraction_method::AbstractRetractionMethod: a retraction operatornameretr to use, see the section on retractions\nsocial_weight: a social weight factor\nstop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled\nvector_transport_method::AbstractVectorTransportMethodP: a vector transport mathcal T_ to use, see the section on vector transports\nvelocity: a set of tangent vectors (of type AbstractVector{T}) representing the velocities of the particles\n\nInternal and temporary fields\n\ncognitive_vector: temporary 
storage for a tangent vector related to cognitive_weight\np::P: a point on the manifold mathcal M storing the best point visited by all particles\npositional_best: storing the best position p_i every single swarm participant visited\nq::P: a point on the manifold mathcal M serving as temporary storage for interim results; avoids allocations\nsocial_vec: temporary storage for a tangent vector related to social_weight\nswarm: a set of points (of type AbstractVector{P}) on a manifold a_i_i=1^N\n\nConstructor\n\nParticleSwarmState(M, initial_swarm, velocity; kwargs...)\n\nconstruct a particle swarm solver state for the manifold M starting with the initial population initial_swarm with velocities. The p used in the following defaults is the type of one point from the swarm.\n\nKeyword arguments\n\ncognitive_weight=1.4\ninertia=0.65\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nsocial_weight=1.4\nstopping_criterion=StopAfterIteration(500)|StopWhenChangeLess(1e-4): a functor indicating that the stopping criterion is fulfilled\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\nSee also\n\nparticle_swarm\n\n\n\n\n\n","category":"type"},{"location":"solvers/particle_swarm/#Stopping-criteria","page":"Particle Swarm Optimization","title":"Stopping criteria","text":"","category":"section"},{"location":"solvers/particle_swarm/","page":"Particle Swarm Optimization","title":"Particle Swarm Optimization","text":"StopWhenSwarmVelocityLess","category":"page"},{"location":"solvers/particle_swarm/#Manopt.StopWhenSwarmVelocityLess","page":"Particle Swarm 
Optimization","title":"Manopt.StopWhenSwarmVelocityLess","text":"StopWhenSwarmVelocityLess <: StoppingCriterion\n\nStopping criterion for particle_swarm, when the velocity of the swarm is less than a threshold.\n\nFields\n\nthreshold: the threshold\nat_iteration: store the iteration the stopping criterion was (last) fulfilled\nreason: store the reason why the stopping criterion was fulfilled, see get_reason\nvelocity_norms: interim vector to store the norms of the velocities before computing its norm\n\nConstructor\n\nStopWhenSwarmVelocityLess(tolerance::Float64)\n\ninitialize the stopping criterion to a certain tolerance.\n\n\n\n\n\n","category":"type"},{"location":"solvers/particle_swarm/#sec-arc-technical-details","page":"Particle Swarm Optimization","title":"Technical details","text":"","category":"section"},{"location":"solvers/particle_swarm/","page":"Particle Swarm Optimization","title":"Particle Swarm Optimization","text":"The particle_swarm solver requires the following functions of a manifold to be available","category":"page"},{"location":"solvers/particle_swarm/","page":"Particle Swarm Optimization","title":"Particle Swarm Optimization","text":"A retract!(M, q, p, X); it is recommended to set the default_retraction_method to a favourite retraction. If this default is set, a retraction_method= does not have to be specified.\nAn inverse_retract!(M, X, p, q); it is recommended to set the default_inverse_retraction_method to a favourite inverse retraction. If this default is set, an inverse_retraction_method= does not have to be specified.\nA vector_transport_to!(M, Y, p, X, q); it is recommended to set the default_vector_transport_method to a favourite vector transport. 
If this default is set, a vector_transport_method= does not have to be specified.\nBy default the stopping criterion uses the norm as well, to stop when the swarm velocity is small, but if you implemented inner, the norm is provided already.\nTangent vectors storing the social and cognitive vectors are initialized calling zero_vector(M, p).\nA copyto!(M, q, p) and copy(M, p) for points.\nThe distance(M, p, q) when using the default stopping criterion, which uses StopWhenChangeLess.","category":"page"},{"location":"solvers/particle_swarm/#Literature","page":"Particle Swarm Optimization","title":"Literature","text":"","category":"section"},{"location":"solvers/particle_swarm/","page":"Particle Swarm Optimization","title":"Particle Swarm Optimization","text":"P. B. Borckmans, M. Ishteva and P.-A. Absil. A Modified Particle Swarm Optimization Algorithm for the Best Low Multilinear Rank Approximation of Higher-Order Tensors. In: 7th International Conference on Swarm Intelligence (Springer Berlin Heidelberg, 2010); pp. 
13–23.\n\n\n\n","category":"page"},{"location":"solvers/stochastic_gradient_descent/#Stochastic-gradient-descent","page":"Stochastic Gradient Descent","title":"Stochastic gradient descent","text":"","category":"section"},{"location":"solvers/stochastic_gradient_descent/","page":"Stochastic Gradient Descent","title":"Stochastic Gradient Descent","text":"CurrentModule = Manopt","category":"page"},{"location":"solvers/stochastic_gradient_descent/","page":"Stochastic Gradient Descent","title":"Stochastic Gradient Descent","text":"stochastic_gradient_descent\nstochastic_gradient_descent!","category":"page"},{"location":"solvers/stochastic_gradient_descent/#Manopt.stochastic_gradient_descent","page":"Stochastic Gradient Descent","title":"Manopt.stochastic_gradient_descent","text":"stochastic_gradient_descent(M, grad_f, p=rand(M); kwargs...)\nstochastic_gradient_descent(M, msgo; kwargs...)\nstochastic_gradient_descent!(M, grad_f, p; kwargs...)\nstochastic_gradient_descent!(M, msgo, p; kwargs...)\n\nperform a stochastic gradient descent. This can be performed in-place of p.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\ngrad_f: a gradient function that either returns a vector of the gradients or is a vector of gradient functions\np: a point on the manifold mathcal M\n\nAlternatively to the gradient you can provide a ManifoldStochasticGradientObjective msgo; then the cost= keyword has no effect, since the cost is already part of the objective.\n\nKeyword arguments\n\ncost=missing: you can provide a cost function, for example to track the function value\ndirection=StochasticGradient(zero_vector(M, p))\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). 
Since usually the first argument is the manifold, the modified argument is the second.\nevaluation_order=:Random: specify whether to use a randomly permuted sequence (:FixedRandom), a per cycle permuted sequence (:Linear), or the default :Random one.\norder_type=:RandomOrder: a type of ordering of gradient evaluations. Possible values are :RandomOrder, :FixedPermutation, and :LinearOrder\nstopping_criterion=StopAfterIteration(1000): a functor indicating that the stopping criterion is fulfilled\nstepsize=default_stepsize(M, StochasticGradientDescentState): a functor inheriting from Stepsize to determine a step size\norder=[1:n]: the initial permutation, where n is the number of gradients in grad_f.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/stochastic_gradient_descent/#Manopt.stochastic_gradient_descent!","page":"Stochastic Gradient Descent","title":"Manopt.stochastic_gradient_descent!","text":"stochastic_gradient_descent(M, grad_f, p=rand(M); kwargs...)\nstochastic_gradient_descent(M, msgo; kwargs...)\nstochastic_gradient_descent!(M, grad_f, p; kwargs...)\nstochastic_gradient_descent!(M, msgo, p; kwargs...)\n\nperform a stochastic gradient descent. 
This can be performed in-place of p.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\ngrad_f: a gradient function that either returns a vector of the gradients or is a vector of gradient functions\np: a point on the manifold mathcal M\n\nAlternatively to the gradient you can provide a ManifoldStochasticGradientObjective msgo; then the cost= keyword has no effect, since the cost is already part of the objective.\n\nKeyword arguments\n\ncost=missing: you can provide a cost function, for example to track the function value\ndirection=StochasticGradient(zero_vector(M, p))\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\nevaluation_order=:Random: specify whether to use a randomly permuted sequence (:FixedRandom), a per cycle permuted sequence (:Linear), or the default :Random one.\norder_type=:RandomOrder: a type of ordering of gradient evaluations. Possible values are :RandomOrder, :FixedPermutation, and :LinearOrder\nstopping_criterion=StopAfterIteration(1000): a functor indicating that the stopping criterion is fulfilled\nstepsize=default_stepsize(M, StochasticGradientDescentState): a functor inheriting from Stepsize to determine a step size\norder=[1:n]: the initial permutation, where n is the number of gradients in grad_f.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. 
To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/stochastic_gradient_descent/#State","page":"Stochastic Gradient Descent","title":"State","text":"","category":"section"},{"location":"solvers/stochastic_gradient_descent/","page":"Stochastic Gradient Descent","title":"Stochastic Gradient Descent","text":"StochasticGradientDescentState\nManopt.default_stepsize(::AbstractManifold, ::Type{StochasticGradientDescentState})","category":"page"},{"location":"solvers/stochastic_gradient_descent/#Manopt.StochasticGradientDescentState","page":"Stochastic Gradient Descent","title":"Manopt.StochasticGradientDescentState","text":"StochasticGradientDescentState <: AbstractGradientDescentSolverState\n\nStore the following fields for a default stochastic gradient descent algorithm, see also ManifoldStochasticGradientObjective and stochastic_gradient_descent.\n\nFields\n\np::P: a point on the manifold mathcal M storing the current iterate\ndirection: a direction update to use\nstop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled\nstepsize::Stepsize: a functor inheriting from Stepsize to determine a step size\nevaluation_order: specify whether to use a randomly permuted sequence (:FixedRandom), a per cycle permuted sequence (:Linear) or the default, a :Random sequence.\norder: stores the current permutation\nretraction_method::AbstractRetractionMethod: a retraction operatornameretr to use, see the section on retractions\n\nConstructor\n\nStochasticGradientDescentState(M::AbstractManifold; kwargs...)\n\nCreate a StochasticGradientDescentState with start point p.\n\nKeyword arguments\n\ndirection=StochasticGradientRule(M, zero_vector(M, p))\norder_type=:RandomOrder\norder=Int[]: specify how to store the order of indices for the next 
epoch\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\np=rand(M): a point on the manifold mathcal M to specify the initial value\nstopping_criterion=StopAfterIteration(1000): a functor indicating that the stopping criterion is fulfilled\nstepsize=default_stepsize(M, StochasticGradientDescentState): a functor inheriting from Stepsize to determine a step size\nX=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal M to specify the representation of a tangent vector\n\n\n\n\n\n","category":"type"},{"location":"solvers/stochastic_gradient_descent/#Manopt.default_stepsize-Tuple{AbstractManifold, Type{StochasticGradientDescentState}}","page":"Stochastic Gradient Descent","title":"Manopt.default_stepsize","text":"default_stepsize(M::AbstractManifold, ::Type{StochasticGradientDescentState})\n\nDefine the default step size computed for the StochasticGradientDescentState, which is ConstantStepsize(M).\n\n\n\n\n\n","category":"method"},{"location":"solvers/stochastic_gradient_descent/","page":"Stochastic Gradient Descent","title":"Stochastic Gradient Descent","text":"Additionally, the options share a DirectionUpdateRule, so you can also apply MomentumGradient and AverageGradient here. 
The innermost one should always be the StochasticGradient.","category":"page"},{"location":"solvers/stochastic_gradient_descent/","page":"Stochastic Gradient Descent","title":"Stochastic Gradient Descent","text":"StochasticGradient","category":"page"},{"location":"solvers/stochastic_gradient_descent/#Manopt.StochasticGradient","page":"Stochastic Gradient Descent","title":"Manopt.StochasticGradient","text":"StochasticGradient(; kwargs...)\nStochasticGradient(M::AbstractManifold; kwargs...)\n\nKeyword arguments\n\ninitial_gradient=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal M\np=rand(M): a point on the manifold mathcal M to specify the initial value\n\ninfo: Info\nThis function generates a ManifoldDefaultsFactory for StochasticGradientRule. For default values that depend on the manifold, this factory postpones the construction until the manifold from, for example, a corresponding AbstractManoptSolverState is available.\n\n\n\n\n\n","category":"function"},{"location":"solvers/stochastic_gradient_descent/","page":"Stochastic Gradient Descent","title":"Stochastic Gradient Descent","text":"which internally uses","category":"page"},{"location":"solvers/stochastic_gradient_descent/","page":"Stochastic Gradient Descent","title":"Stochastic Gradient Descent","text":"AbstractGradientGroupDirectionRule\nStochasticGradientRule","category":"page"},{"location":"solvers/stochastic_gradient_descent/#Manopt.AbstractGradientGroupDirectionRule","page":"Stochastic Gradient Descent","title":"Manopt.AbstractGradientGroupDirectionRule","text":"AbstractStochasticGradientDescentSolverState <: AbstractManoptSolverState\n\nA generic type for all options related to gradient descent methods working with parts of the total gradient\n\n\n\n\n\n","category":"type"},{"location":"solvers/stochastic_gradient_descent/#Manopt.StochasticGradientRule","page":"Stochastic Gradient Descent","title":"Manopt.StochasticGradientRule","text":"StochasticGradientRule <: 
AbstractGradientGroupDirectionRule\n\nCreate a functor (problem, state, k) -> (s, X) to evaluate the stochastic gradient, that is, choose a random index from the state and use the internal field for the evaluation of the gradient in-place.\n\nThe default gradient processor, which just evaluates the (stochastic) gradient or a subset thereof.\n\nFields\n\nX::T: a tangent vector at the point p on the manifold mathcal M\n\nConstructor\n\nStochasticGradientRule(M::AbstractManifold; p=rand(M), X=zero_vector(M, p))\n\nInitialize the stochastic gradient processor with tangent vector type of X, where both M and p are just help variables.\n\nSee also\n\nstochastic_gradient_descent, StochasticGradient\n\n\n\n\n\n","category":"type"},{"location":"solvers/stochastic_gradient_descent/#sec-sgd-technical-details","page":"Stochastic Gradient Descent","title":"Technical details","text":"","category":"section"},{"location":"solvers/stochastic_gradient_descent/","page":"Stochastic Gradient Descent","title":"Stochastic Gradient Descent","text":"The stochastic_gradient_descent solver requires the following functions of a manifold to be available","category":"page"},{"location":"solvers/stochastic_gradient_descent/","page":"Stochastic Gradient Descent","title":"Stochastic Gradient Descent","text":"A retract!(M, q, p, X); it is recommended to set the default_retraction_method to a favourite retraction. 
If this default is set, a retraction_method= does not have to be specified.","category":"page"},{"location":"solvers/proximal_bundle_method/#Proximal-bundle-method","page":"Proximal bundle method","title":"Proximal bundle method","text":"","category":"section"},{"location":"solvers/proximal_bundle_method/","page":"Proximal bundle method","title":"Proximal bundle method","text":"CurrentModule = Manopt","category":"page"},{"location":"solvers/proximal_bundle_method/","page":"Proximal bundle method","title":"Proximal bundle method","text":"proximal_bundle_method\nproximal_bundle_method!","category":"page"},{"location":"solvers/proximal_bundle_method/#Manopt.proximal_bundle_method","page":"Proximal bundle method","title":"Manopt.proximal_bundle_method","text":"proximal_bundle_method(M, f, ∂f, p=rand(M), kwargs...)\nproximal_bundle_method!(M, f, ∂f, p, kwargs...)\n\nperform a proximal bundle method p^(k+1) = operatornameretr_p^(k)(-d_k), where operatornameretr is a retraction and\n\nd_k = frac1mu_k sum_jin J_k λ_j^k mathrmP_p_kq_jX_q_j\n\nwith X_q_j f(q_j), p_k the last serious iterate, mu_k a proximal parameter, and the λ_j^k as solutions to the quadratic subproblem provided by the sub solver, see for example the proximal_bundle_method_subsolver.\n\nThough the subdifferential might be set valued, the argument ∂f should always return one element from the subdifferential, but not necessarily deterministic.\n\nFor more details see [HNP23].\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\n∂f: a function returning one element from the subdifferential of f at its argument\np: a point on the manifold mathcal M\n\nKeyword arguments\n\nα₀=1.2: initialization value for α, used to update η\nbundle_size=50: the maximal size of the bundle\nδ=1.0: parameter for updating μ: if δ 0 then μ = log(i + 1), else μ += δ μ\nε=1e-2: stepsize-like parameter related to the injectivity radius of the manifold\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nm=0.0125: a real number that controls the decrease of the cost function\nμ=0.5: initial proximal parameter for the subproblem\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstopping_criterion=StopWhenLagrangeMultiplierLess(1e-8)|StopAfterIteration(5000): a functor indicating that the stopping criterion is fulfilled\nsub_problem=proximal_bundle_method_subsolver: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state=AllocatingEvaluation: a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. 
To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/proximal_bundle_method/#Manopt.proximal_bundle_method!","page":"Proximal bundle method","title":"Manopt.proximal_bundle_method!","text":"proximal_bundle_method(M, f, ∂f, p=rand(M), kwargs...)\nproximal_bundle_method!(M, f, ∂f, p, kwargs...)\n\nperform a proximal bundle method p^(k+1) = operatornameretr_p^(k)(-d_k), where operatornameretr is a retraction and\n\nd_k = frac1mu_k sum_jin J_k λ_j^k mathrmP_p_kq_jX_q_j\n\nwith X_q_j f(q_j), p_k the last serious iterate, mu_k a proximal parameter, and the λ_j^k as solutions to the quadratic subproblem provided by the sub solver, see for example the proximal_bundle_method_subsolver.\n\nThough the subdifferential might be set valued, the argument ∂f should always return one element from the subdifferential, but not necessarily deterministic.\n\nFor more details see [HNP23].\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\n∂f: a function returning one element from the subdifferential of f at its argument\np: a point on the manifold mathcal M\n\nKeyword arguments\n\nα₀=1.2: initialization value for α, used to update η\nbundle_size=50: the maximal size of the bundle\nδ=1.0: parameter for updating μ: if δ 0 then μ = log(i + 1), else μ += δ μ\nε=1e-2: stepsize-like parameter related to the injectivity radius of the manifold\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). 
Since usually the first argument is the manifold, the modified argument is the second.\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nm=0.0125: a real number that controls the decrease of the cost function\nμ=0.5: initial proximal parameter for the subproblem\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstopping_criterion=StopWhenLagrangeMultiplierLess(1e-8)|StopAfterIteration(5000): a functor indicating that the stopping criterion is fulfilled\nsub_problem=proximal_bundle_method_subsolver: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state=AllocatingEvaluation: a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. 
To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/proximal_bundle_method/#State","page":"Proximal bundle method","title":"State","text":"","category":"section"},{"location":"solvers/proximal_bundle_method/","page":"Proximal bundle method","title":"Proximal bundle method","text":"ProximalBundleMethodState","category":"page"},{"location":"solvers/proximal_bundle_method/#Manopt.ProximalBundleMethodState","page":"Proximal bundle method","title":"Manopt.ProximalBundleMethodState","text":"ProximalBundleMethodState <: AbstractManoptSolverState\n\nstores option values for a proximal_bundle_method solver.\n\nFields\n\nα: curvature-dependent parameter used to update η\nα₀: initialization value for α, used to update η\napprox_errors: approximation of the linearization errors at the last serious step\nbundle: bundle that collects each iterate with the computed subgradient at the iterate\nbundle_size: the maximal size of the bundle\nc: convex combination of the approximation errors\nd: descent direction\nδ: parameter for updating μ: if δ 0 then μ = log(i + 1), else μ += δ μ\nε: stepsize-like parameter related to the injectivity radius of the manifold\nη: curvature-dependent term for updating the approximation errors\ninverse_retraction_method::AbstractInverseRetractionMethod: an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nλ: convex coefficients that solve the subproblem\nm: the parameter to test the decrease of the cost\nμ: (initial) proximal parameter for the subproblem\nν: the stopping parameter given by ν = - μ d^2 - c\np::P: a point on the manifold mathcal M storing the current iterate\np_last_serious: last serious iterate\nretraction_method::AbstractRetractionMethod: a retraction operatornameretr to use, see the section on retractions\nstop::StoppingCriterion: a functor indicating that the 
stopping criterion is fulfilled\ntransported_subgradients: subgradients of the bundle that are transported to p_last_serious\nvector_transport_method::AbstractVectorTransportMethodP: a vector transport mathcal T_ to use, see the section on vector transports\nX::T: a tangent vector at the point p on the manifold mathcal M storing a subgradient at the current iterate\nsub_problem::Union{AbstractManoptProblem, F}: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state::Union{AbstractManoptSolverState, F}: a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.\n\nConstructor\n\nProximalBundleMethodState(M::AbstractManifold, sub_problem, sub_state; kwargs...)\nProximalBundleMethodState(M::AbstractManifold, sub_problem=proximal_bundle_method_subsolver; evaluation=AllocatingEvaluation(), kwargs...)\n\nGenerate the state for the proximal_bundle_method on the manifold M\n\nKeyword arguments\n\nα₀=1.2\nbundle_size=50\nδ=1.0\nε=1e-2\nμ=0.5\nm=0.0125\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\np=rand(M): a point on the manifold mathcal M to specify the initial value\nstopping_criterion=StopWhenLagrangeMultiplierLess(1e-8)|StopAfterIteration(5000): a functor indicating that the stopping criterion is fulfilled\nsub_problem=proximal_bundle_method_subsolver: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state=AllocatingEvaluation: a state to specify the sub solver to use. 
For a closed form solution, this indicates the type of function.\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\nX=zero_vector(M, p): specify the type of tangent vector to use.\n\n\n\n\n\n","category":"type"},{"location":"solvers/proximal_bundle_method/#Helpers-and-internal-functions","page":"Proximal bundle method","title":"Helpers and internal functions","text":"","category":"section"},{"location":"solvers/proximal_bundle_method/","page":"Proximal bundle method","title":"Proximal bundle method","text":"proximal_bundle_method_subsolver","category":"page"},{"location":"solvers/proximal_bundle_method/#Manopt.proximal_bundle_method_subsolver","page":"Proximal bundle method","title":"Manopt.proximal_bundle_method_subsolver","text":"λ = proximal_bundle_method_subsolver(M, p_last_serious, μ, approximation_errors, transported_subgradients)\nproximal_bundle_method_subsolver!(M, λ, p_last_serious, μ, approximation_errors, transported_subgradients)\n\nsolver for the subproblem of the proximal bundle method.\n\nThe subproblem for the proximal bundle method is\n\nbeginalign*\n operatorname*argmin_λ ℝ^lvert L_lrvert \n frac12 mu_l BigllVert sum_j L_l λ_j mathrmP_p_kq_j X_q_j BigrrVert^2\n + sum_j L_l λ_j c_j^k\n \n texts t quad \n sum_j L_l λ_j = 1\n quad λ_j 0\n quad textfor all j L_l\nendcases\n\nwhere L_l = k if q_k is a serious iterate, and L_l = L_l-1 cup k otherwise. See [HNP23].\n\ntip: Tip\nA default subsolver based on RipQP.jl and QuadraticModels is available if these two packages are loaded.\n\n\n\n\n\n","category":"function"},{"location":"solvers/proximal_bundle_method/#Literature","page":"Proximal bundle method","title":"Literature","text":"","category":"section"},{"location":"solvers/proximal_bundle_method/","page":"Proximal bundle method","title":"Proximal bundle method","text":"N. Hoseini Monjezi, S. Nobakhtian and M. R. Pouryayevali. 
A proximal bundle algorithm for nonsmooth optimization on Riemannian manifolds. IMA Journal of Numerical Analysis 43, 293–325 (2023).\n\n\n\n","category":"page"},{"location":"solvers/cyclic_proximal_point/#Cyclic-proximal-point","page":"Cyclic Proximal Point","title":"Cyclic proximal point","text":"","category":"section"},{"location":"solvers/cyclic_proximal_point/","page":"Cyclic Proximal Point","title":"Cyclic Proximal Point","text":"The Cyclic Proximal Point (CPP) algorithm aims to minimize","category":"page"},{"location":"solvers/cyclic_proximal_point/","page":"Cyclic Proximal Point","title":"Cyclic Proximal Point","text":"F(x) = sum_i=1^c f_i(x)","category":"page"},{"location":"solvers/cyclic_proximal_point/","page":"Cyclic Proximal Point","title":"Cyclic Proximal Point","text":"assuming that the proximal maps operatornameprox_λ f_i(x) are given in closed form or can be computed efficiently (at least approximately).","category":"page"},{"location":"solvers/cyclic_proximal_point/","page":"Cyclic Proximal Point","title":"Cyclic Proximal Point","text":"The algorithm then cycles through these proximal maps, where the type of cycle might differ and the proximal parameter λ_k changes after each cycle k.","category":"page"},{"location":"solvers/cyclic_proximal_point/","page":"Cyclic Proximal Point","title":"Cyclic Proximal Point","text":"For a convergence result on Hadamard manifolds see Bačák [Bac14].","category":"page"},{"location":"solvers/cyclic_proximal_point/","page":"Cyclic Proximal Point","title":"Cyclic Proximal Point","text":"cyclic_proximal_point\ncyclic_proximal_point!","category":"page"},{"location":"solvers/cyclic_proximal_point/#Manopt.cyclic_proximal_point","page":"Cyclic Proximal Point","title":"Manopt.cyclic_proximal_point","text":"cyclic_proximal_point(M, f, proxes_f, p; kwargs...)\ncyclic_proximal_point(M, mpo, p; kwargs...)\ncyclic_proximal_point!(M, f, proxes_f; kwargs...)\ncyclic_proximal_point!(M, mpo; kwargs...)\n\nperform a cyclic proximal 
point algorithm. This can be done in-place of p.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ to minimize\nproxes_f: an Array of proximal maps (Functions) (M,λ,p) -> q or (M, q, λ, p) -> q for the summands of f (see evaluation)\n\nwhere f and the proximal maps proxes_f can also be given directly as a ManifoldProximalMapObjective mpo\n\nKeyword arguments\n\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating their result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\nevaluation_order=:Linear: whether to use a randomly permuted sequence (:FixedRandom), a per cycle permuted sequence (:Random), or the default linear one.\nλ=iter -> 1/iter: a function returning the (square summable but not summable) sequence of λ_i\nstopping_criterion=StopAfterIteration(5000)|StopWhenChangeLess(1e-12): a functor indicating that the stopping criterion is fulfilled\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/cyclic_proximal_point/#Manopt.cyclic_proximal_point!","page":"Cyclic Proximal Point","title":"Manopt.cyclic_proximal_point!","text":"cyclic_proximal_point(M, f, proxes_f, p; kwargs...)\ncyclic_proximal_point(M, mpo, p; kwargs...)\ncyclic_proximal_point!(M, f, proxes_f; kwargs...)\ncyclic_proximal_point!(M, mpo; kwargs...)\n\nperform a cyclic proximal point algorithm. 
This can be done in-place of p.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ to minimize\nproxes_f: an Array of proximal maps (Functions) (M,λ,p) -> q or (M, q, λ, p) -> q for the summands of f (see evaluation)\n\nwhere f and the proximal maps proxes_f can also be given directly as a ManifoldProximalMapObjective mpo\n\nKeyword arguments\n\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating their result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\nevaluation_order=:Linear: whether to use a randomly permuted sequence (:FixedRandom), a per cycle permuted sequence (:Random), or the default linear one.\nλ=iter -> 1/iter: a function returning the (square summable but not summable) sequence of λ_i\nstopping_criterion=StopAfterIteration(5000)|StopWhenChangeLess(1e-12): a functor indicating that the stopping criterion is fulfilled\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. 
To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/cyclic_proximal_point/#sec-cppa-technical-details","page":"Cyclic Proximal Point","title":"Technical details","text":"","category":"section"},{"location":"solvers/cyclic_proximal_point/","page":"Cyclic Proximal Point","title":"Cyclic Proximal Point","text":"The cyclic_proximal_point solver requires no additional functions to be available for your manifold, besides the ones you use in the proximal maps.","category":"page"},{"location":"solvers/cyclic_proximal_point/","page":"Cyclic Proximal Point","title":"Cyclic Proximal Point","text":"By default, one of the stopping criteria is StopWhenChangeLess, which either requires","category":"page"},{"location":"solvers/cyclic_proximal_point/","page":"Cyclic Proximal Point","title":"Cyclic Proximal Point","text":"An inverse_retract!(M, X, p, q); it is recommended to set the default_inverse_retraction_method to a favourite retraction. If this default is set, an inverse_retraction_method= or inverse_retraction_method_dual= (for mathcal N) does not have to be specified, or the distance(M, p, q) for said default inverse retraction.","category":"page"},{"location":"solvers/cyclic_proximal_point/#State","page":"Cyclic Proximal Point","title":"State","text":"","category":"section"},{"location":"solvers/cyclic_proximal_point/","page":"Cyclic Proximal Point","title":"Cyclic Proximal Point","text":"CyclicProximalPointState","category":"page"},{"location":"solvers/cyclic_proximal_point/#Manopt.CyclicProximalPointState","page":"Cyclic Proximal Point","title":"Manopt.CyclicProximalPointState","text":"CyclicProximalPointState <: AbstractManoptSolverState\n\nstores options for the cyclic_proximal_point algorithm. 
\n\nFields\n\np::P: a point on the manifold mathcal M storing the current iterate\nstop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled\nλ: a function for the values of λ_k per iteration (cycle k)\norder_type: whether to use a randomly permuted sequence (:FixedRandomOrder), a per cycle permuted sequence (:RandomOrder) or the default linear one.\n\nConstructor\n\nCyclicProximalPointState(M::AbstractManifold; kwargs...)\n\nGenerate the options\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\n\nKeyword arguments\n\nevaluation_order=:LinearOrder: specify the order_type\nλ=i -> 1.0 / i: a function to compute the λ_k, k mathcal N,\np=rand(M): a point on the manifold mathcal M to specify the initial value\nstopping_criterion=StopAfterIteration(2000): a functor indicating that the stopping criterion is fulfilled\n\nSee also\n\ncyclic_proximal_point\n\n\n\n\n\n","category":"type"},{"location":"solvers/cyclic_proximal_point/#Debug-functions","page":"Cyclic Proximal Point","title":"Debug functions","text":"","category":"section"},{"location":"solvers/cyclic_proximal_point/","page":"Cyclic Proximal Point","title":"Cyclic Proximal Point","text":"DebugProximalParameter","category":"page"},{"location":"solvers/cyclic_proximal_point/#Manopt.DebugProximalParameter","page":"Cyclic Proximal Point","title":"Manopt.DebugProximalParameter","text":"DebugProximalParameter <: DebugAction\n\nprint the current iterate's proximal point algorithm parameter given by the AbstractManoptSolverState's o.λ.\n\n\n\n\n\n","category":"type"},{"location":"solvers/cyclic_proximal_point/#Record-functions","page":"Cyclic Proximal Point","title":"Record functions","text":"","category":"section"},{"location":"solvers/cyclic_proximal_point/","page":"Cyclic Proximal Point","title":"Cyclic Proximal Point","text":"RecordProximalParameter","category":"page"},{"location":"solvers/cyclic_proximal_point/#Manopt.RecordProximalParameter","page":"Cyclic Proximal 
Point","title":"Manopt.RecordProximalParameter","text":"RecordProximalParameter <: RecordAction\n\nrecord the current iterate's proximal point algorithm parameter given by the AbstractManoptSolverState's o.λ.\n\n\n\n\n\n","category":"type"},{"location":"solvers/cyclic_proximal_point/#Literature","page":"Cyclic Proximal Point","title":"Literature","text":"","category":"section"},{"location":"solvers/cyclic_proximal_point/","page":"Cyclic Proximal Point","title":"Cyclic Proximal Point","text":"M. Bačák. Computing medians and means in Hadamard spaces. SIAM Journal on Optimization 24, 1542–1566 (2014), arXiv:1210.2145.\n\n\n\n","category":"page"},{"location":"plans/objective/#A-manifold-objective","page":"Objective","title":"A manifold objective","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"CurrentModule = Manopt","category":"page"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"The Objective describes the actual cost function and all its properties.","category":"page"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"AbstractManifoldObjective\nAbstractDecoratedManifoldObjective","category":"page"},{"location":"plans/objective/#Manopt.AbstractManifoldObjective","page":"Objective","title":"Manopt.AbstractManifoldObjective","text":"AbstractManifoldObjective{E<:AbstractEvaluationType}\n\nDescribe the collection of the optimization function f mathcal M ℝ (or even a vectorial range) and its corresponding elements, which might for example be a gradient or (one or more) proximal maps.\n\nAll these elements should usually be implemented as functions (M, p) -> ..., or (M, X, p) -> ... 
that is\n\nthe first argument of these functions should be the manifold M they are defined on\nthe argument X is present, if the computation is performed in-place of X (see InplaceEvaluation)\nthe argument p is the place the function (f or one of its elements) is evaluated at.\n\nthe type T indicates the global AbstractEvaluationType.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Manopt.AbstractDecoratedManifoldObjective","page":"Objective","title":"Manopt.AbstractDecoratedManifoldObjective","text":"AbstractDecoratedManifoldObjective{E<:AbstractEvaluationType,O<:AbstractManifoldObjective}\n\nA common supertype for all decorators of AbstractManifoldObjectives to simplify dispatch. The second parameter should refer to the undecorated objective (the innermost one).\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"Such an objective's functions can be implemented in two different evaluation modes; this concerns not necessarily the cost, but for example the gradient in an AbstractManifoldGradientObjective.","category":"page"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"AbstractEvaluationType\nAllocatingEvaluation\nInplaceEvaluation\nevaluation_type","category":"page"},{"location":"plans/objective/#Manopt.AbstractEvaluationType","page":"Objective","title":"Manopt.AbstractEvaluationType","text":"AbstractEvaluationType\n\nAn abstract type to specify the kind of evaluation an AbstractManifoldObjective supports.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Manopt.AllocatingEvaluation","page":"Objective","title":"Manopt.AllocatingEvaluation","text":"AllocatingEvaluation <: AbstractEvaluationType\n\nA parameter for an AbstractManoptProblem indicating that the problem uses functions that allocate memory for their result; they work out of 
place.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Manopt.InplaceEvaluation","page":"Objective","title":"Manopt.InplaceEvaluation","text":"InplaceEvaluation <: AbstractEvaluationType\n\nA parameter for an AbstractManoptProblem indicating that the problem uses functions that do not allocate memory but work on their input, in place.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Manopt.evaluation_type","page":"Objective","title":"Manopt.evaluation_type","text":"evaluation_type(mp::AbstractManoptProblem)\n\nGet the AbstractEvaluationType of the objective in AbstractManoptProblem mp.\n\n\n\n\n\nevaluation_type(::AbstractManifoldObjective{Teval})\n\nGet the AbstractEvaluationType of the objective.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Decorators-for-objectives","page":"Objective","title":"Decorators for objectives","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"An objective can be decorated using the following trait and function to initialize","category":"page"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"dispatch_objective_decorator\nis_objective_decorator\ndecorate_objective!","category":"page"},{"location":"plans/objective/#Manopt.dispatch_objective_decorator","page":"Objective","title":"Manopt.dispatch_objective_decorator","text":"dispatch_objective_decorator(o::AbstractManifoldObjective)\n\nIndicate internally whether an AbstractManifoldObjective o is of decorating type, that is, whether it stores (encapsulates) an object in itself, by default in the field o.objective.\n\nDecorators indicate this by returning Val{true} for further dispatch.\n\nThe default is Val{false}, so by default an objective is not 
decorated.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Manopt.is_objective_decorator","page":"Objective","title":"Manopt.is_objective_decorator","text":"is_objective_decorator(s::AbstractManifoldObjective)\n\nIndicate whether the AbstractManifoldObjective s is of decorator type.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Manopt.decorate_objective!","page":"Objective","title":"Manopt.decorate_objective!","text":"decorate_objective!(M, o::AbstractManifoldObjective)\n\ndecorate the AbstractManifoldObjective o with specific decorators.\n\nOptional arguments\n\noptional arguments provide necessary details on the decorators. A specific one is used to activate certain decorators.\n\ncache=missing: specify a cache. Currently :Simple is supported and :LRU if you load LRUCache.jl. In this case, a tuple specifying what to cache and how many values to store can be provided. For example (:LRU, [:Cost, :Gradient], 10) states that the last 10 used cost function evaluations and gradient evaluations should be stored. See objective_cache_factory for details.\ncount=missing: specify which calls to the objective should be counted, see ManifoldCountObjective for the full list\nobjective_type=:Riemannian: specify that an objective is :Riemannian or :Euclidean. 
The :Euclidean symbol is equivalent to specifying it as :Embedded, since in the end, both refer to converting an objective from the embedding (whether it is Euclidean or not) to the Riemannian one.\n\nSee also\n\nobjective_cache_factory\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#subsection-embedded-objectives","page":"Objective","title":"Embedded objectives","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"EmbeddedManifoldObjective","category":"page"},{"location":"plans/objective/#Manopt.EmbeddedManifoldObjective","page":"Objective","title":"Manopt.EmbeddedManifoldObjective","text":"EmbeddedManifoldObjective{P, T, E, O2, O1<:AbstractManifoldObjective{E}} <:\n AbstractDecoratedManifoldObjective{E,O2}\n\nDeclare an objective to be defined in the embedding. This also declares the gradient to be defined in the embedding, and especially being the Riesz representer with respect to the metric in the embedding. The types can be used to still also dispatch on the undecorated objective type O2.\n\nFields\n\nobjective: the objective that is defined in the embedding\np=nothing: a point in the embedding.\nX=nothing: a tangent vector in the embedding\n\nWhen a point in the embedding p is provided, embed! is used in place of this point to reduce memory allocations. 
Similarly, X is used when embedding tangent vectors.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#subsection-cache-objective","page":"Objective","title":"Cache objective","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"Since single function calls, for example to the cost or the gradient, might be expensive, a simple cache objective exists as a decorator that caches one cost value or gradient.","category":"page"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"It can be activated/used with the cache= keyword argument available for every solver.","category":"page"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"Manopt.reset_counters!\nManopt.objective_cache_factory","category":"page"},{"location":"plans/objective/#Manopt.reset_counters!","page":"Objective","title":"Manopt.reset_counters!","text":"reset_counters!(co::ManifoldCountObjective, value::Integer=0)\n\nReset all values in the count objective to value.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Manopt.objective_cache_factory","page":"Objective","title":"Manopt.objective_cache_factory","text":"objective_cache_factory(M::AbstractManifold, o::AbstractManifoldObjective, cache::Symbol)\n\nGenerate a cached variant of the AbstractManifoldObjective o on the AbstractManifold M based on the symbol cache.\n\nThe following caches are available\n\n:Simple generates a SimpleManifoldCachedObjective\n:LRU generates a ManifoldCachedObjective where you should use the form (:LRU, [:Cost, :Gradient]) to specify what should be cached or (:LRU, [:Cost, :Gradient], 100) to specify the cache size. 
Here this variant defaults to (:LRU, [:Cost, :Gradient], 100), caching up to 100 cost and gradient values.[1]\n\n[1]: This cache requires LRUCache.jl to be loaded as well.\n\n\n\n\n\nobjective_cache_factory(M::AbstractManifold, o::AbstractManifoldObjective, cache::Tuple{Symbol, Array, Array})\nobjective_cache_factory(M::AbstractManifold, o::AbstractManifoldObjective, cache::Tuple{Symbol, Array})\n\nGenerate a cached variant of the AbstractManifoldObjective o on the AbstractManifold M based on the symbol cache[1], where the second element cache[2] contains further arguments for the cache and the optional third element is passed down as keyword arguments.\n\nFor all available caches see the simpler variant with symbols.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#A-simple-cache","page":"Objective","title":"A simple cache","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"A first generic cache is always available, but it only caches one gradient and one cost function evaluation (for the same point).","category":"page"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"SimpleManifoldCachedObjective","category":"page"},{"location":"plans/objective/#Manopt.SimpleManifoldCachedObjective","page":"Objective","title":"Manopt.SimpleManifoldCachedObjective","text":" SimpleManifoldCachedObjective{O<:AbstractManifoldGradientObjective{E,TC,TG}, P, T,C} <: AbstractManifoldGradientObjective{E,TC,TG}\n\nProvide a simple cache for an AbstractManifoldGradientObjective, that is, for a given point p this cache stores the point p and the gradient operatornamegrad f(p) in X, as well as the cost value f(p) in c.\n\nBoth X and c are accompanied by booleans to keep track of their validity.\n\nConstructor\n\nSimpleManifoldCachedObjective(M::AbstractManifold, obj::AbstractManifoldGradientObjective; kwargs...)\n\nKeyword arguments\n\np=rand(M): a point on the manifold to initialize the cache 
with\nX=get_gradient(M, obj, p) or zero_vector(M,p): a tangent vector to store the gradient in, see also initialize=\nc=get_cost(M, obj, p) or 0.0: a value to store the cost function in, see also initialize=\ninitialized=true: whether to initialize the cached X and c or not.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#A-generic-cache","page":"Objective","title":"A generic cache","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"For the more advanced cache, you need some type of cache that provides a get! and for which init_caches is implemented. This is for example provided if you load LRUCache.jl. Then you obtain","category":"page"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"ManifoldCachedObjective\ninit_caches","category":"page"},{"location":"plans/objective/#Manopt.ManifoldCachedObjective","page":"Objective","title":"Manopt.ManifoldCachedObjective","text":"ManifoldCachedObjective{E,P,O<:AbstractManifoldObjective{<:E},C<:NamedTuple{}} <: AbstractDecoratedManifoldObjective{E,P}\n\nCreate a cache for an objective, based on a NamedTuple that stores some kind of cache.\n\nConstructor\n\nManifoldCachedObjective(M, o::AbstractManifoldObjective, caches::Vector{Symbol}; kwargs...)\n\nCreate a cache for the AbstractManifoldObjective where the Symbols in caches indicate which function evaluations to cache.\n\nSupported symbols\n\nSymbol Caches calls to (incl. ! 
variants) Comment\n:Cost get_cost \n:EqualityConstraint get_equality_constraint(M, p, i) \n:EqualityConstraints get_equality_constraint(M, p, :) \n:GradEqualityConstraint get_grad_equality_constraint tangent vector per (p,i)\n:GradInequalityConstraint get_grad_inequality_constraint tangent vector per (p,i)\n:Gradient get_gradient(M,p) tangent vectors\n:Hessian get_hessian tangent vectors\n:InequalityConstraint get_inequality_constraint(M, p, j) \n:InequalityConstraints get_inequality_constraint(M, p, :) \n:Preconditioner get_preconditioner tangent vectors\n:ProximalMap get_proximal_map point per (p,λ,i)\n:StochasticGradients get_gradients vector of tangent vectors\n:StochasticGradient get_gradient(M, p, i) tangent vector per (p,i)\n:SubGradient get_subgradient tangent vectors\n:SubtrahendGradient get_subtrahend_gradient tangent vectors\n\nKeyword arguments\n\np=rand(M): the type of the keys to be used in the caches. Defaults to the default representation on M.\nvalue=get_cost(M, objective, p): the type of values for numeric values in the cache\nX=zero_vector(M,p): the type of values to be cached for gradient and Hessian calls.\ncache=[:Cost]: a vector of symbols indicating which function calls should be cached.\ncache_size=10: number of (least recently used) calls to cache\ncache_sizes=Dict{Symbol,Int}(): a named tuple or dictionary specifying the sizes individually for each cache.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Manopt.init_caches","page":"Objective","title":"Manopt.init_caches","text":"init_caches(caches, T::Type{LRU}; kwargs...)\n\nGiven a vector of symbols caches, this function sets up the NamedTuple of caches, where T is the type of cache to use.\n\nKeyword arguments\n\np=rand(M): a point on a manifold, to both infer its type for keys and initialize caches\nvalue=0.0: a value for both typing and initialising number caches; the default is for (Float) values like the cost.\nX=zero_vector(M, p): a tangent vector at p to both type and 
initialize tangent vector caches\ncache_size=10: a default cache size to use\ncache_sizes=Dict{Symbol,Int}(): a dictionary of sizes for the caches to specify different (non-default) sizes\n\n\n\n\n\ninit_caches(M::AbstractManifold, caches, T; kwargs...)\n\nGiven a vector of symbols caches, this function sets up the NamedTuple of caches for points/vectors on M, where T is the type of cache to use.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#subsection-count-objective","page":"Objective","title":"Count objective","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"ManifoldCountObjective","category":"page"},{"location":"plans/objective/#Manopt.ManifoldCountObjective","page":"Objective","title":"Manopt.ManifoldCountObjective","text":"ManifoldCountObjective{E,P,O<:AbstractManifoldObjective,I<:Integer} <: AbstractDecoratedManifoldObjective{E,P}\n\nA wrapper for any AbstractManifoldObjective of type O to count different calls to parts of the objective.\n\nFields\n\ncounts a dictionary of symbols mapping to integers keeping the counted values\nobjective the wrapped objective\n\nSupported symbols\n\nSymbol Counts calls to (incl. ! 
variants) Comment\n:Cost get_cost \n:EqualityConstraint get_equality_constraint requires vector of counters\n:EqualityConstraints get_equality_constraint when evaluating all of them with :\n:GradEqualityConstraint get_grad_equality_constraint requires vector of counters\n:GradEqualityConstraints get_grad_equality_constraint when evaluating all of them with :\n:GradInequalityConstraint get_grad_inequality_constraint requires vector of counters\n:GradInequalityConstraints get_grad_inequality_constraint when evaluating all of them with :\n:Gradient get_gradient(M,p) \n:Hessian get_hessian \n:InequalityConstraint get_inequality_constraint requires vector of counters\n:InequalityConstraints get_inequality_constraint when evaluating all of them with :\n:Preconditioner get_preconditioner \n:ProximalMap get_proximal_map \n:StochasticGradients get_gradients \n:StochasticGradient get_gradient(M, p, i) \n:SubGradient get_subgradient \n:SubtrahendGradient get_subtrahend_gradient \n\nConstructors\n\nManifoldCountObjective(objective::AbstractManifoldObjective, counts::Dict{Symbol, <:Integer})\n\nInitialise the ManifoldCountObjective to wrap objective, initializing the set of counts\n\nManifoldCountObjective(M::AbstractManifold, objective::AbstractManifoldObjective, count::AbstractVector{Symbol}, init=0)\n\nCount function calls on objective using the symbols in count, initialising all entries to init.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Internal-decorators","page":"Objective","title":"Internal decorators","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"ReturnManifoldObjective","category":"page"},{"location":"plans/objective/#Manopt.ReturnManifoldObjective","page":"Objective","title":"Manopt.ReturnManifoldObjective","text":"ReturnManifoldObjective{E,O2,O1<:AbstractManifoldObjective{E}} <:\n AbstractDecoratedManifoldObjective{E,O2}\n\nA wrapper to indicate that get_solver_result should return the inner 
objective.\n\nThe types are such that one can still dispatch on the undecorated type O2 of the original objective as well.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Specific-Objective-typed-and-their-access-functions","page":"Objective","title":"Specific objective types and their access functions","text":"","category":"section"},{"location":"plans/objective/#Cost-objective","page":"Objective","title":"Cost objective","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"AbstractManifoldCostObjective\nManifoldCostObjective","category":"page"},{"location":"plans/objective/#Manopt.AbstractManifoldCostObjective","page":"Objective","title":"Manopt.AbstractManifoldCostObjective","text":"AbstractManifoldCostObjective{T<:AbstractEvaluationType} <: AbstractManifoldObjective{T}\n\nRepresenting objectives on manifolds with a cost function implemented.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Manopt.ManifoldCostObjective","page":"Objective","title":"Manopt.ManifoldCostObjective","text":"ManifoldCostObjective{T, TC} <: AbstractManifoldCostObjective{T, TC}\n\nspecify an AbstractManifoldObjective that only has information about the cost function f mathcal M ℝ implemented as a function (M, p) -> c to compute the cost value c at p on the manifold M.\n\ncost: a function f mathcal M ℝ to minimize\n\nConstructors\n\nManifoldCostObjective(f)\n\nGenerate a problem. 
While this problem does not have any allocating functions, the type T can be set for consistency reasons with other problems.\n\nUsed with\n\nNelderMead, particle_swarm\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Access-functions","page":"Objective","title":"Access functions","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"get_cost","category":"page"},{"location":"plans/objective/#Manopt.get_cost","page":"Objective","title":"Manopt.get_cost","text":"get_cost(amp::AbstractManoptProblem, p)\n\nevaluate the cost function f stored within the AbstractManifoldObjective of an AbstractManoptProblem amp at the point p.\n\n\n\n\n\nget_cost(M::AbstractManifold, obj::AbstractManifoldObjective, p)\n\nevaluate the cost function f defined on M stored within the AbstractManifoldObjective at the point p.\n\n\n\n\n\nget_cost(M::AbstractManifold, mco::AbstractManifoldCostObjective, p)\n\nEvaluate the cost function from within the AbstractManifoldCostObjective on M at p.\n\nBy default this implementation assumes that the cost is stored within mco.cost.\n\n\n\n\n\nget_cost(TpM, trmo::TrustRegionModelObjective, X)\n\nEvaluate the tangent space TrustRegionModelObjective\n\nm(X) = f(p) + operatornamegrad f(p) X _p + frac12 operatornameHess f(p)X X_p\n\n\n\n\n\nget_cost(TpM, trmo::AdaptiveRagularizationWithCubicsModelObjective, X)\n\nEvaluate the tangent space AdaptiveRagularizationWithCubicsModelObjective\n\nm(X) = f(p) + operatornamegrad f(p) X _p + frac12 operatornameHess f(p)X X_p\n + fracσ3 lVert X rVert^3\n\nat X, cf. Eq. 
(33) in [ABBC20].\n\n\n\n\n\nget_cost(TpM::TangentSpace, slso::SymmetricLinearSystemObjective, X)\n\nevaluate the cost\n\nf(X) = frac12 lVert mathcal AX + b rVert_p^2qquad X T_pmathcal M\n\nat X.\n\n\n\n\n\nget_cost(M::AbstractManifold, sgo::ManifoldStochasticGradientObjective, p, i)\n\nEvaluate the ith summand of the cost.\n\nIf you use a single function for the stochastic cost, then only the index i=1 is available to evaluate the whole cost.\n\n\n\n\n\nget_cost(M::AbstractManifold, emo::EmbeddedManifoldObjective, p)\n\nEvaluate the cost function of an objective defined in the embedding by first embedding p before calling the cost function stored in the EmbeddedManifoldObjective.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"and internally","category":"page"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"get_cost_function","category":"page"},{"location":"plans/objective/#Manopt.get_cost_function","page":"Objective","title":"Manopt.get_cost_function","text":"get_cost_function(amco::AbstractManifoldCostObjective)\n\nreturn the function to evaluate (just) the cost f(p)=c as a function (M,p) -> c.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Gradient-objectives","page":"Objective","title":"Gradient objectives","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"AbstractManifoldGradientObjective\nManifoldGradientObjective\nManifoldAlternatingGradientObjective\nManifoldStochasticGradientObjective\nNonlinearLeastSquaresObjective","category":"page"},{"location":"plans/objective/#Manopt.AbstractManifoldGradientObjective","page":"Objective","title":"Manopt.AbstractManifoldGradientObjective","text":"AbstractManifoldGradientObjective{E<:AbstractEvaluationType, TC, TG} <: AbstractManifoldCostObjective{E, TC}\n\nAn abstract type for all objectives that provide a (full) gradient, where T is an 
AbstractEvaluationType for the gradient function.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Manopt.ManifoldGradientObjective","page":"Objective","title":"Manopt.ManifoldGradientObjective","text":"ManifoldGradientObjective{T<:AbstractEvaluationType} <: AbstractManifoldGradientObjective{T}\n\nspecify an objective containing a cost and its gradient\n\nFields\n\ncost: a function f mathcal M ℝ\ngradient!!: the gradient operatornamegradf mathcal M mathcal Tmathcal M of the cost function f.\n\nDepending on the AbstractEvaluationType T the gradient can have two forms\n\nas a function (M, p) -> X that allocates memory for X, an AllocatingEvaluation\nas a function (M, X, p) -> X that works in place of X, an InplaceEvaluation\n\nConstructors\n\nManifoldGradientObjective(cost, gradient; evaluation=AllocatingEvaluation())\n\nUsed with\n\ngradient_descent, conjugate_gradient_descent, quasi_Newton\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Manopt.ManifoldAlternatingGradientObjective","page":"Objective","title":"Manopt.ManifoldAlternatingGradientObjective","text":"ManifoldAlternatingGradientObjective{E<:AbstractEvaluationType,TCost,TGradient} <: AbstractManifoldGradientObjective{E}\n\nAn alternating gradient objective consists of\n\na cost function F(x)\na gradient operatornamegradF that is either\ngiven as one function operatornamegradF returning a tangent vector X on M or\nan array of gradient functions operatornamegradF_i, i=1,…,n, each returning a component of the gradient\nwhich might be allocating or mutating variants, but not a mix of both.\n\nnote: Note\nThis Objective is usually defined using the ProductManifold from Manifolds.jl, so Manifolds.jl needs to be loaded.\n\nConstructors\n\nManifoldAlternatingGradientObjective(F, gradF::Function;\n evaluation=AllocatingEvaluation()\n)\nManifoldAlternatingGradientObjective(F, gradF::AbstractVector{<:Function};\n evaluation=AllocatingEvaluation()\n)\n\nCreate an alternating gradient problem with an 
optional cost and the gradient either as one function (returning an array) or a vector of functions.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Manopt.ManifoldStochasticGradientObjective","page":"Objective","title":"Manopt.ManifoldStochasticGradientObjective","text":"ManifoldStochasticGradientObjective{T<:AbstractEvaluationType} <: AbstractManifoldGradientObjective{T}\n\nA stochastic gradient objective consists of\n\na(n optional) cost function f(p) = displaystylesum_i=1^n f_i(p)\nan array of gradients, operatornamegradf_i(p) i=1ldotsn which can be given in two forms\nas one single function (mathcal M p) (X_1X_n) (T_pmathcal M)^n\nas a vector of functions bigl( (mathcal M p) X_1 (mathcal M p) X_nbigr).\n\nWhere both variants can also be provided as InplaceEvaluation functions (M, X, p) -> X, where X is the vector of X1,...,Xn and (M, X1, p) -> X1, ..., (M, Xn, p) -> Xn, respectively.\n\nConstructors\n\nManifoldStochasticGradientObjective(\n grad_f::Function;\n cost=Missing(),\n evaluation=AllocatingEvaluation()\n)\nManifoldStochasticGradientObjective(\n grad_f::AbstractVector{<:Function};\n cost=Missing(), evaluation=AllocatingEvaluation()\n)\n\nCreate a stochastic gradient problem with the gradient either as one function (returning an array of tangent vectors) or a vector of functions (each returning one tangent vector).\n\nThe optional cost can also be given as either a single function (returning a number) or a vector of functions, each returning a value.\n\nUsed with\n\nstochastic_gradient_descent\n\nNote that this can also be used with a gradient_descent, since the (complete) gradient is just the sum of the single gradients.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Manopt.NonlinearLeastSquaresObjective","page":"Objective","title":"Manopt.NonlinearLeastSquaresObjective","text":"NonlinearLeastSquaresObjective{T<:AbstractEvaluationType} <: AbstractManifoldObjective{T}\n\nA type for nonlinear least squares problems. 
T is an AbstractEvaluationType for the F and Jacobian functions.\n\nSpecify a nonlinear least squares problem\n\nFields\n\nf: a function f mathcal M ℝ^d to minimize\njacobian!!: Jacobian of the function f\njacobian_tangent_basis: the basis of tangent space used for computing the Jacobian.\nnum_components: number of values returned by f (equal to d).\n\nDepending on the AbstractEvaluationType T the function F has to be provided:\n\nas a function (M::AbstractManifold, p) -> v that allocates memory for v itself for an AllocatingEvaluation,\nas a function (M::AbstractManifold, v, p) -> v that works in place of v for a InplaceEvaluation.\n\nAlso the Jacobian jacF is required:\n\nas a function (M::AbstractManifold, p; basis_domain::AbstractBasis) -> v that allocates memory for v itself for an AllocatingEvaluation,\nas a function (M::AbstractManifold, v, p; basis_domain::AbstractBasis) -> v that works in place of v for an InplaceEvaluation.\n\nConstructors\n\nNonlinearLeastSquaresObjective(M, F, jacF, num_components; evaluation=AllocatingEvaluation(), jacobian_tangent_basis=DefaultOrthonormalBasis())\n\nSee also\n\nLevenbergMarquardt, LevenbergMarquardtState\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"There is also a second variant, if just one function is responsible for computing the cost and the gradient","category":"page"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"ManifoldCostGradientObjective","category":"page"},{"location":"plans/objective/#Manopt.ManifoldCostGradientObjective","page":"Objective","title":"Manopt.ManifoldCostGradientObjective","text":"ManifoldCostGradientObjective{T} <: AbstractManifoldObjective{T}\n\nspecify an objective containing one function to perform a combined computation of cost and its gradient\n\nFields\n\ncostgrad!!: a function that computes both the cost f mathcal M ℝ and its gradient operatornamegradf mathcal M mathcal Tmathcal M\n\nDepending 
on the AbstractEvaluationType T the gradient can have two forms\n\nas a function (M, p) -> (c, X) that allocates memory for the gradient X, an AllocatingEvaluation\nas a function (M, X, p) -> (c, X) that works in place of X, an InplaceEvaluation\n\nConstructors\n\nManifoldCostGradientObjective(costgrad; evaluation=AllocatingEvaluation())\n\nUsed with\n\ngradient_descent, conjugate_gradient_descent, quasi_Newton\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Access-functions-2","page":"Objective","title":"Access functions","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"get_gradient\nget_gradients","category":"page"},{"location":"plans/objective/#Manopt.get_gradient","page":"Objective","title":"Manopt.get_gradient","text":"get_gradient(s::AbstractManoptSolverState)\n\nreturn the (last stored) gradient within an AbstractManoptSolverState. By default this also undecorates the state beforehand\n\n\n\n\n\nget_gradient(amp::AbstractManoptProblem, p)\nget_gradient!(amp::AbstractManoptProblem, X, p)\n\nevaluate the gradient of an AbstractManoptProblem amp at the point p.\n\nThe evaluation is done in place of X for the !-variant.\n\n\n\n\n\nget_gradient(M::AbstractManifold, mgo::AbstractManifoldGradientObjective{T}, p)\nget_gradient!(M::AbstractManifold, X, mgo::AbstractManifoldGradientObjective{T}, p)\n\nevaluate the gradient of an AbstractManifoldGradientObjective{T} mgo at p.\n\nThe evaluation is done in place of X for the !-variant. The T=AllocatingEvaluation problem might still allocate memory within. 
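A minimal sketch of the two evaluation types described here, assuming Manifolds.jl is loaded; the toy cost f(p) = p₁ on the sphere and its gradient are purely illustrative:

```julia
using Manopt, Manifolds

M = Sphere(2)
f(M, p) = p[1]                                          # toy cost, illustration only
grad_f(M, p) = project(M, p, [1.0, 0.0, 0.0])           # allocating form (M, p) -> X
grad_f!(M, X, p) = project!(M, X, p, [1.0, 0.0, 0.0])   # in-place form (M, X, p) -> X

obj_alloc = ManifoldGradientObjective(f, grad_f)
obj_inplace = ManifoldGradientObjective(f, grad_f!; evaluation=InplaceEvaluation())

p = [0.0, 0.0, 1.0]
X = get_gradient(M, obj_alloc, p)   # allocates the result
get_gradient!(M, X, obj_inplace, p) # writes into X
```
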
When the non-mutating variant is called with a T=InplaceEvaluation memory for the result is allocated.\n\nNote that the order of parameters follows the philosophy of Manifolds.jl, namely that even for the mutating variant, the manifold is the first parameter and the (in-place) tangent vector X comes second.\n\n\n\n\n\nget_gradient(agst::AbstractGradientSolverState)\n\nreturn the gradient stored within gradient options. The default returns agst.X.\n\n\n\n\n\nget_gradient(M::AbstractManifold, vgf::VectorGradientFunction, p, i)\nget_gradient(M::AbstractManifold, vgf::VectorGradientFunction, p, i, range)\nget_gradient!(M::AbstractManifold, X, vgf::VectorGradientFunction, p, i)\nget_gradient!(M::AbstractManifold, X, vgf::VectorGradientFunction, p, i, range)\n\nEvaluate the gradients of the vector function vgf on the manifold M at p and the values given in range, specifying the representation of the gradients.\n\nSince i is assumed to be a linear index, you can provide\n\na single integer\na UnitRange to specify a range to be returned like 1:3\na BitVector specifying a selection\nan AbstractVector{<:Integer} to specify indices\n: to return the vector of all gradients\n\n\n\n\n\nget_gradient(TpM, trmo::TrustRegionModelObjective, X)\n\nEvaluate the gradient of the TrustRegionModelObjective\n\noperatornamegrad m(X) = operatornamegrad f(p) + operatornameHess f(p)X\n\n\n\n\n\nget_gradient(TpM, trmo::AdaptiveRagularizationWithCubicsModelObjective, X)\n\nEvaluate the gradient of the AdaptiveRagularizationWithCubicsModelObjective\n\noperatornamegrad m(X) = operatornamegrad f(p) + operatornameHess f(p)X\n + σlVert X rVert X\n\nat X, cf. Eq. (37) in [ABBC20].\n\n\n\n\n\nget_gradient(TpM::TangentSpace, slso::SymmetricLinearSystemObjective, X)\nget_gradient!(TpM::TangentSpace, Y, slso::SymmetricLinearSystemObjective, X)\n\nevaluate the gradient of\n\nf(X) = frac12 lVert mathcal AX + b rVert_p^2qquad X T_pmathcal M\n\nThe gradient is operatornamegrad f(X) = mathcal AX+b. 
This can be computed in-place of Y.\n\n\n\n\n\nget_gradient(M::AbstractManifold, sgo::ManifoldStochasticGradientObjective, p, k)\nget_gradient!(M::AbstractManifold, sgo::ManifoldStochasticGradientObjective, Y, p, k)\n\nEvaluate one of the summand gradients operatornamegradf_k, k∈{1,…,n}, at p (in place of Y).\n\nIf you use a single function for the stochastic gradient, that works in-place, then get_gradient is not available, since the length (or number of elements of the gradient required for allocation) cannot be determined.\n\n\n\n\n\nget_gradient(M::AbstractManifold, sgo::ManifoldStochasticGradientObjective, p)\nget_gradient!(M::AbstractManifold, sgo::ManifoldStochasticGradientObjective, X, p)\n\nEvaluate the complete gradient operatornamegrad f = displaystylesum_i=1^n operatornamegrad f_i(p) at p (in place of X).\n\nIf you use a single function for the stochastic gradient, that works in-place, then get_gradient is not available, since the length (or number of elements of the gradient required for allocation) cannot be determined.\n\n\n\n\n\nget_gradient(M::AbstractManifold, emo::EmbeddedManifoldObjective, p)\nget_gradient!(M::AbstractManifold, X, emo::EmbeddedManifoldObjective, p)\n\nEvaluate the gradient function of an objective defined in the embedding, that is, embed p before calling the gradient function stored in the EmbeddedManifoldObjective.\n\nThe returned gradient is then converted to a Riemannian gradient calling riemannian_gradient.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Manopt.get_gradients","page":"Objective","title":"Manopt.get_gradients","text":"get_gradients(M::AbstractManifold, sgo::ManifoldStochasticGradientObjective, p)\nget_gradients!(M::AbstractManifold, X, sgo::ManifoldStochasticGradientObjective, p)\n\nEvaluate all summand gradients operatornamegradf_i_i=1^n at p (in place of X).\n\nIf you use a single function for the stochastic gradient, that works in-place, then get_gradient is not available, since the length 
(or number of elements of the gradient) cannot be determined.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"and internally","category":"page"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"get_gradient_function","category":"page"},{"location":"plans/objective/#Manopt.get_gradient_function","page":"Objective","title":"Manopt.get_gradient_function","text":"get_gradient_function(amgo::AbstractManifoldGradientObjective, recursive=false)\n\nreturn the function to evaluate (just) the gradient operatornamegrad f(p), where either the gradient function using the decorator or without the decorator is used.\n\nBy default recursive is set to false, since when just passing the gradient function somewhere, one usually still wants, for example, the cached one or the one that counts calls.\n\nDepending on the AbstractEvaluationType E this is a function\n\n(M, p) -> X for the AllocatingEvaluation case\n(M, X, p) -> X for the InplaceEvaluation working in-place of X.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Internal-helpers","page":"Objective","title":"Internal helpers","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"get_gradient_from_Jacobian!","category":"page"},{"location":"plans/objective/#Manopt.get_gradient_from_Jacobian!","page":"Objective","title":"Manopt.get_gradient_from_Jacobian!","text":"get_gradient_from_Jacobian!(\n M::AbstractManifold,\n X,\n nlso::NonlinearLeastSquaresObjective{InplaceEvaluation},\n p,\n Jval=zeros(nlso.num_components, manifold_dimension(M)),\n)\n\nCompute the gradient of the NonlinearLeastSquaresObjective nlso at point p in place of X, with the temporary Jacobian stored in the optional argument Jval.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Subgradient-objective","page":"Objective","title":"Subgradient 
objective","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"ManifoldSubgradientObjective","category":"page"},{"location":"plans/objective/#Manopt.ManifoldSubgradientObjective","page":"Objective","title":"Manopt.ManifoldSubgradientObjective","text":"ManifoldSubgradientObjective{T<:AbstractEvaluationType,C,S} <:AbstractManifoldCostObjective{T, C}\n\nA structure to store information about an objective for a subgradient based optimization problem\n\nFields\n\ncost: the function f to be minimized\nsubgradient: a function returning a subgradient ∂f of f\n\nConstructor\n\nManifoldSubgradientObjective(f, ∂f)\n\nGenerate the ManifoldSubgradientObjective for a subgradient objective, consisting of a (cost) function f(M, p) and a function ∂f(M, p) that returns a not necessarily deterministic element from the subdifferential at p on a manifold M.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Access-functions-3","page":"Objective","title":"Access functions","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"get_subgradient","category":"page"},{"location":"plans/objective/#Manopt.get_subgradient","page":"Objective","title":"Manopt.get_subgradient","text":"X = get_subgradient(M::AbstractManifold, sgo::AbstractManifoldGradientObjective, p)\nget_subgradient!(M::AbstractManifold, X, sgo::AbstractManifoldGradientObjective, p)\n\nEvaluate the subgradient, which for the case of an objective having a gradient, means evaluating the gradient itself.\n\nWhile in general, the result might not be deterministic, for this case it is.\n\n\n\n\n\nget_subgradient(amp::AbstractManoptProblem, p)\nget_subgradient!(amp::AbstractManoptProblem, X, p)\n\nevaluate the subgradient of an AbstractManoptProblem amp at point p.\n\nThe evaluation is done in place of X for the !-variant. 
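A small sketch of a subgradient objective, assuming Manifolds.jl is loaded; the cost is the Riemannian distance to a fixed point, whose (sub)gradient away from the cut locus is -log_p(q)/d(p,q), an illustrative example rather than part of the API:

```julia
using Manopt, Manifolds

M = Sphere(2)
q = [1.0, 0.0, 0.0]
f(M, p) = distance(M, p, q)
function ∂f(M, p)
    d = distance(M, p, q)
    # at p == q the subdifferential contains the zero vector
    return d == 0 ? zero_vector(M, p) : -log(M, p, q) / d
end

sgo = ManifoldSubgradientObjective(f, ∂f)
X = get_subgradient(M, sgo, [0.0, 1.0, 0.0])
```
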
The result might not be deterministic, one element of the subdifferential is returned.\n\n\n\n\n\nX = get_subgradient(M::AbstractManifold, sgo::ManifoldSubgradientObjective, p)\nget_subgradient!(M::AbstractManifold, X, sgo::ManifoldSubgradientObjective, p)\n\nEvaluate the (sub)gradient of a ManifoldSubgradientObjective sgo at the point p.\n\nThe evaluation is done in place of X for the !-variant. The result might not be deterministic, one element of the subdifferential is returned.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Proximal-map-objective","page":"Objective","title":"Proximal map objective","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"ManifoldProximalMapObjective","category":"page"},{"location":"plans/objective/#Manopt.ManifoldProximalMapObjective","page":"Objective","title":"Manopt.ManifoldProximalMapObjective","text":"ManifoldProximalMapObjective{E<:AbstractEvaluationType, TC, TP, V <: Vector{<:Integer}} <: AbstractManifoldCostObjective{E, TC}\n\nspecify a problem for solvers based on the evaluation of proximal maps, which represents proximal maps operatornameprox_λf_i for summands f = f_1 + f_2 + … + f_N of the cost function f.\n\nFields\n\ncost: a function fmathcal Mℝ to minimize\nproxes: proximal maps operatornameprox_λf_imathcal M mathcal M as functions (M, λ, p) -> q or in-place (M, q, λ, p).\nnumber_of_proxes: number of proximal maps per function; when one of the maps is a combined one, such that the proximal map functions return more than one entry per function, you have to adapt this value. 
If not specified, it is set to one prox per function.\n\nConstructor\n\nManifoldProximalMapObjective(f, proxes_f::Union{Tuple,AbstractVector}, number_of_proxes=ones(length(proxes));\n evaluation=AllocatingEvaluation())\n\nGenerate a proximal problem with a tuple or vector of functions, where by default every function computes a single prox of one component of f.\n\nManifoldProximalMapObjective(f, prox_f; evaluation=AllocatingEvaluation())\n\nGenerate a proximal objective for f and its proximal map operatornameprox_λf\n\nSee also\n\ncyclic_proximal_point, get_cost, get_proximal_map\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Access-functions-4","page":"Objective","title":"Access functions","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"get_proximal_map","category":"page"},{"location":"plans/objective/#Manopt.get_proximal_map","page":"Objective","title":"Manopt.get_proximal_map","text":"q = get_proximal_map(M::AbstractManifold, mpo::ManifoldProximalMapObjective, λ, p)\nget_proximal_map!(M::AbstractManifold, q, mpo::ManifoldProximalMapObjective, λ, p)\nq = get_proximal_map(M::AbstractManifold, mpo::ManifoldProximalMapObjective, λ, p, i)\nget_proximal_map!(M::AbstractManifold, q, mpo::ManifoldProximalMapObjective, λ, p, i)\n\nevaluate the (ith) proximal map of the ManifoldProximalMapObjective mpo at the point p with parameter λ>0.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Hessian-objective","page":"Objective","title":"Hessian objective","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"AbstractManifoldHessianObjective\nManifoldHessianObjective","category":"page"},{"location":"plans/objective/#Manopt.AbstractManifoldHessianObjective","page":"Objective","title":"Manopt.AbstractManifoldHessianObjective","text":"AbstractManifoldHessianObjective{T<:AbstractEvaluationType,TC,TG,TH} <: AbstractManifoldGradientObjective{T,TC,TG}\n\nAn 
abstract type for all objectives that provide a (full) Hessian, where T is an AbstractEvaluationType for the gradient and Hessian functions.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Manopt.ManifoldHessianObjective","page":"Objective","title":"Manopt.ManifoldHessianObjective","text":"ManifoldHessianObjective{T<:AbstractEvaluationType,C,G,H,Pre} <: AbstractManifoldHessianObjective{T,C,G,H}\n\nspecify a problem for Hessian based algorithms.\n\nFields\n\ncost: a function fmathcal Mℝ to minimize\ngradient: the gradient operatornamegradfmathcal M mathcal Tmathcal M of the cost function f\nhessian: the Hessian operatornameHessf(x) mathcal T_x mathcal M mathcal T_x mathcal M of the cost function f\npreconditioner: the symmetric, positive definite preconditioner as an approximation of the inverse of the Hessian of f, a map with the same input variables as the hessian to numerically stabilize iterations when the Hessian is ill-conditioned\n\nDepending on the AbstractEvaluationType T the gradient and Hessian can have two forms\n\nas a function (M, p) -> X and (M, p, X) -> Y, resp., an AllocatingEvaluation\nas a function (M, X, p) -> X and (M, Y, p, X) -> Y, resp., an InplaceEvaluation\n\nConstructor\n\nManifoldHessianObjective(f, grad_f, Hess_f, preconditioner = (M, p, X) -> X;\n evaluation=AllocatingEvaluation())\n\nSee also\n\ntruncated_conjugate_gradient_descent, trust_regions\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Access-functions-5","page":"Objective","title":"Access functions","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"get_hessian\nget_preconditioner","category":"page"},{"location":"plans/objective/#Manopt.get_hessian","page":"Objective","title":"Manopt.get_hessian","text":"Y = get_hessian(amp::AbstractManoptProblem{T}, p, X)\nget_hessian!(amp::AbstractManoptProblem{T}, Y, p, X)\n\nevaluate the Hessian of an AbstractManoptProblem amp at p applied to a tangent vector X, 
computing operatornameHessf(q)X, which can also happen in-place of Y.\n\n\n\n\n\nget_hessian(M::AbstractManifold, vgf::VectorHessianFunction, p, X, i)\nget_hessian(M::AbstractManifold, vgf::VectorHessianFunction, p, X, i, range)\nget_hessian!(M::AbstractManifold, Y, vgf::VectorHessianFunction, p, X, i)\nget_hessian!(M::AbstractManifold, Y, vgf::VectorHessianFunction, p, X, i, range)\n\nEvaluate the Hessians of the vector function vgf on the manifold M at p in direction X and the values given in range, specifying the representation of the gradients.\n\nSince i is assumed to be a linear index, you can provide\n\na single integer\na UnitRange to specify a range to be returned like 1:3\na BitVector specifying a selection\nan AbstractVector{<:Integer} to specify indices\n: to return the vector of all gradients\n\n\n\n\n\nget_hessian(TpM, trmo::TrustRegionModelObjective, X)\n\nEvaluate the Hessian of the TrustRegionModelObjective\n\noperatornameHess m(X)Y = operatornameHess f(p)Y\n\n\n\n\n\nget_hessian(TpM::TangentSpace, slso::SymmetricLinearSystemObjective, X, V)\nget_hessian!(TpM::TangentSpace, W, slso::SymmetricLinearSystemObjective, X, V)\n\nevaluate the Hessian of\n\nf(X) = frac12 lVert mathcal AX + b rVert_p^2qquad X T_pmathcal M\n\nThe Hessian is operatornameHess f(X)V = mathcal AV. 
This can be computed in-place of W.\n\n\n\n\n\nget_hessian(M::AbstractManifold, emo::EmbeddedManifoldObjective, p, X)\nget_hessian!(M::AbstractManifold, Y, emo::EmbeddedManifoldObjective, p, X)\n\nEvaluate the Hessian of an objective defined in the embedding, that is, embed p and X before calling the Hessian function stored in the EmbeddedManifoldObjective.\n\nThe returned Hessian is then converted to a Riemannian Hessian calling riemannian_Hessian.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Manopt.get_preconditioner","page":"Objective","title":"Manopt.get_preconditioner","text":"get_preconditioner(amp::AbstractManoptProblem, p, X)\n\nevaluate the symmetric, positive definite preconditioner (approximation of the inverse of the Hessian of the cost function f) of an AbstractManoptProblem amp's objective at the point p applied to a tangent vector X.\n\n\n\n\n\nget_preconditioner(M::AbstractManifold, mho::ManifoldHessianObjective, p, X)\n\nevaluate the symmetric, positive definite preconditioner (approximation of the inverse of the Hessian of the cost function F) of a ManifoldHessianObjective mho at the point p applied to a tangent vector X.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"and internally","category":"page"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"get_hessian_function","category":"page"},{"location":"plans/objective/#Manopt.get_hessian_function","page":"Objective","title":"Manopt.get_hessian_function","text":"get_hessian_function(amgo::AbstractManifoldGradientObjective{E<:AbstractEvaluationType})\n\nreturn the function to evaluate (just) the Hessian operatornameHess f(p). 
Depending on the AbstractEvaluationType E this is a function\n\n(M, p, X) -> Y for the AllocatingEvaluation case\n(M, Y, p, X) -> Y for the InplaceEvaluation, working in-place of Y.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Primal-dual-based-objectives","page":"Objective","title":"Primal-dual based objectives","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"AbstractPrimalDualManifoldObjective\nPrimalDualManifoldObjective\nPrimalDualManifoldSemismoothNewtonObjective","category":"page"},{"location":"plans/objective/#Manopt.AbstractPrimalDualManifoldObjective","page":"Objective","title":"Manopt.AbstractPrimalDualManifoldObjective","text":"AbstractPrimalDualManifoldObjective{E<:AbstractEvaluationType,C,P} <: AbstractManifoldCostObjective{E,C}\n\nA common abstract super type for objectives that consider primal-dual problems.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Manopt.PrimalDualManifoldObjective","page":"Objective","title":"Manopt.PrimalDualManifoldObjective","text":"PrimalDualManifoldObjective{T<:AbstractEvaluationType} <: AbstractPrimalDualManifoldObjective{T}\n\nDescribes an objective for the linearized or exact Chambolle-Pock algorithm, cf. [BHS+21], [CP11]\n\nFields\n\nAll fields with !! 
can either be in-place or allocating functions, which should be set depending on the evaluation= keyword in the constructor and stored in T <: AbstractEvaluationType.\n\ncost: F + G(Λ()) to evaluate interim cost function values\nlinearized_forward_operator!!: linearized operator for the forward operation in the algorithm DΛ\nlinearized_adjoint_operator!!: the adjoint differential (DΛ)^* mathcal N Tmathcal M\nprox_f!!: the proximal map belonging to f\nprox_G_dual!!: the proximal map belonging to g_n^*\nΛ!!: the forward operator (if given) Λ mathcal M mathcal N\n\nUsually, either the linearized operator DΛ or Λ is required.\n\nConstructor\n\nPrimalDualManifoldObjective(cost, prox_f, prox_G_dual, adjoint_linearized_operator;\n linearized_forward_operator::Union{Function,Missing}=missing,\n Λ::Union{Function,Missing}=missing,\n evaluation::AbstractEvaluationType=AllocatingEvaluation()\n)\n\nThe last optional argument can be used to provide the 4 or 5 functions as allocating or mutating (in place computation) ones. Note that the first argument is always the manifold under consideration, the mutated one is the second.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Manopt.PrimalDualManifoldSemismoothNewtonObjective","page":"Objective","title":"Manopt.PrimalDualManifoldSemismoothNewtonObjective","text":"PrimalDualManifoldSemismoothNewtonObjective{E<:AbstractEvaluationType, TC, LO, ALO, PF, DPF, PG, DPG, L} <: AbstractPrimalDualManifoldObjective{E, TC, PF}\n\nDescribes a problem for the primal-dual Riemannian semismooth Newton algorithm. 
[DL21]\n\nFields\n\ncost: F + G(Λ()) to evaluate interim cost function values\nlinearized_operator: the linearization DΛ() of the operator Λ().\nlinearized_adjoint_operator: the adjoint differential (DΛ)^* mathcal N Tmathcal M\nprox_F: the proximal map belonging to F\ndiff_prox_F: the (Clarke Generalized) differential of the proximal maps of F\nprox_G_dual: the proximal map belonging to G^ast_n\ndiff_prox_dual_G: the (Clarke Generalized) differential of the proximal maps of G^ast_n\nΛ: the exact forward operator. This operator is required if Λ(m)=n does not hold.\n\nConstructor\n\nPrimalDualManifoldSemismoothNewtonObjective(cost, prox_F, prox_G_dual, forward_operator, adjoint_linearized_operator, Λ)\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Access-functions-6","page":"Objective","title":"Access functions","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"adjoint_linearized_operator\nforward_operator\nget_differential_dual_prox\nget_differential_primal_prox\nget_dual_prox\nget_primal_prox\nlinearized_forward_operator","category":"page"},{"location":"plans/objective/#Manopt.adjoint_linearized_operator","page":"Objective","title":"Manopt.adjoint_linearized_operator","text":"X = adjoint_linearized_operator(N::AbstractManifold, apdmo::AbstractPrimalDualManifoldObjective, m, n, Y)\nadjoint_linearized_operator(N::AbstractManifold, X, apdmo::AbstractPrimalDualManifoldObjective, m, n, Y)\n\nEvaluate the adjoint of the linearized forward operator of (DΛ(m))^*Y stored within the AbstractPrimalDualManifoldObjective (in place of X). 
Since Y ∈ T_nmathcal N, both m and n=Λ(m) are necessary arguments, mainly because the forward operator Λ might be missing in p.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Manopt.forward_operator","page":"Objective","title":"Manopt.forward_operator","text":"q = forward_operator(M::AbstractManifold, N::AbstractManifold, apdmo::AbstractPrimalDualManifoldObjective, p)\nforward_operator!(M::AbstractManifold, N::AbstractManifold, q, apdmo::AbstractPrimalDualManifoldObjective, p)\n\nEvaluate the forward operator of Λ(x) stored within the TwoManifoldProblem (in place of q).\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Manopt.get_differential_dual_prox","page":"Objective","title":"Manopt.get_differential_dual_prox","text":"η = get_differential_dual_prox(N::AbstractManifold, pdsno::PrimalDualManifoldSemismoothNewtonObjective, n, τ, X, ξ)\nget_differential_dual_prox!(N::AbstractManifold, pdsno::PrimalDualManifoldSemismoothNewtonObjective, η, n, τ, X, ξ)\n\nEvaluate the differential proximal map of G_n^* stored within PrimalDualManifoldSemismoothNewtonObjective\n\nDoperatornameprox_τG_n^*(X)ξ\n\nwhich can also be computed in place of η.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Manopt.get_differential_primal_prox","page":"Objective","title":"Manopt.get_differential_primal_prox","text":"y = get_differential_primal_prox(M::AbstractManifold, pdsno::PrimalDualManifoldSemismoothNewtonObjective, σ, x)\nget_differential_primal_prox!(p::TwoManifoldProblem, y, σ, x)\n\nEvaluate the differential proximal map of F stored within AbstractPrimalDualManifoldObjective\n\nDoperatornameprox_σF(x)X\n\nwhich can also be computed in place of y.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Manopt.get_dual_prox","page":"Objective","title":"Manopt.get_dual_prox","text":"Y = get_dual_prox(N::AbstractManifold, apdmo::AbstractPrimalDualManifoldObjective, n, τ, X)\nget_dual_prox!(N::AbstractManifold, 
apdmo::AbstractPrimalDualManifoldObjective, Y, n, τ, X)\n\nEvaluate the proximal map of G_n^* stored within AbstractPrimalDualManifoldObjective\n\n Y = operatornameprox_τG_n^*(X)\n\nwhich can also be computed in place of Y.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Manopt.get_primal_prox","page":"Objective","title":"Manopt.get_primal_prox","text":"q = get_primal_prox(M::AbstractManifold, apdmo::AbstractPrimalDualManifoldObjective, σ, p)\nget_primal_prox!(M::AbstractManifold, apdmo::AbstractPrimalDualManifoldObjective, q, σ, p)\n\nEvaluate the proximal map of F stored within AbstractPrimalDualManifoldObjective\n\noperatornameprox_σF(x)\n\nwhich can also be computed in place of q.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Manopt.linearized_forward_operator","page":"Objective","title":"Manopt.linearized_forward_operator","text":"Y = linearized_forward_operator(M::AbstractManifold, N::AbstractManifold, apdmo::AbstractPrimalDualManifoldObjective, m, X, n)\nlinearized_forward_operator!(M::AbstractManifold, N::AbstractManifold, Y, apdmo::AbstractPrimalDualManifoldObjective, m, X, n)\n\nEvaluate the linearized operator (differential) DΛ(m)X stored within the AbstractPrimalDualManifoldObjective (in place of Y), where n = Λ(m).\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Constrained-objective","page":"Objective","title":"Constrained objective","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"ConstrainedManifoldObjective","category":"page"},{"location":"plans/objective/#Manopt.ConstrainedManifoldObjective","page":"Objective","title":"Manopt.ConstrainedManifoldObjective","text":"ConstrainedManifoldObjective{T<:AbstractEvaluationType, C<:ConstraintType} <: AbstractManifoldObjective{T}\n\nDescribes the constrained objective\n\nbeginaligned\n operatorname*argmin_p mathcalM f(p)\n textsubject to g_i(p)leq0 quad text for all i=1m\n quad h_j(p)=0 quad text for 
all j=1n\nendaligned\n\nFields\n\nobjective: an AbstractManifoldObjective representing the unconstrained objective, that is containing cost f, the gradient of the cost f and maybe the Hessian.\nequality_constraints: an AbstractManifoldObjective representing the equality constraints\n\nh mathcal M mathbb R^n also possibly containing its gradient and/or Hessian\n\ninequality_constraints: an AbstractManifoldObjective representing the inequality constraints\n\ng mathcal M mathbb R^m also possibly containing its gradient and/or Hessian\n\nConstructors\n\nConstrainedManifoldObjective(M::AbstractManifold, f, grad_f;\n g=nothing,\n grad_g=nothing,\n h=nothing,\n grad_h=nothing,\n hess_f=nothing,\n hess_g=nothing,\n hess_h=nothing,\n equality_constraints=nothing,\n inequality_constraints=nothing,\n evaluation=AllocatingEvaluation(),\n M = nothing,\n p = isnothing(M) ? nothing : rand(M),\n)\n\nGenerate the constrained objective based on all involved single functions f, grad_f, g, grad_g, h, grad_h, and optionally a Hessian for each of these. With equality_constraints and inequality_constraints you have to provide the dimension of the ranges of h and g, respectively. You can also provide a manifold M and a point p to use one evaluation of the constraints to automatically try to determine these sizes.\n\nConstrainedManifoldObjective(M::AbstractManifold, mho::AbstractManifoldObjective;\n equality_constraints = nothing,\n inequality_constraints = nothing\n)\n\nGenerate the constrained objective either with explicit constraints g and h, and their gradients, or in the form where these are already encapsulated in VectorGradientFunctions.\n\nBoth variants require that at least one of the constraints (and its gradient) is provided. 
If any of the three parts provides a Hessian, the corresponding object, that is a ManifoldHessianObjective for f or a VectorHessianFunction for g or h, respectively, is created.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"It might be beneficial to use the adapted problem to specify different ranges for the gradients of the constraints.","category":"page"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"ConstrainedManoptProblem","category":"page"},{"location":"plans/objective/#Manopt.ConstrainedManoptProblem","page":"Objective","title":"Manopt.ConstrainedManoptProblem","text":"ConstrainedManoptProblem{\n TM <: AbstractManifold,\n O <: AbstractManifoldObjective,\n HR<:Union{AbstractPowerRepresentation,Nothing},\n GR<:Union{AbstractPowerRepresentation,Nothing},\n HHR<:Union{AbstractPowerRepresentation,Nothing},\n GHR<:Union{AbstractPowerRepresentation,Nothing},\n} <: AbstractManoptProblem{TM}\n\nA constrained problem might feature different ranges for the (vectors of) gradients of the equality and inequality constraints.\n\nThe ranges are required in a few places to allocate memory and access elements correctly, they work as follows:\n\nAssume the objective is\n\nbeginaligned\n operatorname*argmin_p mathcalM f(p)\n textsubject to g_i(p)leq0 quad text for all i=1m\n quad h_j(p)=0 quad text for all j=1n\nendaligned\n\nthen the gradients can (classically) be considered as vectors of the component gradients, for example bigl(operatornamegrad g_1(p) operatornamegrad g_2(p) operatornamegrad g_m(p) bigr).\n\nIn another interpretation, this can be considered a point on the tangent space at P = (pp) in mathcal M^m, so in the tangent space to the PowerManifold mathcal M^m. 
In the case where this is a NestedPowerRepresentation, this agrees with the interpretation from before, but on power manifolds, more efficient representations exist.\n\nTo then access the elements, the range has to be specified. That is what this problem is for.\n\nConstructor\n\nConstrainedManoptProblem(\n M::AbstractManifold,\n co::ConstrainedManifoldObjective;\n range=NestedPowerRepresentation(),\n gradient_equality_range=range,\n gradient_inequality_range=range,\n hessian_equality_range=range,\n hessian_inequality_range=range\n)\n\nCreates a constrained Manopt problem specifying an AbstractPowerRepresentation for both the gradient_equality_range and the gradient_inequality_range, respectively.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"as well as the helper functions","category":"page"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"AbstractConstrainedFunctor\nAbstractConstrainedSlackFunctor\nLagrangianCost\nLagrangianGradient\nLagrangianHessian","category":"page"},{"location":"plans/objective/#Manopt.AbstractConstrainedFunctor","page":"Objective","title":"Manopt.AbstractConstrainedFunctor","text":"AbstractConstrainedFunctor{T}\n\nA common supertype for functors that model constraint functions.\n\nThis supertype provides access for the fields λ and μ, the dual variables of constraints, of type T.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Manopt.AbstractConstrainedSlackFunctor","page":"Objective","title":"Manopt.AbstractConstrainedSlackFunctor","text":"AbstractConstrainedSlackFunctor{T,R}\n\nA common supertype for functors that model constraint functions with slack.\n\nThis supertype additionally provides access for the fields\n\nμ::T the dual for the inequality constraints\ns::T the slack parameter, and\nβ::R the barrier parameter\n\nwhich is also of type 
T.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Manopt.LagrangianCost","page":"Objective","title":"Manopt.LagrangianCost","text":"LagrangianCost{CO,T} <: AbstractConstrainedFunctor{T}\n\nImplement the Lagrangian of a ConstrainedManifoldObjective co.\n\nmathcal L(p μ λ)\n= f(p) + sum_i=1^m μ_ig_i(p) + sum_j=1^n λ_jh_j(p)\n\nFields\n\nco::CO, μ::T, λ::T as mentioned, where T represents a vector type.\n\nConstructor\n\nLagrangianCost(co, μ, λ)\n\nCreate a functor for the Lagrangian with fixed dual variables.\n\nExample\n\nWhen you directly want to evaluate the Lagrangian mathcal L you can also call\n\nLagrangianCost(co, μ, λ)(M,p)\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Manopt.LagrangianGradient","page":"Objective","title":"Manopt.LagrangianGradient","text":"LagrangianGradient{CO,T}\n\nThe gradient of the Lagrangian of a ConstrainedManifoldObjective co with respect to the variable p. The formula reads\n\noperatornamegrad_p mathcal L(p μ λ)\n= operatornamegrad f(p) + sum_i=1^m μ_i operatornamegrad g_i(p) + sum_j=1^n λ_j operatornamegrad h_j(p)\n\nFields\n\nco::CO, μ::T, λ::T as mentioned, where T represents a vector type.\n\nConstructor\n\nLagrangianGradient(co, μ, λ)\n\nCreate a functor for the Lagrangian with fixed dual variables.\n\nExample\n\nWhen you directly want to evaluate the gradient of the Lagrangian operatornamegrad_p mathcal L you can also call LagrangianGradient(co, μ, λ)(M,p) or LagrangianGradient(co, μ, λ)(M,X,p) for the in-place variant.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Manopt.LagrangianHessian","page":"Objective","title":"Manopt.LagrangianHessian","text":"LagrangianHessian{CO, V, T}\n\nThe Hessian of the Lagrangian of a ConstrainedManifoldObjective co with respect to the variable p. 
The formula reads\n\noperatornameHess_p mathcal L(p μ λ)X\n= operatornameHess f(p)X + sum_i=1^m μ_i operatornameHess g_i(p)X + sum_j=1^n λ_j operatornameHess h_j(p)X\n\nFields\n\nco::CO, μ::T, λ::T as mentioned, where T represents a vector type.\n\nConstructor\n\nLagrangianHessian(co, μ, λ)\n\nCreate a functor for the Lagrangian with fixed dual variables.\n\nExample\n\nWhen you directly want to evaluate the Hessian of the Lagrangian operatornameHess_p mathcal L you can also call LagrangianHessian(co, μ, λ)(M, p, X) or LagrangianHessian(co, μ, λ)(M, Y, p, X) for the in-place variant.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Access-functions-7","page":"Objective","title":"Access functions","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"equality_constraints_length\ninequality_constraints_length\nget_unconstrained_objective\nget_equality_constraint\nget_inequality_constraint\nget_grad_equality_constraint\nget_grad_inequality_constraint\nget_hess_equality_constraint\nget_hess_inequality_constraint\nis_feasible","category":"page"},{"location":"plans/objective/#Manopt.equality_constraints_length","page":"Objective","title":"Manopt.equality_constraints_length","text":"equality_constraints_length(co::ConstrainedManifoldObjective)\n\nReturn the number of equality constraints of a ConstrainedManifoldObjective. This acts transparently through AbstractDecoratedManifoldObjectives\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Manopt.inequality_constraints_length","page":"Objective","title":"Manopt.inequality_constraints_length","text":"inequality_constraints_length(cmo::ConstrainedManifoldObjective)\n\nReturn the number of inequality constraints of a ConstrainedManifoldObjective cmo. 
This acts transparently through AbstractDecoratedManifoldObjectives\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Manopt.get_unconstrained_objective","page":"Objective","title":"Manopt.get_unconstrained_objective","text":"get_unconstrained_objective(co::ConstrainedManifoldObjective)\n\nReturns the internally stored unconstrained AbstractManifoldObjective within the ConstrainedManifoldObjective.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Manopt.get_equality_constraint","page":"Objective","title":"Manopt.get_equality_constraint","text":"get_equality_constraint(amp::AbstractManoptProblem, p, j=:)\nget_equality_constraint(M::AbstractManifold, objective, p, j=:)\n\nEvaluate equality constraints of a ConstrainedManifoldObjective objective at point p and indices j (by default : which corresponds to all indices).\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Manopt.get_inequality_constraint","page":"Objective","title":"Manopt.get_inequality_constraint","text":"get_inequality_constraint(amp::AbstractManoptProblem, p, j=:)\nget_inequality_constraint(M::AbstractManifold, co::ConstrainedManifoldObjective, p, j=:, range=NestedPowerRepresentation())\n\nEvaluate inequality constraints of a ConstrainedManifoldObjective objective at point p and indices j (by default : which corresponds to all indices).\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Manopt.get_grad_equality_constraint","page":"Objective","title":"Manopt.get_grad_equality_constraint","text":"get_grad_equality_constraint(amp::AbstractManoptProblem, p, j)\nget_grad_equality_constraint(M::AbstractManifold, co::ConstrainedManifoldObjective, p, j, range=NestedPowerRepresentation())\nget_grad_equality_constraint!(amp::AbstractManoptProblem, X, p, j)\nget_grad_equality_constraint!(M::AbstractManifold, X, co::ConstrainedManifoldObjective, p, j, range=NestedPowerRepresentation())\n\nEvaluate the gradient or gradients of the equality constraint 
(operatornamegrad h(p))_j or operatornamegrad h_j(p),\n\nSee also the ConstrainedManoptProblem to specify the range of the gradient.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Manopt.get_grad_inequality_constraint","page":"Objective","title":"Manopt.get_grad_inequality_constraint","text":"get_grad_inequality_constraint(amp::AbstractManoptProblem, p, j=:)\nget_grad_inequality_constraint(M::AbstractManifold, co::ConstrainedManifoldObjective, p, j=:, range=NestedPowerRepresentation())\nget_grad_inequality_constraint!(amp::AbstractManoptProblem, X, p, j=:)\nget_grad_inequality_constraint!(M::AbstractManifold, X, co::ConstrainedManifoldObjective, p, j=:, range=NestedPowerRepresentation())\n\nEvaluate the gradient or gradients of the inequality constraint (operatornamegrad g(p))_j or operatornamegrad g_j(p),\n\nSee also the ConstrainedManoptProblem to specify the range of the gradient.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Manopt.get_hess_equality_constraint","page":"Objective","title":"Manopt.get_hess_equality_constraint","text":"get_hess_equality_constraint(amp::AbstractManoptProblem, p, j=:)\nget_hess_equality_constraint(M::AbstractManifold, co::ConstrainedManifoldObjective, p, j, range=NestedPowerRepresentation())\nget_hess_equality_constraint!(amp::AbstractManoptProblem, X, p, j=:)\nget_hess_equality_constraint!(M::AbstractManifold, X, co::ConstrainedManifoldObjective, p, j, range=NestedPowerRepresentation())\n\nEvaluate the Hessian or Hessians of the equality constraint (operatornameHess h(p))_j or operatornameHess h_j(p),\n\nSee also the ConstrainedManoptProblem to specify the range of the Hessian.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Manopt.get_hess_inequality_constraint","page":"Objective","title":"Manopt.get_hess_inequality_constraint","text":"get_hess_inequality_constraint(amp::AbstractManoptProblem, p, X, j=:)\nget_hess_inequality_constraint(M::AbstractManifold, 
co::ConstrainedManifoldObjective, p, j=:, range=NestedPowerRepresentation())\nget_hess_inequality_constraint!(amp::AbstractManoptProblem, Y, p, j=:)\nget_hess_inequality_constraint!(M::AbstractManifold, Y, co::ConstrainedManifoldObjective, p, X, j=:, range=NestedPowerRepresentation())\n\nEvaluate the Hessian or Hessians of the inequality constraint (operatornameHess g(p)X)_j or operatornameHess g_j(p)X,\n\nSee also the ConstrainedManoptProblem to specify the range of the Hessian.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Manopt.is_feasible","page":"Objective","title":"Manopt.is_feasible","text":"is_feasible(M::AbstractManifold, cmo::ConstrainedManifoldObjective, p; kwargs...)\n\nEvaluate whether a point p on M is feasible with respect to the ConstrainedManifoldObjective cmo. That is, for the provided inequality constraints g mathcal M ℝ^m and equality constraints h mathcal M ℝ^n from within cmo, the point p mathcal M is feasible if\n\ng_i(p) 0 text for all i=1mquadtext and quad h_j(p) = 0 text for all j=1n\n\nKeyword arguments\n\ncheck_point::Bool=true: whether to also verify that p mathcal M holds, using is_point\nerror::Symbol=:none: if the point is not feasible, this symbol determines how to report the error.\n:error: throws an error\n:info: displays the error message as an @info\n:none: (default) the function just returns true/false\n:warn: displays the error message as a @warning.\n\nThe keyword error= and all other kwargs... 
are passed on to is_point if the point is verified (see check_point).\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Internal-functions","page":"Objective","title":"Internal functions","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"Manopt.get_feasibility_status","category":"page"},{"location":"plans/objective/#Manopt.get_feasibility_status","page":"Objective","title":"Manopt.get_feasibility_status","text":"get_feasibility_status(\n M::AbstractManifold,\n cmo::ConstrainedManifoldObjective,\n p;\n g = get_inequality_constraints(M, cmo, p),\n h = get_equality_constraints(M, cmo, p),\n)\n\nGenerate a message about the feasibility of p with respect to the ConstrainedManifoldObjective. You can also provide the evaluated vectors for the values of g and h as keyword arguments, in case you had them evaluated before.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Vectorial-objectives","page":"Objective","title":"Vectorial objectives","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"Manopt.AbstractVectorFunction\nManopt.AbstractVectorGradientFunction\nManopt.VectorGradientFunction\nManopt.VectorHessianFunction","category":"page"},{"location":"plans/objective/#Manopt.AbstractVectorFunction","page":"Objective","title":"Manopt.AbstractVectorFunction","text":"AbstractVectorFunction{E, FT} <: Function\n\nRepresent an abstract vectorial function fmathcal M ℝ^n with an AbstractEvaluationType E and an AbstractVectorialType to specify the format f is implemented as.\n\nRepresentations of f\n\nThere are three different representations of f, which might be beneficial in one or the other situation:\n\nthe FunctionVectorialType,\nthe ComponentVectorialType,\nthe CoordinateVectorialType with respect to a specific basis of the tangent space.\n\nFor the ComponentVectorialType imagine that f 
could also be written using its component functions,\n\nf(p) = bigl( f_1(p) f_2(p) ldots f_n(p) bigr)^mathrmT\n\nIn this representation f is given as a vector [f1(M,p), f2(M,p), ..., fn(M,p)] of its component functions. An advantage is that the single components can be evaluated, and from this representation one can even directly read off the number n. A disadvantage might be that one has to implement a lot of individual (component) functions.\n\nFor the FunctionVectorialType f is implemented as a single function f(M, p), that returns an AbstractArray. An advantage here is that this is a single function. A disadvantage might be that, even when only a single component is needed, all of f has to be evaluated.\n\nFor the ComponentVectorialType of f, each of the component functions is a (classical) objective.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Manopt.AbstractVectorGradientFunction","page":"Objective","title":"Manopt.AbstractVectorGradientFunction","text":"AbstractVectorGradientFunction{E, FT, JT} <: AbstractVectorFunction{E, FT}\n\nRepresent an abstract vectorial function fmathcal M ℝ^n that provides a (component wise) gradient. The AbstractEvaluationType E indicates the evaluation type, and the AbstractVectorialTypes FT and JT the formats in which the function and the gradient are provided, see AbstractVectorFunction for an explanation.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Manopt.VectorGradientFunction","page":"Objective","title":"Manopt.VectorGradientFunction","text":"VectorGradientFunction{E, FT, JT, F, J, I} <: AbstractVectorGradientFunction{E, FT, JT}\n\nRepresent a function fmathcal M ℝ^n including its first derivative, either as a vector of gradients or a Jacobian.\n\nEach component f_i hence has a gradient operatornamegrad f_i(p) T_pmathcal M. 
Putting these gradients into a vector the same way as the functions, yields a ComponentVectorialType\n\noperatornamegrad f(p) = Bigl( operatornamegrad f_1(p) operatornamegrad f_2(p) operatornamegrad f_n(p) Bigr)^mathrmT\n (T_pmathcal M)^n\n\nAn advantage here is that again the single components can be evaluated individually.\n\nFields\n\nvalue!!: the cost function f, which can take different formats\ncost_type: indicating / storing data for the type of f\njacobian!!: the Jacobian of f\njacobian_type: indicating / storing data for the type of J_f\nparameters: the number n, that is the size of the vector f returns.\n\nConstructor\n\nVectorGradientFunction(f, Jf, range_dimension;\n evaluation::AbstractEvaluationType=AllocatingEvaluation(),\n function_type::AbstractVectorialType=FunctionVectorialType(),\n jacobian_type::AbstractVectorialType=FunctionVectorialType(),\n)\n\nCreate a VectorGradientFunction of f and its Jacobian (vector of gradients) Jf, where f maps into the Euclidean space of dimension range_dimension. Their types are specified by the function_type and jacobian_type, respectively. 
The Jacobian can further be given as an allocating variant or an in-place variant, specified by the evaluation= keyword.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Manopt.VectorHessianFunction","page":"Objective","title":"Manopt.VectorHessianFunction","text":"VectorHessianFunction{E, FT, JT, HT, F, J, H, I} <: AbstractVectorGradientFunction{E, FT, JT}\n\nRepresent a function fmathcal M ℝ^n including its first derivative, either as a vector of gradients or a Jacobian, and its Hessian, as a vector of Hessians of the component functions.\n\nBoth the Jacobian and the Hessian can map into either a sequence of tangent spaces or a single tangent space of the power manifold of length n.\n\nFields\n\nvalue!!: the cost function f, which can take different formats\ncost_type: indicating / storing data for the type of f\njacobian!!: the Jacobian of f\njacobian_type: indicating / storing data for the type of J_f\nhessians!!: the Hessians of f (in a component wise sense)\nhessian_type: indicating / storing data for the type of H_f\nparameters: the number n, that is the size of the vector f returns.\n\nConstructor\n\nVectorHessianFunction(f, Jf, Hess_f, range_dimension;\n evaluation::AbstractEvaluationType=AllocatingEvaluation(),\n function_type::AbstractVectorialType=FunctionVectorialType(),\n jacobian_type::AbstractVectorialType=FunctionVectorialType(),\n hessian_type::AbstractVectorialType=FunctionVectorialType(),\n)\n\nCreate a VectorHessianFunction of f and its Jacobian (vector of gradients) Jf and (vector of) Hessians, where f maps into the Euclidean space of dimension range_dimension. Their types are specified by the function_type, jacobian_type, and hessian_type, respectively. 
The Jacobian and Hessian can further be given as an allocating variant or an in-place variant, specified by the evaluation= keyword.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"Manopt.AbstractVectorialType\nManopt.CoordinateVectorialType\nManopt.ComponentVectorialType\nManopt.FunctionVectorialType","category":"page"},{"location":"plans/objective/#Manopt.AbstractVectorialType","page":"Objective","title":"Manopt.AbstractVectorialType","text":"AbstractVectorialType\n\nAn abstract type for different representations of a vectorial function f mathcal M mathbb R^m and its (component-wise) gradient/Jacobian\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Manopt.CoordinateVectorialType","page":"Objective","title":"Manopt.CoordinateVectorialType","text":"CoordinateVectorialType{B<:AbstractBasis} <: AbstractVectorialType\n\nA type to indicate that the gradient of the constraints is implemented as a Jacobian matrix with respect to a certain basis, that is if the constraints are given as g mathcal M ℝ^m with respect to a basis mathcal B of T_pmathcal M, at p mathcal M. This can be written as J_g(p) = (c_1^mathrmTc_m^mathrmT)^mathrmT in ℝ^md, that is, every row c_i of this matrix is a set of coefficients such that get_coefficients(M, p, c, B) is the tangent vector operatornamegrad g_i(p), i=1m.\n\nFields\n\nbasis an AbstractBasis to indicate the default representation.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Manopt.ComponentVectorialType","page":"Objective","title":"Manopt.ComponentVectorialType","text":"ComponentVectorialType <: AbstractVectorialType\n\nA type to indicate that constraints are implemented as component functions, for example g_i(p) ℝ^m or operatornamegrad g_i(p) T_pmathcal M, 
i=1m.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Manopt.FunctionVectorialType","page":"Objective","title":"Manopt.FunctionVectorialType","text":"FunctionVectorialType <: AbstractVectorialType\n\nA type to indicate that constraints are implemented as one whole function, for example g(p) ℝ^m or operatornamegrad g(p) (T_pmathcal M)^m.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Access-functions-8","page":"Objective","title":"Access functions","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"Manopt.get_value\nManopt.get_value_function\nBase.length(::VectorGradientFunction)","category":"page"},{"location":"plans/objective/#Manopt.get_value","page":"Objective","title":"Manopt.get_value","text":"get_value(M::AbstractManifold, vgf::AbstractVectorFunction, p[, i=:])\n\nEvaluate the vector function VectorGradientFunction vgf at p. The range can be used to specify a potential range, but is currently only present for consistency.\n\nThe index i can be a linear index; you can provide\n\na single integer\na UnitRange to specify a range to be returned like 1:3\na BitVector specifying a selection\nan AbstractVector{<:Integer} to specify indices\n: to return the vector of all values, which is also the default\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Manopt.get_value_function","page":"Objective","title":"Manopt.get_value_function","text":"get_value_function(vgf::VectorGradientFunction, recursive=false)\n\nReturn the internally stored function computing get_value.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Base.length-Tuple{VectorGradientFunction}","page":"Objective","title":"Base.length","text":"length(vgf::AbstractVectorFunction)\n\nReturn the length of the vector the function f mathcal M ℝ^n maps into, that is the number 
n.\n\n\n\n\n\n","category":"method"},{"location":"plans/objective/#Internal-functions-2","page":"Objective","title":"Internal functions","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"Manopt._to_iterable_indices","category":"page"},{"location":"plans/objective/#Manopt._to_iterable_indices","page":"Objective","title":"Manopt._to_iterable_indices","text":"_to_iterable_indices(A::AbstractVector, i)\n\nConvert index i (integer, colon, vector of indices, etc.) for array A into an iterable structure of indices.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Subproblem-objective","page":"Objective","title":"Subproblem objective","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"This objective can be used when the objective of a sub problem solver still needs access to the (outer/main) objective.","category":"page"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"AbstractManifoldSubObjective","category":"page"},{"location":"plans/objective/#Manopt.AbstractManifoldSubObjective","page":"Objective","title":"Manopt.AbstractManifoldSubObjective","text":"AbstractManifoldSubObjective{O<:AbstractManifoldObjective} <: AbstractManifoldObjective\n\nAn abstract type for objectives of sub problems within a solver that additionally store the original objective internally to generate generic objectives for sub solvers.\n\n\n\n\n\n","category":"type"},{"location":"plans/objective/#Access-functions-9","page":"Objective","title":"Access 
functions","text":"","category":"section"},{"location":"plans/objective/","page":"Objective","title":"Objective","text":"Manopt.get_objective_cost\nManopt.get_objective_gradient\nManopt.get_objective_hessian\nManopt.get_objective_preconditioner","category":"page"},{"location":"plans/objective/#Manopt.get_objective_cost","page":"Objective","title":"Manopt.get_objective_cost","text":"get_objective_cost(M, amso::AbstractManifoldSubObjective, p)\n\nEvaluate the cost of the (original) objective stored within the sub objective.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Manopt.get_objective_gradient","page":"Objective","title":"Manopt.get_objective_gradient","text":"X = get_objective_gradient(M, amso::AbstractManifoldSubObjective, p)\nget_objective_gradient!(M, X, amso::AbstractManifoldSubObjective, p)\n\nEvaluate the gradient of the (original) objective stored within the sub objective amso.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Manopt.get_objective_hessian","page":"Objective","title":"Manopt.get_objective_hessian","text":"Y = get_objective_hessian(M, amso::AbstractManifoldSubObjective, p, X)\nget_objective_hessian!(M, Y, amso::AbstractManifoldSubObjective, p, X)\n\nEvaluate the Hessian of the (original) objective stored within the sub objective amso.\n\n\n\n\n\n","category":"function"},{"location":"plans/objective/#Manopt.get_objective_preconditioner","page":"Objective","title":"Manopt.get_objective_preconditioner","text":"Y = get_objective_preconditioner(M, amso::AbstractManifoldSubObjective, p, X)\nget_objective_preconditioner!(M, Y, amso::AbstractManifoldSubObjective, p, X)\n\nEvaluate the preconditioner of the (original) objective stored within the sub objective amso.\n\n\n\n\n\n","category":"function"},{"location":"plans/stopping_criteria/#sec-stopping-criteria","page":"Stopping Criteria","title":"Stopping criteria","text":"","category":"section"},{"location":"plans/stopping_criteria/","page":"Stopping 
Criteria","title":"Stopping Criteria","text":"Stopping criteria are implemented as a functor and inherit from the base type","category":"page"},{"location":"plans/stopping_criteria/","page":"Stopping Criteria","title":"Stopping Criteria","text":"StoppingCriterion","category":"page"},{"location":"plans/stopping_criteria/#Manopt.StoppingCriterion","page":"Stopping Criteria","title":"Manopt.StoppingCriterion","text":"StoppingCriterion\n\nAn abstract type for the functors representing stopping criteria, so they are callable structures. The naming scheme follows functions, see for example StopAfterIteration.\n\nEvery StoppingCriterion has to provide a constructor and its function has to have the interface (p,o,i) where an AbstractManoptProblem as well as AbstractManoptSolverState and the current number of iterations are the arguments and returns a boolean whether to stop or not.\n\nBy default each StoppingCriterion should provide a field reason to provide details when a criterion is met (and that is empty otherwise).\n\n\n\n\n\n","category":"type"},{"location":"plans/stopping_criteria/","page":"Stopping Criteria","title":"Stopping Criteria","text":"They can also be grouped, which is summarized in the type of a set of criteria","category":"page"},{"location":"plans/stopping_criteria/","page":"Stopping Criteria","title":"Stopping Criteria","text":"StoppingCriterionSet","category":"page"},{"location":"plans/stopping_criteria/#Manopt.StoppingCriterionSet","page":"Stopping Criteria","title":"Manopt.StoppingCriterionSet","text":"StoppingCriterionGroup <: StoppingCriterion\n\nAn abstract type for a Stopping Criterion that itself consists of a set of Stopping criteria. In total it acts as a stopping criterion itself. 
Examples are StopWhenAny and StopWhenAll that can be used to combine stopping criteria.\n\n\n\n\n\n","category":"type"},{"location":"plans/stopping_criteria/","page":"Stopping Criteria","title":"Stopping Criteria","text":"The stopping criterion s might have certain internal values/fields it uses to verify against. This is done when calling them as a function s(amp::AbstractManoptProblem, ams::AbstractManoptSolverState), where the AbstractManoptProblem and the AbstractManoptSolverState together represent the current state of the solver. The functor returns either false when the stopping criterion is not fulfilled or true otherwise. One field all criteria should have is s.at_iteration, to indicate at which iteration the stopping criterion (last) indicated to stop. 0 refers to an indication before starting the algorithm, while any negative number means the stopping criterion is not (yet) fulfilled. To access a string giving the reason of stopping, see get_reason.","category":"page"},{"location":"plans/stopping_criteria/#Generic-stopping-criteria","page":"Stopping Criteria","title":"Generic stopping criteria","text":"","category":"section"},{"location":"plans/stopping_criteria/","page":"Stopping Criteria","title":"Stopping Criteria","text":"The following generic stopping criteria are available. 
Some require that, for example, the corresponding AbstractManoptSolverState has a field gradient when the criterion should access that.","category":"page"},{"location":"plans/stopping_criteria/","page":"Stopping Criteria","title":"Stopping Criteria","text":"Further stopping criteria might be available for individual solvers.","category":"page"},{"location":"plans/stopping_criteria/","page":"Stopping Criteria","title":"Stopping Criteria","text":"Modules = [Manopt]\nPages = [\"plans/stopping_criterion.jl\"]\nOrder = [:type]\nFilter = t -> t != StoppingCriterion && t != StoppingCriterionSet","category":"page"},{"location":"plans/stopping_criteria/#Manopt.StopAfter","page":"Stopping Criteria","title":"Manopt.StopAfter","text":"StopAfter <: StoppingCriterion\n\nstore a threshold for the complete runtime, after which to stop. It uses time_ns() to measure the time and you provide a Period as a time limit, for example Minute(15).\n\nFields\n\nthreshold stores the Period after which to stop\nstart stores the starting time when the algorithm is started, that is a call with i=0.\ntime stores the elapsed time\nat_iteration indicates at which iteration (including i=0) the stopping criterion was fulfilled and is -1 while it is not fulfilled.\n\nConstructor\n\nStopAfter(t)\n\ninitialize the stopping criterion to a Period t to stop after.\n\n\n\n\n\n","category":"type"},{"location":"plans/stopping_criteria/#Manopt.StopAfterIteration","page":"Stopping Criteria","title":"Manopt.StopAfterIteration","text":"StopAfterIteration <: StoppingCriterion\n\nA functor for a stopping criterion to stop after a maximal number of iterations.\n\nFields\n\nmax_iterations stores the maximal iteration number to stop at\nat_iteration indicates at which iteration (including i=0) the stopping criterion was fulfilled and is -1 while it is not fulfilled.\n\nConstructor\n\nStopAfterIteration(maxIter)\n\ninitialize the functor to indicate to stop after maxIter 
iterations.\n\n\n\n\n\n","category":"type"},{"location":"plans/stopping_criteria/#Manopt.StopWhenAll","page":"Stopping Criteria","title":"Manopt.StopWhenAll","text":"StopWhenAll <: StoppingCriterionSet\n\nstore an array of StoppingCriterion elements and indicate to stop when all of them indicate to stop. The reason is given by the concatenation of all reasons.\n\nConstructor\n\nStopWhenAll(c::NTuple{N,StoppingCriterion} where N)\nStopWhenAll(c::StoppingCriterion,...)\n\n\n\n\n\n","category":"type"},{"location":"plans/stopping_criteria/#Manopt.StopWhenAny","page":"Stopping Criteria","title":"Manopt.StopWhenAny","text":"StopWhenAny <: StoppingCriterionSet\n\nstore an array of StoppingCriterion elements and indicate to stop when any single one of them indicates to stop. The reason is given by the concatenation of all reasons (assuming that all non-indicating return \"\").\n\nConstructor\n\nStopWhenAny(c::NTuple{N,StoppingCriterion} where N)\nStopWhenAny(c::StoppingCriterion...)\n\n\n\n\n\n","category":"type"},{"location":"plans/stopping_criteria/#Manopt.StopWhenChangeLess","page":"Stopping Criteria","title":"Manopt.StopWhenChangeLess","text":"StopWhenChangeLess <: StoppingCriterion\n\nstores a threshold when to stop looking at the norm of the change of the optimization variable from within an AbstractManoptSolverState s. That is, by accessing get_iterate(s) and comparing successive iterates. For the storage a StoreStateAction is used.\n\nFields\n\nat_iteration::Int: an integer indicating at which iteration the stopping criterion last indicated to stop, which might also be before the solver started (0). 
Any negative value indicates that this was not yet the case;\nlast_change::Real: the last change recorded in this stopping criterion\ninverse_retraction_method::AbstractInverseRetractionMethod: an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses. It can be used to approximate the distance by this inverse retraction and a norm on the tangent space, for example if neither the distance nor the logarithmic map are available on M\nstorage::StoreStateAction: a storage to access the previous iterate\nthreshold: the threshold for the change to check (run under to stop)\nouter_norm: if M is a manifold with components, this can be used to specify the norm that is used to compute the overall distance based on the element-wise distance. You can deactivate this by setting this value to missing.\n\nExample\n\nOn an AbstractPowerManifold like mathcal M = mathcal N^n any point p = (p_1p_n) mathcal M is a vector of length n of points p_i mathcal N. Then, denoting the outer_norm by r, the distance of two points pq mathcal M is given by\n\n\\mathrm{d}(p,q) = \\Bigl( \\sum_{k=1}^n \\mathrm{d}(p_k,q_k)^r \\Bigr)^{\\frac{1}{r}},\n\nwhere the sum turns into a maximum for the case r=. 
The outer_norm has no effect on manifolds that do not consist of components.\n\nConstructor\n\nStopWhenChangeLess(\n M::AbstractManifold,\n threshold::Float64;\n storage::StoreStateAction=StoreStateAction([:Iterate]),\n inverse_retraction_method::IRT=default_inverse_retraction_method(M),\n outer_norm::Union{Missing,Real}=missing\n)\n\ninitialize the stopping criterion to a threshold ε using the StoreStateAction a, which is initialized to just store :Iterate by default. You can also provide an inverse_retraction_method for the distance or a manifold to use its default inverse retraction.\n\n\n\n\n\n","category":"type"},{"location":"plans/stopping_criteria/#Manopt.StopWhenCostLess","page":"Stopping Criteria","title":"Manopt.StopWhenCostLess","text":"StopWhenCostLess <: StoppingCriterion\n\nstore a threshold when to stop looking at the cost function of the optimization problem from within an AbstractManoptProblem, i.e. get_cost(p, get_iterate(o)).\n\nConstructor\n\nStopWhenCostLess(ε)\n\ninitialize the stopping criterion to a threshold ε.\n\n\n\n\n\n","category":"type"},{"location":"plans/stopping_criteria/#Manopt.StopWhenCostNaN","page":"Stopping Criteria","title":"Manopt.StopWhenCostNaN","text":"StopWhenCostNaN <: StoppingCriterion\n\nindicate to stop when the cost function of the optimization problem from within an AbstractManoptProblem, i.e. get_cost(p, get_iterate(o)), becomes NaN.\n\nConstructor\n\nStopWhenCostNaN()\n\ninitialize the stopping criterion to stop when the cost is NaN.\n\n\n\n\n\n","category":"type"},{"location":"plans/stopping_criteria/#Manopt.StopWhenEntryChangeLess","page":"Stopping Criteria","title":"Manopt.StopWhenEntryChangeLess","text":"StopWhenEntryChangeLess\n\nEvaluate whether a certain field's change is less than a certain threshold.\n\nFields\n\nfield: a symbol addressing the corresponding field in a certain subtype of AbstractManoptSolverState to track\ndistance: a function (problem, state, v1, v2) -> R that computes 
the distance between two possible values of the field\nstorage: a StoreStateAction to store the previous value of the field\nthreshold: the threshold to indicate to stop when the distance is below this value\n\nInternal fields\n\nat_iteration: store the iteration at which the stop indication happened\n\nstores a threshold when to stop looking at the norm of the change of the optimization variable from within an AbstractManoptSolverState, i.e. get_iterate(o). For the storage a StoreStateAction is used.\n\nConstructor\n\nStopWhenEntryChangeLess(\n field::Symbol,\n distance,\n threshold;\n storage::StoreStateAction=StoreStateAction([field]),\n)\n\n\n\n\n\n","category":"type"},{"location":"plans/stopping_criteria/#Manopt.StopWhenGradientChangeLess","page":"Stopping Criteria","title":"Manopt.StopWhenGradientChangeLess","text":"StopWhenGradientChangeLess <: StoppingCriterion\n\nA stopping criterion based on the change of the gradient.\n\nFields\n\nat_iteration::Int: an integer indicating at which iteration the stopping criterion last indicated to stop, which might also be before the solver started (0). Any negative value indicates that this was not yet the case;\nlast_change::Real: the last change recorded in this stopping criterion\nvector_transport_method::AbstractVectorTransportMethod: a vector transport mathcal T_ to use, see the section on vector transports\nstorage::StoreStateAction: a storage to access the previous iterate\nthreshold: the threshold for the change to check (run under to stop)\nouter_norm: if M is a manifold with components, this can be used to specify the norm that is used to compute the overall distance based on the element-wise distance. You can deactivate this by setting this value to missing.\n\nExample\n\nOn an AbstractPowerManifold like mathcal M = mathcal N^n any point p = (p_1p_n) mathcal M is a vector of length n of points p_i mathcal N. 
Then, denoting the outer_norm by r, the norm of the difference of tangent vectors like the last and current gradients XY mathcal M is given by\n\n\\lVert X-Y \\rVert_{p} = \\Bigl( \\sum_{k=1}^n \\lVert X_k-Y_k \\rVert_{p_k}^r \\Bigr)^{\\frac{1}{r}},\n\nwhere the sum turns into a maximum for the case r=. The outer_norm has no effect on manifolds that do not consist of components.\n\nConstructor\n\nStopWhenGradientChangeLess(\n M::AbstractManifold,\n ε::Float64;\n storage::StoreStateAction=StoreStateAction([:Iterate]),\n vector_transport_method::IRT=default_vector_transport_method(M),\n outer_norm::N=missing\n)\n\nCreate a stopping criterion with threshold ε for the change of the gradient, that is, this criterion indicates to stop when the (norm of the) change of get_gradient is less than ε, where vector_transport_method denotes the vector transport mathcal T used.\n\n\n\n\n\n","category":"type"},{"location":"plans/stopping_criteria/#Manopt.StopWhenGradientNormLess","page":"Stopping Criteria","title":"Manopt.StopWhenGradientNormLess","text":"StopWhenGradientNormLess <: StoppingCriterion\n\nA stopping criterion based on the current gradient norm.\n\nFields\n\nnorm: a function (M::AbstractManifold, p, X) -> ℝ that computes a norm of the gradient X in the tangent space at p on M. For manifolds with components, provide (M::AbstractManifold, p, X, r) -> ℝ.\nthreshold: the threshold to indicate to stop when the distance is below this value\nouter_norm: if M is a manifold with components, this can be used to specify the norm that is used to compute the overall distance based on the element-wise distance.\n\nInternal fields\n\nlast_change: store the last change\nat_iteration: store the iteration at which the stop indication happened\n\nExample\n\nOn an AbstractPowerManifold like mathcal M = mathcal N^n any point p = (p_1p_n) mathcal M is a vector of length n of points p_i mathcal N. 
Then, denoting the outer_norm by r, the norm of a tangent vector like the current gradient X mathcal M is given by\n\n\\lVert X \\rVert_{p} = \\Bigl( \\sum_{k=1}^n \\lVert X_k \\rVert_{p_k}^r \\Bigr)^{\\frac{1}{r}},\n\nwhere the sum turns into a maximum for the case r=. The outer_norm has no effect on manifolds that do not consist of components.\n\nIf you pass in your individual norm, this can be deactivated on such manifolds by passing missing to outer_norm.\n\nConstructor\n\nStopWhenGradientNormLess(ε; norm=ManifoldsBase.norm, outer_norm=missing)\n\nCreate a stopping criterion with threshold ε for the gradient, that is, this criterion indicates to stop when get_gradient returns a gradient vector of norm less than ε, where the norm to use can be specified in the norm= keyword.\n\n\n\n\n\n","category":"type"},{"location":"plans/stopping_criteria/#Manopt.StopWhenIterateNaN","page":"Stopping Criteria","title":"Manopt.StopWhenIterateNaN","text":"StopWhenIterateNaN <: StoppingCriterion\n\nindicate to stop when the iterate of the optimization problem from within an AbstractManoptProblem, i.e. get_iterate(o), contains a NaN.\n\nConstructor\n\nStopWhenIterateNaN()\n\ninitialize the stopping criterion to stop when the iterate contains a NaN.\n\n\n\n\n\n","category":"type"},{"location":"plans/stopping_criteria/#Manopt.StopWhenSmallerOrEqual","page":"Stopping Criteria","title":"Manopt.StopWhenSmallerOrEqual","text":"StopWhenSmallerOrEqual <: StoppingCriterion\n\nA functor for a stopping criterion, where the algorithm is stopped when a variable is smaller than or equal to its minimum value.\n\nFields\n\nvalue stores the variable which has to fall under a threshold for the algorithm to stop\nminValue stores the threshold where, if the value is smaller or equal to this threshold, the algorithm stops\n\nConstructor\n\nStopWhenSmallerOrEqual(value, minValue)\n\ninitialize the functor to indicate to stop after value is smaller than or equal to 
minValue.\n\n\n\n\n\n","category":"type"},{"location":"plans/stopping_criteria/#Manopt.StopWhenStepsizeLess","page":"Stopping Criteria","title":"Manopt.StopWhenStepsizeLess","text":"StopWhenStepsizeLess <: StoppingCriterion\n\nstores a threshold when to stop looking at the last step size determined or found during the last iteration from within an AbstractManoptSolverState.\n\nConstructor\n\nStopWhenStepsizeLess(ε)\n\ninitialize the stopping criterion to a threshold ε.\n\n\n\n\n\n","category":"type"},{"location":"plans/stopping_criteria/#Manopt.StopWhenSubgradientNormLess","page":"Stopping Criteria","title":"Manopt.StopWhenSubgradientNormLess","text":"StopWhenSubgradientNormLess <: StoppingCriterion\n\nA stopping criterion based on the current subgradient norm.\n\nConstructor\n\nStopWhenSubgradientNormLess(ε::Float64)\n\nCreate a stopping criterion with threshold ε for the subgradient, that is, this criterion indicates to stop when get_subgradient returns a subgradient vector of norm less than ε.\n\n\n\n\n\n","category":"type"},{"location":"plans/stopping_criteria/#Functions-for-stopping-criteria","page":"Stopping Criteria","title":"Functions for stopping criteria","text":"","category":"section"},{"location":"plans/stopping_criteria/","page":"Stopping Criteria","title":"Stopping Criteria","text":"There are a few functions to update, combine, and modify stopping criteria, especially to update internal values even for stopping criteria already being used within an AbstractManoptSolverState structure.","category":"page"},{"location":"plans/stopping_criteria/","page":"Stopping Criteria","title":"Stopping Criteria","text":"Modules = [Manopt]\nPages = [\"plans/stopping_criterion.jl\"]\nOrder = [:function]","category":"page"},{"location":"plans/stopping_criteria/#Base.:&-Union{Tuple{T}, Tuple{S}, Tuple{S, T}} where {S<:StoppingCriterion, T<:StoppingCriterion}","page":"Stopping Criteria","title":"Base.:&","text":"&(s1,s2)\ns1 & s2\n\nCombine two StoppingCriterion within a 
StopWhenAll. If either s1 (or s2) is already a StopWhenAll, then s2 (or s1) is appended to the list of StoppingCriterion within s1 (or s2).\n\nExample\n\na = StopAfterIteration(200) & StopWhenChangeLess(M, 1e-6)\nb = a & StopWhenGradientNormLess(1e-6)\n\nIs the same as\n\na = StopWhenAll(StopAfterIteration(200), StopWhenChangeLess(M, 1e-6))\nb = StopWhenAll(StopAfterIteration(200), StopWhenChangeLess(M, 1e-6), StopWhenGradientNormLess(1e-6))\n\n\n\n\n\n","category":"method"},{"location":"plans/stopping_criteria/#Base.:|-Union{Tuple{T}, Tuple{S}, Tuple{S, T}} where {S<:StoppingCriterion, T<:StoppingCriterion}","page":"Stopping Criteria","title":"Base.:|","text":"|(s1,s2)\ns1 | s2\n\nCombine two StoppingCriterion within a StopWhenAny. If either s1 (or s2) is already a StopWhenAny, then s2 (or s1) is appended to the list of StoppingCriterion within s1 (or s2).\n\nExample\n\na = StopAfterIteration(200) | StopWhenChangeLess(M, 1e-6)\nb = a | StopWhenGradientNormLess(1e-6)\n\nIs the same as\n\na = StopWhenAny(StopAfterIteration(200), StopWhenChangeLess(M, 1e-6))\nb = StopWhenAny(StopAfterIteration(200), StopWhenChangeLess(M, 1e-6), StopWhenGradientNormLess(1e-6))\n\n\n\n\n\n","category":"method"},{"location":"plans/stopping_criteria/#Manopt.get_active_stopping_criteria-Tuple{sCS} where sCS<:StoppingCriterionSet","page":"Stopping Criteria","title":"Manopt.get_active_stopping_criteria","text":"get_active_stopping_criteria(c)\n\nreturns all active stopping criteria, if any, that are within a StoppingCriterion c and indicate a stop, that is, their reason is nonempty. To be precise, for a simple stopping criterion this returns either an empty array if no stop is indicated or the stopping criterion as the only element of an array. 
For a StoppingCriterionSet all internal (even nested) criteria that indicate to stop are returned.\n\n\n\n\n\n","category":"method"},{"location":"plans/stopping_criteria/#Manopt.get_reason-Tuple{AbstractManoptSolverState}","page":"Stopping Criteria","title":"Manopt.get_reason","text":"get_reason(s::AbstractManoptSolverState)\n\nreturn the current reason stored within the StoppingCriterion from within the AbstractManoptSolverState. This reason is empty (\"\") if the criterion has never been met.\n\n\n\n\n\n","category":"method"},{"location":"plans/stopping_criteria/#Manopt.get_stopping_criteria-Tuple{S} where S<:StoppingCriterionSet","page":"Stopping Criteria","title":"Manopt.get_stopping_criteria","text":"get_stopping_criteria(c)\n\nreturn the array of internally stored StoppingCriterions for a StoppingCriterionSet c.\n\n\n\n\n\n","category":"method"},{"location":"plans/stopping_criteria/#Manopt.indicates_convergence-Tuple{StoppingCriterion}","page":"Stopping Criteria","title":"Manopt.indicates_convergence","text":"indicates_convergence(c::StoppingCriterion)\n\nReturn whether (true) or not (false) a StoppingCriterion always means that, when it indicates to stop, the solver has converged to a minimizer or critical point.\n\nNote that this is independent of the actual state of the stopping criterion, that is whether any of them indicate to stop; it is a purely type-based, static decision.\n\nExamples\n\nWith s1=StopAfterIteration(20) and s2=StopWhenGradientNormLess(1e-7) the indicator yields\n\nindicates_convergence(s1) is false\nindicates_convergence(s2) is true\nindicates_convergence(s1 | s2) is false, since this might also stop after 20 iterations\nindicates_convergence(s1 & s2) is true, since s2 is fulfilled if this stops.\n\n\n\n\n\n","category":"method"},{"location":"plans/stopping_criteria/#Manopt.set_parameter!-Tuple{StopAfter, Val{:MaxTime}, Dates.Period}","page":"Stopping Criteria","title":"Manopt.set_parameter!","text":"set_parameter!(c::StopAfter, :MaxTime, 
v::Period)\n\nUpdate the time period after which an algorithm shall stop.\n\n\n\n\n\n","category":"method"},{"location":"plans/stopping_criteria/#Manopt.set_parameter!-Tuple{StopAfterIteration, Val{:MaxIteration}, Int64}","page":"Stopping Criteria","title":"Manopt.set_parameter!","text":"set_parameter!(c::StopAfterIteration, :MaxIteration, v::Int)\n\nUpdate the number of iterations after which the algorithm should stop.\n\n\n\n\n\n","category":"method"},{"location":"plans/stopping_criteria/#Manopt.set_parameter!-Tuple{StopWhenChangeLess, Val{:MinIterateChange}, Any}","page":"Stopping Criteria","title":"Manopt.set_parameter!","text":"set_parameter!(c::StopWhenChangeLess, :MinIterateChange, v::Int)\n\nUpdate the minimal change below which an algorithm shall stop.\n\n\n\n\n\n","category":"method"},{"location":"plans/stopping_criteria/#Manopt.set_parameter!-Tuple{StopWhenCostLess, Val{:MinCost}, Any}","page":"Stopping Criteria","title":"Manopt.set_parameter!","text":"set_parameter!(c::StopWhenCostLess, :MinCost, v)\n\nUpdate the minimal cost below which the algorithm shall stop.\n\n\n\n\n\n","category":"method"},{"location":"plans/stopping_criteria/#Manopt.set_parameter!-Tuple{StopWhenEntryChangeLess, Val{:Threshold}, Any}","page":"Stopping Criteria","title":"Manopt.set_parameter!","text":"set_parameter!(c::StopWhenEntryChangeLess, :Threshold, v)\n\nUpdate the threshold for the entry change below which the algorithm shall stop.\n\n\n\n\n\n","category":"method"},{"location":"plans/stopping_criteria/#Manopt.set_parameter!-Tuple{StopWhenGradientChangeLess, Val{:MinGradientChange}, Any}","page":"Stopping Criteria","title":"Manopt.set_parameter!","text":"set_parameter!(c::StopWhenGradientChangeLess, :MinGradientChange, v)\n\nUpdate the minimal change below which an algorithm shall stop.\n\n\n\n\n\n","category":"method"},{"location":"plans/stopping_criteria/#Manopt.set_parameter!-Tuple{StopWhenGradientNormLess, Val{:MinGradNorm}, Float64}","page":"Stopping 
Criteria","title":"Manopt.set_parameter!","text":"set_parameter!(c::StopWhenGradientNormLess, :MinGradNorm, v::Float64)\n\nUpdate the minimal gradient norm when an algorithm shall stop\n\n\n\n\n\n","category":"method"},{"location":"plans/stopping_criteria/#Manopt.set_parameter!-Tuple{StopWhenStepsizeLess, Val{:MinStepsize}, Any}","page":"Stopping Criteria","title":"Manopt.set_parameter!","text":"set_parameter!(c::StopWhenStepsizeLess, :MinStepsize, v)\n\nUpdate the minimal step size below which the algorithm shall stop\n\n\n\n\n\n","category":"method"},{"location":"plans/stopping_criteria/#Manopt.set_parameter!-Tuple{StopWhenSubgradientNormLess, Val{:MinSubgradNorm}, Float64}","page":"Stopping Criteria","title":"Manopt.set_parameter!","text":"set_parameter!(c::StopWhenSubgradientNormLess, :MinSubgradNorm, v::Float64)\n\nUpdate the minimal subgradient norm when an algorithm shall stop\n\n\n\n\n\n","category":"method"},{"location":"tutorials/HowToRecord/#How-to-record-data-during-the-iterations","page":"Record values","title":"How to record data during the iterations","text":"","category":"section"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"Ronny Bergmann","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"The recording and debugging features make it possible to record nearly any data during the iterations. 
This tutorial illustrates how to:","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"record one value during the iterations;\nrecord multiple values during the iterations and access them afterwards;\nrecord within a subsolver;\ndefine your own RecordAction to perform individual recordings.","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"Several predefined recordings exist, for example RecordCost or RecordGradient, if the problem the solver uses provides a gradient. For fields of the state, the recording can also be done using RecordEntry. For other recordings, for example more advanced computations before storing a value, your own RecordAction can be defined.","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"We illustrate these using the gradient descent from the Get started: optimize! tutorial.","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"Here the focus is on ways to investigate the behaviour during the iterations using recording techniques.","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"Let’s first load the necessary packages.","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"using Manopt, Manifolds, Random, ManifoldDiff, LinearAlgebra\nusing ManifoldDiff: grad_distance\nRandom.seed!(42);","category":"page"},{"location":"tutorials/HowToRecord/#The-objective","page":"Record values","title":"The objective","text":"","category":"section"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"We generate data and define our cost and gradient:","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record 
values","text":"Random.seed!(42)\nm = 30\nM = Sphere(m)\nn = 800\nσ = π / 8\nx = zeros(Float64, m + 1)\nx[2] = 1.0\ndata = [exp(M, x, σ * rand(M; vector_at=x)) for i in 1:n]\nf(M, p) = sum(1 / (2 * n) * distance.(Ref(M), Ref(p), data) .^ 2)\ngrad_f(M, p) = sum(1 / n * grad_distance.(Ref(M), data, Ref(p)))","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"grad_f (generic function with 1 method)","category":"page"},{"location":"tutorials/HowToRecord/#First-examples","page":"Record values","title":"First examples","text":"","category":"section"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"For the high level interfaces of the solvers, like gradient_descent we have to set return_state to true to obtain the whole solver state and not only the resulting minimizer.","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"Then we can easily use the record= option to add recorded values. This keyword accepts RecordActions as well as several symbols as shortcuts, for example :Cost to record the cost, or if your options have a field f, :f would record that entry. 
An overview of the symbols that can be used is given here.","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"We first just record the cost after every iteration:","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"R = gradient_descent(M, f, grad_f, data[1]; record=:Cost, return_state=true)","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"# Solver state for `Manopt.jl`s Gradient Descent\nAfter 58 iterations\n\n## Parameters\n* retraction method: ExponentialRetraction()\n\n## Stepsize\nArmijoLinesearch(;\n initial_stepsize=1.0\n retraction_method=ExponentialRetraction()\n contraction_factor=0.95\n sufficient_decrease=0.1\n)\n\n## Stopping criterion\n\nStop When _one_ of the following are fulfilled:\n Max Iteration 200: not reached\n |grad f| < 1.0e-8: reached\nOverall: reached\nThis indicates convergence: Yes\n\n## Record\n(Iteration = RecordCost(),)","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"From the returned state, we see that the GradientDescentState is encapsulated (decorated) within a RecordSolverState.","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"For such a state, one can attach different recorders to some operations, currently to :Start, :Stop, and :Iteration, where :Iteration is the default when using the record= keyword with a RecordAction or a Symbol as we just did. 
We can access all values recorded during the iterations by calling get_record(R, :Iteration) or, since this is the default, even shorter","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"get_record(R)","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"58-element Vector{Float64}:\n 0.6870172325261714\n 0.6239221496686211\n 0.5900244338953802\n 0.569312079535616\n 0.551804825865545\n 0.5429045359832491\n 0.5383847696671529\n 0.5360322830268692\n 0.5348144739486789\n 0.5341773307679919\n 0.5338452512001082\n 0.5336712822308554\n 0.533580331120935\n ⋮\n 0.5334801024530476\n 0.5334801024530282\n 0.5334801024530178\n 0.5334801024530125\n 0.5334801024530096\n 0.5334801024530081\n 0.5334801024530073\n 0.5334801024530066\n 0.5334801024530061\n 0.5334801024530059\n 0.5334801024530059\n 0.5334801024530059","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"To record more than one value, you can pass an array of a mix of symbols and RecordActions, which formally introduces a RecordGroup. 
Such a group records a tuple of values in every iteration:","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"R2 = gradient_descent(M, f, grad_f, data[1]; record=[:Iteration, :Cost], return_state=true)","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"# Solver state for `Manopt.jl`s Gradient Descent\nAfter 58 iterations\n\n## Parameters\n* retraction method: ExponentialRetraction()\n\n## Stepsize\nArmijoLinesearch(;\n initial_stepsize=1.0\n retraction_method=ExponentialRetraction()\n contraction_factor=0.95\n sufficient_decrease=0.1\n)\n\n## Stopping criterion\n\nStop When _one_ of the following are fulfilled:\n Max Iteration 200: not reached\n |grad f| < 1.0e-8: reached\nOverall: reached\nThis indicates convergence: Yes\n\n## Record\n(Iteration = RecordGroup([RecordIteration(), RecordCost()]),)","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"Here, the symbol :Cost is mapped to the RecordCost action. Similarly, :Iteration records the current iteration number i. To access these, you can first extract the group of records (that is, where the :Iteration records are stored) and then access the :Cost.","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"get_record_action(R2, :Iteration)","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"RecordGroup([RecordIteration(), RecordCost()])","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"Since iteration is the default, we can also omit it here again. 
To access single recorded values, one can use","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"get_record_action(R2)[:Cost]","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"58-element Vector{Float64}:\n 0.6870172325261714\n 0.6239221496686211\n 0.5900244338953802\n 0.569312079535616\n 0.551804825865545\n 0.5429045359832491\n 0.5383847696671529\n 0.5360322830268692\n 0.5348144739486789\n 0.5341773307679919\n 0.5338452512001082\n 0.5336712822308554\n 0.533580331120935\n ⋮\n 0.5334801024530476\n 0.5334801024530282\n 0.5334801024530178\n 0.5334801024530125\n 0.5334801024530096\n 0.5334801024530081\n 0.5334801024530073\n 0.5334801024530066\n 0.5334801024530061\n 0.5334801024530059\n 0.5334801024530059\n 0.5334801024530059","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"This can also be done by using the high-level interface get_record","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"get_record(R2, :Iteration, :Cost)","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"58-element Vector{Float64}:\n 0.6870172325261714\n 0.6239221496686211\n 0.5900244338953802\n 0.569312079535616\n 0.551804825865545\n 0.5429045359832491\n 0.5383847696671529\n 0.5360322830268692\n 0.5348144739486789\n 0.5341773307679919\n 0.5338452512001082\n 0.5336712822308554\n 0.533580331120935\n ⋮\n 0.5334801024530476\n 0.5334801024530282\n 0.5334801024530178\n 0.5334801024530125\n 0.5334801024530096\n 0.5334801024530081\n 0.5334801024530073\n 0.5334801024530066\n 0.5334801024530061\n 0.5334801024530059\n 0.5334801024530059\n 0.5334801024530059","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"Note that the first symbol again refers 
to the point where we record (not to the thing we record). We can also pass a tuple as second argument to have our own order within the tuples returned. Switching the order of the recorded cost and iteration can be done using","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"get_record(R2, :Iteration, (:Iteration, :Cost))","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"58-element Vector{Tuple{Int64, Float64}}:\n (1, 0.6870172325261714)\n (2, 0.6239221496686211)\n (3, 0.5900244338953802)\n (4, 0.569312079535616)\n (5, 0.551804825865545)\n (6, 0.5429045359832491)\n (7, 0.5383847696671529)\n (8, 0.5360322830268692)\n (9, 0.5348144739486789)\n (10, 0.5341773307679919)\n (11, 0.5338452512001082)\n (12, 0.5336712822308554)\n (13, 0.533580331120935)\n ⋮\n (47, 0.5334801024530476)\n (48, 0.5334801024530282)\n (49, 0.5334801024530178)\n (50, 0.5334801024530125)\n (51, 0.5334801024530096)\n (52, 0.5334801024530081)\n (53, 0.5334801024530073)\n (54, 0.5334801024530066)\n (55, 0.5334801024530061)\n (56, 0.5334801024530059)\n (57, 0.5334801024530059)\n (58, 0.5334801024530059)","category":"page"},{"location":"tutorials/HowToRecord/#A-more-complex-example","page":"Record values","title":"A more complex example","text":"","category":"section"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"To illustrate a more complex example, let’s record:","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"the iteration number, cost and gradient field, but only every sixth iteration;\nthe iteration at which we stop.","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"We first generate the problem and the state, to also illustrate how the low-level approach works when not using the high-level interface 
gradient_descent.","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"p = DefaultManoptProblem(M, ManifoldGradientObjective(f, grad_f))\ns = GradientDescentState(\n M;\n p=copy(data[1]),\n stopping_criterion=StopAfterIteration(200) | StopWhenGradientNormLess(10.0^-9),\n)","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"# Solver state for `Manopt.jl`s Gradient Descent\n\n## Parameters\n* retraction method: ExponentialRetraction()\n\n## Stepsize\nArmijoLinesearch(;\n initial_stepsize=1.0\n retraction_method=ExponentialRetraction()\n contraction_factor=0.95\n sufficient_decrease=0.1\n)\n\n## Stopping criterion\n\nStop When _one_ of the following are fulfilled:\n Max Iteration 200: not reached\n |grad f| < 1.0e-9: not reached\nOverall: not reached\nThis indicates convergence: No","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"We now first build a RecordGroup to group the three entries we want to record per iteration. We then put this into a RecordEvery to only record this every sixth iteration","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"rI = RecordEvery(\n RecordGroup([\n RecordIteration() => :Iteration,\n RecordCost() => :Cost,\n RecordEntry(similar(data[1]), :X) => :Gradient,\n ]),\n 6,\n)","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"RecordEvery(RecordGroup([RecordIteration(), RecordCost(), RecordEntry(:X)]), 6, true)","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"where the notation as a pair with the symbol can be read as “Is accessible by”. The record= keyword with the symbol :Iteration is actually the same as we specified here for the first group entry. 
For recording the final iteration number we use","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"sI = RecordIteration()","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"RecordIteration()","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"We now combine both into the RecordSolverState decorator. It acts exactly the same as any AbstractManoptSolverState but additionally records something in every iteration. The records are stored in a dictionary of RecordActions, where :Iteration refers to the action run within the iterations (here the group that only records every sixth iteration) and :Stop refers to sI, which is executed when the solver stops.","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"Note that the record= keyword in the high-level interface gradient_descent would only fill the :Iteration entry of said dictionary, but we can also pass pairs of the form Symbol => RecordAction to that keyword to obtain the same as in","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"r = RecordSolverState(s, Dict(:Iteration => rI, :Stop => sI))","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"# Solver state for `Manopt.jl`s Gradient Descent\n\n## Parameters\n* retraction method: ExponentialRetraction()\n\n## Stepsize\nArmijoLinesearch(;\n initial_stepsize=1.0\n retraction_method=ExponentialRetraction()\n contraction_factor=0.95\n sufficient_decrease=0.1\n)\n\n## Stopping criterion\n\nStop When _one_ of the following are fulfilled:\n Max Iteration 200: not reached\n |grad f| < 1.0e-9: not reached\nOverall: not reached\nThis indicates convergence: No\n\n## Record\n(Iteration = RecordEvery(RecordGroup([RecordIteration(), RecordCost(), RecordEntry(:X)]), 6, 
true), Stop = RecordIteration())","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"We now call the solver","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"res = solve!(p, r)","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"# Solver state for `Manopt.jl`s Gradient Descent\nAfter 63 iterations\n\n## Parameters\n* retraction method: ExponentialRetraction()\n\n## Stepsize\nArmijoLinesearch(;\n initial_stepsize=1.0\n retraction_method=ExponentialRetraction()\n contraction_factor=0.95\n sufficient_decrease=0.1\n)\n\n## Stopping criterion\n\nStop When _one_ of the following are fulfilled:\n Max Iteration 200: not reached\n |grad f| < 1.0e-9: reached\nOverall: reached\nThis indicates convergence: Yes\n\n## Record\n(Iteration = RecordEvery(RecordGroup([RecordIteration(), RecordCost(), RecordEntry(:X)]), 6, true), Stop = RecordIteration())","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"And we can look at the recorded value at :Stop to see how many iterations were performed","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"get_record(res, :Stop)","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"1-element Vector{Int64}:\n 63","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"and the other values during the iterations are","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"get_record(res, :Iteration, (:Iteration, :Cost))","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"10-element Vector{Tuple{Int64, Float64}}:\n (6, 
0.5429045359832491)\n (12, 0.5336712822308554)\n (18, 0.5334840986243338)\n (24, 0.5334801877032023)\n (30, 0.5334801043129838)\n (36, 0.5334801024945817)\n (42, 0.5334801024539585)\n (48, 0.5334801024530282)\n (54, 0.5334801024530066)\n (60, 0.5334801024530057)","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"where the last tuple contains the names from the pairs we used when generating the record group. Similarly we can use :Gradient as specified before to access the recorded gradient.","category":"page"},{"location":"tutorials/HowToRecord/#Recording-from-a-Subsolver","page":"Record values","title":"Recording from a Subsolver","text":"","category":"section"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"One can also record from a subsolver. For that we need a problem that actually requires a subsolver. We take the constraint example from the How to print debug tutorial; see that tutorial for more details on the problem","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"d = 4\nM2 = Sphere(d - 1)\nv0 = project(M2, [ones(2)..., zeros(d - 2)...])\nZ = v0 * v0'\n#Cost and gradient\nf2(M, p) = -tr(transpose(p) * Z * p) / 2\ngrad_f2(M, p) = project(M, p, -transpose.(Z) * p / 2 - Z * p / 2)\n# Constraints\ng(M, p) = -p # now p ≥ 0\nmI = -Matrix{Float64}(I, d, d)\n# Vector of gradients of the constraint components\ngrad_g(M, p) = [project(M, p, mI[:, i]) for i in 1:d]\np0 = project(M2, [ones(2)..., zeros(d - 3)..., 0.1])","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"We directly start with recording the subsolver's iterations. We can specify what to record in the subsolver using the sub_kwargs keyword argument with a Symbol => value pair. 
Here we specify recording the iteration and the cost in every subsolver step.","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"Furthermore, we have to “collect” this recording after every subsolver run. This is done with the :Subsolver keyword in the main record= keyword.","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"s1 = exact_penalty_method(\n M2,\n f2,\n grad_f2,\n p0;\n g = g,\n grad_g = grad_g,\n record = [:Iteration, :Cost, :Subsolver],\n sub_kwargs = [:record => [:Iteration, :Cost]],\n return_state=true,\n);","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"Then the first entry of each record tuple contains the iteration number, the second the (main solver's) cost, and the third entry is the recording of the subsolver.","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"get_record(s1)[1]","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"(1, -0.4733019623455375, [(1, -0.4288382393589549), (2, -0.43669534259556914), (3, -0.4374036673499917), (4, -0.43744087180862923)])","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"When adding a number to record only every so many iterations, the :Subsolver keyword of course still only “copies over” the subsolver recordings when the main record is active. But one can avoid the allocations on the other runs. 
This is done by specifying the subsolver record as :WhenActive","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"s2 = exact_penalty_method(\n M2,\n f2,\n grad_f2,\n p0;\n g = g,\n grad_g = grad_g,\n record = [:Iteration, :Cost, :Subsolver, 25],\n sub_kwargs = [:record => [:Iteration, :Cost, :WhenActive]],\n return_state=true,\n);","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"Then","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"get_record(s2)","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"4-element Vector{Tuple{Int64, Float64, Vector{Tuple{Int64, Float64}}}}:\n (25, -0.4994494108530985, [(1, -0.4991469152295235)])\n (50, -0.49999564261147317, [(1, -0.49999366842932896)])\n (75, -0.49999997420136083, [(1, -0.4999999614701454)])\n (100, -0.4999999998337046, [(1, -0.49999999981081666)])","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"Finally, instead of recording iterations, we can also record the stopping criterion and final cost by adding that to the :Stop entry of the subsolver's record. 
Then we can specify, as usual in a tuple, that the :Subsolver should record :Stop (by default it takes over :Iteration)","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"s3 = exact_penalty_method(\n M2,\n f2,\n grad_f2,\n p0;\n g = g,\n grad_g = grad_g,\n record = [:Iteration, :Cost, (:Subsolver, :Stop), 25],\n sub_kwargs = [:record => [:Stop => [:Stop, :Cost]]],\n return_state=true,\n);","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"Then the following displays also the reasons why each of the recorded subsolvers stopped and the corresponding cost","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"get_record(s3)","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"4-element Vector{Tuple{Int64, Float64, Vector{Tuple{String, Float64}}}}:\n (25, -0.4994494108530985, [(\"The algorithm reached approximately critical point after 1 iterations; the gradient norm (0.00031307624887101047) is less than 0.001.\\n\", -0.4991469152295235)])\n (50, -0.49999564261147317, [(\"The algorithm reached approximately critical point after 1 iterations; the gradient norm (0.0009767910400237622) is less than 0.001.\\n\", -0.49999366842932896)])\n (75, -0.49999997420136083, [(\"The algorithm reached approximately critical point after 1 iterations; the gradient norm (0.0002239629119661262) is less than 0.001.\\n\", -0.4999999614701454)])\n (100, -0.4999999998337046, [(\"The algorithm reached approximately critical point after 1 iterations; the gradient norm (3.8129640908105967e-6) is less than 0.001.\\n\", -0.49999999981081666)])","category":"page"},{"location":"tutorials/HowToRecord/#Writing-an-own-[RecordAction](https://manoptjl.org/stable/plans/record/#Manopt.RecordAction)s","page":"Record values","title":"Writing an own 
RecordActions","text":"","category":"section"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"Let’s look at a case where we want to count the number of function evaluations, again just as an illustration, since for the gradient there is only one evaluation per iteration. We first define a cost that counts its own calls.","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"mutable struct MyCost{T}\n data::T\n count::Int\nend\nMyCost(data::T) where {T} = MyCost{T}(data, 0)\nfunction (c::MyCost)(M, x)\n c.count += 1\n return sum(1 / (2 * length(c.data)) * distance.(Ref(M), Ref(x), c.data) .^ 2)\nend","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"and we define our own, new RecordAction, which is a functor, that is, a struct that is also callable as a function. The function we have to implement has a signature similar to a single solver step, since it might get called every iteration:","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"mutable struct RecordCount <: RecordAction\n recorded_values::Vector{Int}\n RecordCount() = new(Vector{Int}())\nend\nfunction (r::RecordCount)(p::AbstractManoptProblem, ::AbstractManoptSolverState, i)\n if i > 0\n push!(r.recorded_values, Manopt.get_cost_function(get_objective(p)).count)\n elseif i < 0 # reset if negative\n r.recorded_values = Vector{Int}()\n end\nend","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"Now we can initialize the new cost and call the gradient descent. 
Note that this also illustrates the last use case, since you can pass symbol-action pairs into the record= array.","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"f3 = MyCost(data)","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"Now for the plain gradient descent, we have to modify the step (to a constant stepsize) and remove the default debug verification of whether the cost increases (setting debug to []). We also only look at the first 20 iterations to keep this example small in recorded values. We call","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"R3 = gradient_descent(\n M,\n f3,\n grad_f,\n data[1];\n record=[:Iteration => [\n :Iteration,\n RecordCount() => :Count,\n :Cost],\n ],\n stepsize = ConstantLength(1.0),\n stopping_criterion=StopAfterIteration(20),\n debug=[],\n return_state=true,\n)","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"# Solver state for `Manopt.jl`s Gradient Descent\nAfter 20 iterations\n\n## Parameters\n* retraction method: ExponentialRetraction()\n\n## Stepsize\nConstantLength(1.0; type=:relative)\n\n## Stopping criterion\n\nMax Iteration 20: reached\nThis indicates convergence: No\n\n## Record\n(Iteration = RecordGroup([RecordIteration(), RecordCount([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]), RecordCost()]),)","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"For :Cost we already learned how to access the recorded values; the => :Count introduces an action whose values are accessible via the :Count symbol. 
We can again access the whole set of records","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"get_record(R3)","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"20-element Vector{Tuple{Int64, Int64, Float64}}:\n (1, 1, 0.5823814423113639)\n (2, 2, 0.540804980234004)\n (3, 3, 0.5345550944722898)\n (4, 4, 0.5336375289938887)\n (5, 5, 0.5335031591890169)\n (6, 6, 0.5334834802310252)\n (7, 7, 0.5334805973984544)\n (8, 8, 0.5334801749902928)\n (9, 9, 0.5334801130855078)\n (10, 10, 0.5334801040117543)\n (11, 11, 0.5334801026815558)\n (12, 12, 0.5334801024865219)\n (13, 13, 0.5334801024579218)\n (14, 14, 0.5334801024537273)\n (15, 15, 0.5334801024531121)\n (16, 16, 0.5334801024530218)\n (17, 17, 0.5334801024530087)\n (18, 18, 0.5334801024530067)\n (19, 19, 0.5334801024530065)\n (20, 20, 0.5334801024530064)","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"this is equivalent to calling R3[:Iteration]. 
Note that since we introduced :Count we can also access a single recorded value using","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"R3[:Iteration, :Count]","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"20-element Vector{Int64}:\n 1\n 2\n 3\n 4\n 5\n 6\n 7\n 8\n 9\n 10\n 11\n 12\n 13\n 14\n 15\n 16\n 17\n 18\n 19\n 20","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"and we see that the cost function is called once per iteration.","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"If we use this counting cost and run the default gradient descent with Armijo line search, we can infer how many Armijo line search backtracks are performed:","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"f4 = MyCost(data)","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"MyCost{Vector{Vector{Float64}}}([[-0.054658825167894595, -0.5592077846510423, -0.04738273828111257, -0.04682080720921302, 0.12279468849667038, 0.07171438895366239, -0.12930045409417057, -0.22102081626380404, -0.31805333254577767, 0.0065859500152017645 …], …], 0)","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"To not get too many entries 
we record only the counts this time","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"R4 = gradient_descent(\n M,\n f4,\n grad_f,\n data[1];\n record=[RecordCount(),],\n return_state=true,\n)","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"# Solver state for `Manopt.jl`s Gradient Descent\nAfter 58 iterations\n\n## Parameters\n* retraction method: ExponentialRetraction()\n\n## Stepsize\nArmijoLinesearch(;\n initial_stepsize=1.0\n retraction_method=ExponentialRetraction()\n contraction_factor=0.95\n sufficient_decrease=0.1\n)\n\n## Stopping criterion\n\nStop When _one_ of the following are fulfilled:\n Max Iteration 200: not reached\n |grad f| < 1.0e-8: reached\nOverall: reached\nThis indicates convergence: Yes\n\n## Record\n(Iteration = RecordCount([25, 29, 33, 37, 40, 44, 48, 52, 56, 60, 64, 68, 72, 76, 80, 84, 88, 92, 96, 100, 104, 108, 112, 116, 120, 124, 128, 132, 136, 140, 144, 148, 152, 156, 160, 164, 168, 172, 176, 180, 184, 188, 192, 196, 200, 204, 208, 212, 216, 220, 224, 229, 232, 237, 241, 245, 247, 249]),)","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"get_record(R4)","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"58-element Vector{Int64}:\n 25\n 29\n 33\n 37\n 40\n 44\n 48\n 52\n 56\n 60\n 64\n 68\n 72\n ⋮\n 208\n 212\n 216\n 220\n 224\n 229\n 232\n 237\n 241\n 245\n 247\n 249","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"We can see that the number of cost function calls varies, depending on how many line search backtrack steps were required to obtain a good stepsize.","category":"page"},{"location":"tutorials/HowToRecord/#Technical-details","page":"Record values","title":"Technical 
details","text":"","category":"section"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"This tutorial is cached. It was last run on the following package versions.","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"using Pkg\nPkg.status()","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"Status `~/work/Manopt.jl/Manopt.jl/tutorials/Project.toml`\n [6e4b80f9] BenchmarkTools v1.5.0\n⌅ [5ae59095] Colors v0.12.11\n [31c24e10] Distributions v0.25.113\n [26cc04aa] FiniteDifferences v0.12.32\n [7073ff75] IJulia v1.26.0\n [8ac3fa9e] LRUCache v1.6.1\n [af67fdf4] ManifoldDiff v0.3.13\n [1cead3c2] Manifolds v0.10.7\n [3362f125] ManifoldsBase v0.15.22\n [0fc0a36d] Manopt v0.5.3 `~/work/Manopt.jl/Manopt.jl`\n [91a5bcdd] Plots v1.40.9\n [731186ca] RecursiveArrayTools v3.27.4\nInfo Packages marked with ⌅ have new versions available but compatibility constraints restrict them from upgrading. 
To see why use `status --outdated`","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"using Dates\nnow()","category":"page"},{"location":"tutorials/HowToRecord/","page":"Record values","title":"Record values","text":"2024-11-21T20:38:39.559","category":"page"},{"location":"solvers/ChambollePock/#The-Riemannian-Chambolle-Pock-algorithm","page":"Chambolle-Pock","title":"The Riemannian Chambolle-Pock algorithm","text":"","category":"section"},{"location":"solvers/ChambollePock/","page":"Chambolle-Pock","title":"Chambolle-Pock","text":"The Riemannian Chambolle—Pock algorithm is a generalization of the Chambolle—Pock algorithm by Chambolle and Pock [CP11]. It is also known as the primal-dual hybrid gradient (PDHG) or primal-dual proximal splitting (PDPS) algorithm.","category":"page"},{"location":"solvers/ChambollePock/","page":"Chambolle-Pock","title":"Chambolle-Pock","text":"In order to minimize a cost function consisting of","category":"page"},{"location":"solvers/ChambollePock/","page":"Chambolle-Pock","title":"Chambolle-Pock","text":"F(p) + G(Λ(p))","category":"page"},{"location":"solvers/ChambollePock/","page":"Chambolle-Pock","title":"Chambolle-Pock","text":"over pmathcal M","category":"page"},{"location":"solvers/ChambollePock/","page":"Chambolle-Pock","title":"Chambolle-Pock","text":"where Fmathcal M overlineℝ, Gmathcal N overlineℝ, and Λmathcal M mathcal N. If the manifolds mathcal M or mathcal N are not Hadamard, it has to be considered locally only, that is on geodesically convex sets mathcal C subset mathcal M and mathcal D subsetmathcal N such that Λ(mathcal C) subset mathcal D.","category":"page"},{"location":"solvers/ChambollePock/","page":"Chambolle-Pock","title":"Chambolle-Pock","text":"The algorithm is available in four variants: exact versus linearized (see variant) as well as with primal versus dual relaxation (see relax). 
For more details, see Bergmann, Herzog, Silva Louzeiro, Tenbrinck and Vidal-Núñez [BHS+21]. The following describes the case of the exact, primal relaxed Riemannian Chambolle—Pock algorithm.","category":"page"},{"location":"solvers/ChambollePock/","page":"Chambolle-Pock","title":"Chambolle-Pock","text":"Given base points mmathcal C, n=Λ(m)mathcal D, initial primal and dual values p^(0) mathcal C, ξ_n^(0) T_n^*mathcal N, and primal and dual step sizes sigma_0, tau_0, relaxation theta_0, as well as acceleration gamma.","category":"page"},{"location":"solvers/ChambollePock/","page":"Chambolle-Pock","title":"Chambolle-Pock","text":"As an initialization, perform bar p^(0) gets p^(0).","category":"page"},{"location":"solvers/ChambollePock/","page":"Chambolle-Pock","title":"Chambolle-Pock","text":"The algorithm performs the steps k=1,2,… until a StoppingCriterion is fulfilled:","category":"page"},{"location":"solvers/ChambollePock/","page":"Chambolle-Pock","title":"Chambolle-Pock","text":"ξ^(k+1)_n = operatornameprox_tau_k G_n^*Bigl(ξ_n^(k) + tau_k bigl(log_n Λ (bar p^(k))bigr)^flatBigr)\np^(k+1) = operatornameprox_sigma_k Fbiggl(exp_p^(k)Bigl( operatornamePT_p^(k)gets mbigl(-sigma_k DΛ(m)^*ξ_n^(k+1)bigr)^sharpBigr)biggr)\nUpdate\ntheta_k = (1+2gammasigma_k)^-frac12\nsigma_k+1 = sigma_ktheta_k\ntau_k+1 = fractau_ktheta_k\nbar p^(k+1) = exp_p^(k+1)bigl(-theta_k log_p^(k+1) p^(k)bigr)","category":"page"},{"location":"solvers/ChambollePock/","page":"Chambolle-Pock","title":"Chambolle-Pock","text":"Furthermore, you can replace the exponential map, the logarithmic map, and the parallel transport with a retraction, an inverse retraction, and a vector transport, respectively.","category":"page"},{"location":"solvers/ChambollePock/","page":"Chambolle-Pock","title":"Chambolle-Pock","text":"Finally, you can also update the base points m and n during the iterations. This introduces a few additional vector transports. The same holds for the case Λ(m^(k))neq n^(k) at some point. 
All these cases are covered in the algorithm.","category":"page"},{"location":"solvers/ChambollePock/","page":"Chambolle-Pock","title":"Chambolle-Pock","text":"CurrentModule = Manopt","category":"page"},{"location":"solvers/ChambollePock/","page":"Chambolle-Pock","title":"Chambolle-Pock","text":"ChambollePock\nChambollePock!","category":"page"},{"location":"solvers/ChambollePock/#Manopt.ChambollePock","page":"Chambolle-Pock","title":"Manopt.ChambollePock","text":"ChambollePock(M, N, f, p, X, m, n, prox_G, prox_G_dual, adjoint_linear_operator; kwargs...)\nChambollePock!(M, N, f, p, X, m, n, prox_G, prox_G_dual, adjoint_linear_operator; kwargs...)\n\nPerform the Riemannian Chambolle—Pock algorithm.\n\nGiven a cost function mathcal Emathcal M ℝ of the form\n\nmathcal f(p) = F(p) + G( Λ(p) )\n\nwhere Fmathcal M ℝ, Gmathcal N ℝ, and Λmathcal M mathcal N.\n\nThis can be done inplace of p.\n\nInput parameters\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nN::AbstractManifold: a Riemannian manifold mathcal M\np: a point on the manifold mathcal M\nX: a tangent vector at the point p on the manifold mathcal M\nm: a point on the manifold mathcal M\nn: a point on the manifold mathcal N\nadjoint_linearized_operator: the adjoint DΛ^* of the linearized operator DΛ T_mmathcal M T_Λ(m)mathcal N)\nprox_F, prox_G_Dual: the proximal maps of F and G^ast_n\n\nnote that depending on the AbstractEvaluationType evaluation the last three parameters as well as the forward operator Λ and the linearized_forward_operator can be given as allocating functions (Manifolds, parameters) -> result or as mutating functions (Manifold, result, parameters) -> result` to spare allocations.\n\nBy default, this performs the exact Riemannian Chambolle Pock algorithm, see the optional parameter DΛ for their linearized variant.\n\nFor more details on the algorithm, see [BHS+21].\n\nKeyword Arguments\n\nacceleration=0.05: acceleration parameter\ndual_stepsize=1/sqrt(8): proximal parameter of the primal 
prox\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\ninverse_retraction_method_dual=default_inverse_retraction_method(N, typeof(n)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nΛ=missing: the (forward) operator Λ() (required for the :exact variant)\nlinearized_forward_operator=missing: its linearization DΛ() (required for the :linearized variant)\nprimal_stepsize=1/sqrt(8): proximal parameter of the dual prox\nrelaxation=1.: the relaxation parameter γ\nrelax=:primal: whether to relax the primal or dual\nvariant=:exact if Λ is missing, otherwise :linearized: variant to use. Note that this changes the arguments the forward_operator is called with.\nstopping_criterion=StopAfterIteration`(100): a functor indicating that the stopping criterion is fulfilled\nupdate_primal_base=missing: function to update m (identity by default/missing)\nupdate_dual_base=missing: function to update n (identity by default/missing)\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\nvector_transport_method_dual=default_vector_transport_method(N, typeof(n)): a vector transport mathcal T_ to use, see the section on vector transports\n\nOutput\n\nThe obtained approximate minimizer p^*. 
To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ChambollePock/#Manopt.ChambollePock!","page":"Chambolle-Pock","title":"Manopt.ChambollePock!","text":"ChambollePock(M, N, f, p, X, m, n, prox_G, prox_G_dual, adjoint_linear_operator; kwargs...)\nChambollePock!(M, N, f, p, X, m, n, prox_G, prox_G_dual, adjoint_linear_operator; kwargs...)\n\nPerform the Riemannian Chambolle—Pock algorithm.\n\nGiven a cost function mathcal Emathcal M ℝ of the form\n\nmathcal f(p) = F(p) + G( Λ(p) )\n\nwhere Fmathcal M ℝ, Gmathcal N ℝ, and Λmathcal M mathcal N.\n\nThis can be done inplace of p.\n\nInput parameters\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nN::AbstractManifold: a Riemannian manifold mathcal M\np: a point on the manifold mathcal M\nX: a tangent vector at the point p on the manifold mathcal M\nm: a point on the manifold mathcal M\nn: a point on the manifold mathcal N\nadjoint_linearized_operator: the adjoint DΛ^* of the linearized operator DΛ T_mmathcal M T_Λ(m)mathcal N)\nprox_F, prox_G_Dual: the proximal maps of F and G^ast_n\n\nnote that depending on the AbstractEvaluationType evaluation the last three parameters as well as the forward operator Λ and the linearized_forward_operator can be given as allocating functions (Manifolds, parameters) -> result or as mutating functions (Manifold, result, parameters) -> result` to spare allocations.\n\nBy default, this performs the exact Riemannian Chambolle Pock algorithm, see the optional parameter DΛ for their linearized variant.\n\nFor more details on the algorithm, see [BHS+21].\n\nKeyword Arguments\n\nacceleration=0.05: acceleration parameter\ndual_stepsize=1/sqrt(8): proximal parameter of the primal prox\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or 
whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\ninverse_retraction_method_dual=default_inverse_retraction_method(N, typeof(n)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nΛ=missing: the (forward) operator Λ() (required for the :exact variant)\nlinearized_forward_operator=missing: its linearization DΛ() (required for the :linearized variant)\nprimal_stepsize=1/sqrt(8): proximal parameter of the dual prox\nrelaxation=1.: the relaxation parameter γ\nrelax=:primal: whether to relax the primal or dual\nvariant=:exact if Λ is missing, otherwise :linearized: variant to use. Note that this changes the arguments the forward_operator is called with.\nstopping_criterion=StopAfterIteration`(100): a functor indicating that the stopping criterion is fulfilled\nupdate_primal_base=missing: function to update m (identity by default/missing)\nupdate_dual_base=missing: function to update n (identity by default/missing)\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\nvector_transport_method_dual=default_vector_transport_method(N, typeof(n)): a vector transport mathcal T_ to use, see the section on vector transports\n\nOutput\n\nThe obtained approximate minimizer p^*. 
To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ChambollePock/#State","page":"Chambolle-Pock","title":"State","text":"","category":"section"},{"location":"solvers/ChambollePock/","page":"Chambolle-Pock","title":"Chambolle-Pock","text":"ChambollePockState","category":"page"},{"location":"solvers/ChambollePock/#Manopt.ChambollePockState","page":"Chambolle-Pock","title":"Manopt.ChambollePockState","text":"ChambollePockState <: AbstractPrimalDualSolverState\n\nStores all options and variables within a linearized or exact Chambolle—Pock algorithm.\n\nFields\n\nacceleration::R: acceleration factor\ndual_stepsize::R: proximal parameter of the dual prox\ninverse_retraction_method::AbstractInverseRetractionMethod: an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\ninverse_retraction_method_dual::AbstractInverseRetractionMethod: an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nm::P: base point on mathcal M\nn::Q: base point on mathcal N\np::P: an initial point p^(0) mathcal M\npbar::P: the relaxed iterate used in the next dual update step (when using :primal relaxation)\nprimal_stepsize::R: proximal parameter of the primal prox\nX::T: an initial tangent vector X^(0) T_p^(0)mathcal M\nXbar::T: the relaxed iterate used in the next primal update step (when using :dual relaxation)\nrelaxation::R: relaxation in the primal relaxation step (to compute pbar)\nrelax::Symbol: which variable to relax (:primal or :dual)\nretraction_method::AbstractRetractionMethod: a retraction operatornameretr to use, see the section on retractions\nstop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled\nvariant: whether to perform an :exact or :linearized Chambolle-Pock\nupdate_primal_base: function (pr, st, k) -> m to update the primal 
base\nupdate_dual_base: function (pr, st, k) -> n to update the dual base\nvector_transport_method::AbstractVectorTransportMethodP: a vector transport mathcal T_ to use, see the section on vector transports\nvector_transport_method_dual::AbstractVectorTransportMethodP: a vector transport mathcal T_ to use, see the section on vector transports\n\nHere, P is a point type on mathcal M, T its tangent vector type, Q a point type on mathcal N, and R<:Real is a real number type\n\nwhere for the last two the functions a AbstractManoptProblemp, AbstractManoptSolverStateo and the current iterate i are the arguments. If you activate these to be different from the default identity, you have to provide p.Λ for the algorithm to work (which might be missing in the linearized case).\n\nConstructor\n\nChambollePockState(M::AbstractManifold, N::AbstractManifold;\n kwargs...\n) where {P, Q, T, R <: Real}\n\nKeyword arguments\n\nn=[rand](@extref Base.rand-Tuple{AbstractManifold})(N)`\np=rand(M)\nm=rand(M)\nX=zero_vector(M, p)\nacceleration=0.0\ndual_stepsize=1/sqrt(8)\nprimal_stepsize=1/sqrt(8)\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\ninverse_retraction_method_dual=default_inverse_retraction_method(N, typeof(n)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nrelaxation=1.0\nrelax=:primal: relax the primal variable by default\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstopping_criterion=StopAfterIteration(300): a functor indicating that the stopping criterion is fulfilled\nvariant=:exact: run the exact Chambolle Pock by default\nupdate_primal_base=missing\nupdate_dual_base=missing\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on 
vector transports\nvector_transport_method_dual=default_vector_transport_method(N, typeof(n)): a vector transport mathcal T_ to use, see the section on vector transports\n\nif Manifolds.jl is loaded, N is also a keyword argument and set to TangentBundle(M) by default.\n\n\n\n\n\n","category":"type"},{"location":"solvers/ChambollePock/#Useful-terms","page":"Chambolle-Pock","title":"Useful terms","text":"","category":"section"},{"location":"solvers/ChambollePock/","page":"Chambolle-Pock","title":"Chambolle-Pock","text":"primal_residual\ndual_residual","category":"page"},{"location":"solvers/ChambollePock/#Manopt.primal_residual","page":"Chambolle-Pock","title":"Manopt.primal_residual","text":"primal_residual(p, o, x_old, X_old, n_old)\n\nCompute the primal residual at current iterate k given the necessary values x_k-1 X_k-1, and n_k-1 from the previous iterate.\n\nBigllVert\nfrac1σoperatornameretr^-1_x_kx_k-1 -\nV_x_kgets m_kbigl(DΛ^*(m_k)biglV_n_kgets n_k-1X_k-1 - X_k bigr\nBigrrVert\n\nwhere V_gets is the vector transport used in the ChambollePockState\n\n\n\n\n\n","category":"function"},{"location":"solvers/ChambollePock/#Manopt.dual_residual","page":"Chambolle-Pock","title":"Manopt.dual_residual","text":"dual_residual(p, o, x_old, X_old, n_old)\n\nCompute the dual residual at current iterate k given the necessary values x_k-1 X_k-1, and n_k-1 from the previous iterate. 
The formula is slightly different depending on the o.variant used:\n\nFor the :linearized it reads\n\nBigllVert\nfrac1τbigl(\nV_n_kgets n_k-1(X_k-1)\n- X_k\nbigr)\n-\nDΛ(m_k)bigl\nV_m_kgets x_koperatornameretr^-1_x_kx_k-1\nbigr\nBigrrVert\n\nand for the :exact variant\n\nBigllVert\nfrac1τ V_n_kgets n_k-1(X_k-1)\n-\noperatornameretr^-1_n_kbigl(\nΛ(operatornameretr_m_k(V_m_kgets x_koperatornameretr^-1_x_kx_k-1))\nbigr)\nBigrrVert\n\nwhere in both cases V_gets is the vector transport used in the ChambollePockState.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ChambollePock/#Debug","page":"Chambolle-Pock","title":"Debug","text":"","category":"section"},{"location":"solvers/ChambollePock/","page":"Chambolle-Pock","title":"Chambolle-Pock","text":"DebugDualBaseIterate\nDebugDualBaseChange\nDebugPrimalBaseIterate\nDebugPrimalBaseChange\nDebugDualChange\nDebugDualIterate\nDebugDualResidual\nDebugPrimalChange\nDebugPrimalIterate\nDebugPrimalResidual\nDebugPrimalDualResidual","category":"page"},{"location":"solvers/ChambollePock/#Manopt.DebugDualBaseIterate","page":"Chambolle-Pock","title":"Manopt.DebugDualBaseIterate","text":"DebugDualBaseIterate(io::IO=stdout)\n\nPrint the dual base variable by using DebugEntry, see their constructors for detail. This method is further set display o.n.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ChambollePock/#Manopt.DebugDualBaseChange","page":"Chambolle-Pock","title":"Manopt.DebugDualBaseChange","text":"DebugDualChange(; storage=StoreStateAction([:n]), io::IO=stdout)\n\nPrint the change of the dual base variable by using DebugEntryChange, see their constructors for detail, on o.n.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ChambollePock/#Manopt.DebugPrimalBaseIterate","page":"Chambolle-Pock","title":"Manopt.DebugPrimalBaseIterate","text":"DebugPrimalBaseIterate()\n\nPrint the primal base variable by using DebugEntry, see their constructors for detail. 
This method is further set display o.m.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ChambollePock/#Manopt.DebugPrimalBaseChange","page":"Chambolle-Pock","title":"Manopt.DebugPrimalBaseChange","text":"DebugPrimalBaseChange(a::StoreStateAction=StoreStateAction([:m]),io::IO=stdout)\n\nPrint the change of the primal base variable by using DebugEntryChange, see their constructors for detail, on o.n.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ChambollePock/#Manopt.DebugDualChange","page":"Chambolle-Pock","title":"Manopt.DebugDualChange","text":"DebugDualChange(opts...)\n\nPrint the change of the dual variable, similar to DebugChange, see their constructors for detail, but with a different calculation of the change, since the dual variable lives in (possibly different) tangent spaces.\n\n\n\n\n\n","category":"type"},{"location":"solvers/ChambollePock/#Manopt.DebugDualIterate","page":"Chambolle-Pock","title":"Manopt.DebugDualIterate","text":"DebugDualIterate(e)\n\nPrint the dual variable by using DebugEntry, see their constructors for detail. This method is further set display o.X.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ChambollePock/#Manopt.DebugDualResidual","page":"Chambolle-Pock","title":"Manopt.DebugDualResidual","text":"DebugDualResidual <: DebugAction\n\nA Debug action to print the dual residual. 
The constructor accepts a printing function and some (shared) storage, which should at least record :Iterate, :X and :n.\n\nConstructor\n\nDebugDualResidual(; kwargs...)\n\nKeyword arguments\n\nio=stdout: stream to perform the debug to\nformat=\"$prefix%s\": format to print the dual residual, using the\nprefix=\"Dual Residual: \": short form to just set the prefix\nstorage (a new StoreStateAction) to store values for the debug.\n\n\n\n\n\n","category":"type"},{"location":"solvers/ChambollePock/#Manopt.DebugPrimalChange","page":"Chambolle-Pock","title":"Manopt.DebugPrimalChange","text":"DebugPrimalChange(opts...)\n\nPrint the change of the primal variable by using DebugChange, see their constructors for detail.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ChambollePock/#Manopt.DebugPrimalIterate","page":"Chambolle-Pock","title":"Manopt.DebugPrimalIterate","text":"DebugPrimalIterate(opts...;kwargs...)\n\nPrint the primal variable by using DebugIterate, see their constructors for detail.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ChambollePock/#Manopt.DebugPrimalResidual","page":"Chambolle-Pock","title":"Manopt.DebugPrimalResidual","text":"DebugPrimalResidual <: DebugAction\n\nA Debug action to print the primal residual. The constructor accepts a printing function and some (shared) storage, which should at least record :Iterate, :X and :n.\n\nConstructor\n\nDebugPrimalResidual(; kwargs...)\n\nKeyword arguments\n\nio=stdout: stream to perform the debug to\nformat=\"$prefix%s\": format to print the primal residual, using the\nprefix=\"Primal Residual: \": short form to just set the prefix\nstorage (a new StoreStateAction) to store values for the debug.\n\n\n\n\n\n","category":"type"},{"location":"solvers/ChambollePock/#Manopt.DebugPrimalDualResidual","page":"Chambolle-Pock","title":"Manopt.DebugPrimalDualResidual","text":"DebugPrimalDualResidual <: DebugAction\n\nA Debug action to print the primal dual residual. 
The constructor accepts a printing function and some (shared) storage, which should at least record :Iterate, :X and :n.\n\nConstructor\n\nDebugPrimalDualResidual()\n\nKeyword arguments\n\nio=stdout: stream to perform the debug to\nformat=\"$prefix%s\": format to print the primal dual residual, using the\nprefix=\"PD Residual: \": short form to just set the prefix\nstorage (a new StoreStateAction) to store values for the debug.\n\n\n\n\n\n","category":"type"},{"location":"solvers/ChambollePock/#Record","page":"Chambolle-Pock","title":"Record","text":"","category":"section"},{"location":"solvers/ChambollePock/","page":"Chambolle-Pock","title":"Chambolle-Pock","text":"RecordDualBaseIterate\nRecordDualBaseChange\nRecordDualChange\nRecordDualIterate\nRecordPrimalBaseIterate\nRecordPrimalBaseChange\nRecordPrimalChange\nRecordPrimalIterate","category":"page"},{"location":"solvers/ChambollePock/#Manopt.RecordDualBaseIterate","page":"Chambolle-Pock","title":"Manopt.RecordDualBaseIterate","text":"RecordDualBaseIterate(n)\n\nCreate a RecordAction that records the dual base point, a RecordEntry of o.n.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ChambollePock/#Manopt.RecordDualBaseChange","page":"Chambolle-Pock","title":"Manopt.RecordDualBaseChange","text":"RecordDualBaseChange(e)\n\nCreate a RecordAction that records the dual base point change, a RecordEntryChange of o.n with distance to the last value to store a value.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ChambollePock/#Manopt.RecordDualChange","page":"Chambolle-Pock","title":"Manopt.RecordDualChange","text":"RecordDualChange()\n\nCreate the action either with a given (shared) storage, which can be set to the values Tuple, if that is provided.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ChambollePock/#Manopt.RecordDualIterate","page":"Chambolle-Pock","title":"Manopt.RecordDualIterate","text":"RecordDualIterate(X)\n\nCreate a RecordAction that records the 
dual iterate, a RecordEntry of o.X.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ChambollePock/#Manopt.RecordPrimalBaseIterate","page":"Chambolle-Pock","title":"Manopt.RecordPrimalBaseIterate","text":"RecordPrimalBaseIterate(x)\n\nCreate a RecordAction that records the primal base point, a RecordEntry of o.m.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ChambollePock/#Manopt.RecordPrimalBaseChange","page":"Chambolle-Pock","title":"Manopt.RecordPrimalBaseChange","text":"RecordPrimalBaseChange()\n\nCreate a RecordAction that records the primal base point change, a RecordEntryChange of o.m with distance to the last value to store a value.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ChambollePock/#Manopt.RecordPrimalChange","page":"Chambolle-Pock","title":"Manopt.RecordPrimalChange","text":"RecordPrimalChange(a)\n\nCreate a RecordAction that records the primal value change, RecordChange, to record the change of o.x.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ChambollePock/#Manopt.RecordPrimalIterate","page":"Chambolle-Pock","title":"Manopt.RecordPrimalIterate","text":"RecordPrimalIterate(x)\n\nCreate a RecordAction that records the primal iterate, a RecordIterate of o.x.\n\n\n\n\n\n","category":"function"},{"location":"solvers/ChambollePock/#Internals","page":"Chambolle-Pock","title":"Internals","text":"","category":"section"},{"location":"solvers/ChambollePock/","page":"Chambolle-Pock","title":"Chambolle-Pock","text":"Manopt.update_prox_parameters!","category":"page"},{"location":"solvers/ChambollePock/#Manopt.update_prox_parameters!","page":"Chambolle-Pock","title":"Manopt.update_prox_parameters!","text":"update_prox_parameters!(o)\n\nUpdate the prox parameters as described in Algorithm 2 of [CP11],\n\nθ_n = frac1sqrt1+2γτ_n\nτ_n+1 = θ_nτ_n\nσ_n+1 = fracσ_nθ_n\n\n\n\n\n\n","category":"function"},{"location":"solvers/ChambollePock/#sec-cp-technical-details","page":"Chambolle-Pock","title":"Technical 
details","text":"","category":"section"},{"location":"solvers/ChambollePock/","page":"Chambolle-Pock","title":"Chambolle-Pock","text":"The ChambollePock solver requires the following functions of a manifold to be available for both the manifolds mathcal M and mathcal N:","category":"page"},{"location":"solvers/ChambollePock/","page":"Chambolle-Pock","title":"Chambolle-Pock","text":"A retract!(M, q, p, X); it is recommended to set the default_retraction_method to a favourite retraction. If this default is set, a retraction_method= or retraction_method_dual= (for mathcal N) does not have to be specified.\nAn inverse_retract!(M, X, p, q); it is recommended to set the default_inverse_retraction_method to a favourite inverse retraction. If this default is set, an inverse_retraction_method= or inverse_retraction_method_dual= (for mathcal N) does not have to be specified.\nA vector_transport_to!(M, Y, p, X, q); it is recommended to set the default_vector_transport_method to a favourite vector transport. If this default is set, a vector_transport_method= or vector_transport_method_dual= (for mathcal N) does not have to be specified.\nA copyto!(M, q, p) and copy(M, p) for points.","category":"page"},{"location":"solvers/ChambollePock/#Literature","page":"Chambolle-Pock","title":"Literature","text":"","category":"section"},{"location":"solvers/ChambollePock/","page":"Chambolle-Pock","title":"Chambolle-Pock","text":"R. Bergmann, R. Herzog, M. Silva Louzeiro, D. Tenbrinck and J. Vidal-Núñez. Fenchel duality theory and a primal-dual algorithm on Riemannian manifolds. Foundations of Computational Mathematics 21, 1465–1504 (2021), arXiv:1908.02022.\n\n\n\nA. Chambolle and T. Pock. A first-order primal-dual algorithm for convex problems with applications to imaging. 
Journal of Mathematical Imaging and Vision 40, 120–145 (2011).\n\n\n\n","category":"page"},{"location":"solvers/conjugate_residual/#Conjugate-residual-solver-in-a-Tangent-space","page":"Conjugate Residual","title":"Conjugate residual solver in a Tangent space","text":"","category":"section"},{"location":"solvers/conjugate_residual/","page":"Conjugate Residual","title":"Conjugate Residual","text":"CurrentModule = Manopt","category":"page"},{"location":"solvers/conjugate_residual/","page":"Conjugate Residual","title":"Conjugate Residual","text":"conjugate_residual\nconjugate_residual!","category":"page"},{"location":"solvers/conjugate_residual/#Manopt.conjugate_residual","page":"Conjugate Residual","title":"Manopt.conjugate_residual","text":"conjugate_residual(TpM::TangentSpace, A, b, X=zero_vector(TpM))\nconjugate_residual(TpM::TangentSpace, slso::SymmetricLinearSystemObjective, X=zero_vector(TpM))\nconjugate_residual!(TpM::TangentSpace, A, b, X)\nconjugate_residual!(TpM::TangentSpace, slso::SymmetricLinearSystemObjective, X)\n\nCompute the solution of mathcal A(p)X + b(p) = 0_p, where\n\nmathcal A is a linear, symmetric operator on T_pmathcal M\nb is a vector field on the manifold\nX T_pmathcal M is a tangent vector\n0_p is the zero vector in T_pmathcal M.\n\nThis implementation follows Algorithm 3 in [LY24] and is initialised with X^(0) as the zero vector and\n\nthe initial residual r^(0) = -b(p) - mathcal A(p)X^(0)\nthe initial conjugate direction d^(0) = r^(0)\ninitialize Y^(0) = mathcal A(p)X^(0)\n\nIt then performs the following steps at iteration k=0 until the stopping_criterion is fulfilled.\n\ncompute a step size α_k = displaystylefrac r^(k) mathcal A(p)r^(k) _p mathcal A(p)d^(k) mathcal A(p)d^(k) _p\ndo a step X^(k+1) = X^(k) + α_kd^(k)\nupdate the residual r^(k+1) = r^(k) + α_k Y^(k)\ncompute Z = mathcal A(p)r^(k+1)\nUpdate the conjugate coefficient β_k = displaystylefrac r^(k+1) mathcal A(p)r^(k+1) _p r^(k) mathcal A(p)r^(k) _p\nUpdate the conjugate direction 
d^(k+1) = r^(k+1) + β_kd^(k)\nUpdate Y^(k+1) = -Z + β_k Y^(k)\n\nNote that the right hand side of Step 7 is the same as evaluating mathcal Ad^(k+1), but avoids the actual evaluation\n\nInput\n\nTpM the TangentSpace as the domain\nA a symmetric linear operator on the tangent space (M, p, X) -> Y\nb a vector field on the tangent space (M, p) -> X\nX the initial tangent vector\n\nKeyword arguments\n\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\nstopping_criterion=StopAfterIteration(manifold_dimension(M)|StopWhenRelativeResidualLess(c,1e-8), where c is lVert b rVert_: a functor indicating that the stopping criterion is fulfilled\n\nOutput\n\nThe obtained approximate minimizer p^*. 
To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/conjugate_residual/#Manopt.conjugate_residual!","page":"Conjugate Residual","title":"Manopt.conjugate_residual!","text":"conjugate_residual(TpM::TangentSpace, A, b, X=zero_vector(TpM))\nconjugate_residual(TpM::TangentSpace, slso::SymmetricLinearSystemObjective, X=zero_vector(TpM))\nconjugate_residual!(TpM::TangentSpace, A, b, X)\nconjugate_residual!(TpM::TangentSpace, slso::SymmetricLinearSystemObjective, X)\n\nCompute the solution of mathcal A(p)X + b(p) = 0_p, where\n\nmathcal A is a linear, symmetric operator on T_pmathcal M\nb is a vector field on the manifold\nX T_pmathcal M is a tangent vector\n0_p is the zero vector in T_pmathcal M.\n\nThis implementation follows Algorithm 3 in [LY24] and is initialised with X^(0) as the zero vector and\n\nthe initial residual r^(0) = -b(p) - mathcal A(p)X^(0)\nthe initial conjugate direction d^(0) = r^(0)\ninitialize Y^(0) = mathcal A(p)X^(0)\n\nIt then performs the following steps at iteration k=0 until the stopping_criterion is fulfilled.\n\ncompute a step size α_k = displaystylefrac r^(k) mathcal A(p)r^(k) _p mathcal A(p)d^(k) mathcal A(p)d^(k) _p\ndo a step X^(k+1) = X^(k) + α_kd^(k)\nupdate the residual r^(k+1) = r^(k) + α_k Y^(k)\ncompute Z = mathcal A(p)r^(k+1)\nUpdate the conjugate coefficient β_k = displaystylefrac r^(k+1) mathcal A(p)r^(k+1) _p r^(k) mathcal A(p)r^(k) _p\nUpdate the conjugate direction d^(k+1) = r^(k+1) + β_kd^(k)\nUpdate Y^(k+1) = -Z + β_k Y^(k)\n\nNote that the right hand side of Step 7 is the same as evaluating mathcal Ad^(k+1), but avoids the actual evaluation.\n\nInput\n\nTpM the TangentSpace as the domain\nA a symmetric linear operator on the tangent space (M, p, X) -> Y\nb a vector field on the tangent space (M, p) -> X\nX the initial tangent vector\n\nKeyword arguments\n\nevaluation=AllocatingEvaluation(): specify whether the 
functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\nstopping_criterion=StopAfterIteration(manifold_dimension(M)|StopWhenRelativeResidualLess(c,1e-8), where c is lVert b rVert_: a functor indicating that the stopping criterion is fulfilled\n\nOutput\n\nThe obtained approximate minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/conjugate_residual/#State","page":"Conjugate Residual","title":"State","text":"","category":"section"},{"location":"solvers/conjugate_residual/","page":"Conjugate Residual","title":"Conjugate Residual","text":"ConjugateResidualState","category":"page"},{"location":"solvers/conjugate_residual/#Manopt.ConjugateResidualState","page":"Conjugate Residual","title":"Manopt.ConjugateResidualState","text":"ConjugateResidualState{T,R,TStop<:StoppingCriterion} <: AbstractManoptSolverState\n\nA state for the conjugate_residual solver.\n\nFields\n\nX::T: the iterate\nr::T: the residual r = -b(p) - mathcal A(p)X\nd::T: the conjugate direction\nAr::T, Ad::T: storages for mathcal A(p)d, mathcal A(p)r\nrAr::R: internal field for storing r mathcal A(p)r \nα::R: a step length\nβ::R: the conjugate coefficient\nstop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled\n\nConstructor\n\nConjugateResidualState(TpM::TangentSpace,slso::SymmetricLinearSystemObjective; kwargs...)\n\nInitialise the state with default values.\n\nKeyword arguments\n\nr=-get_gradient(TpM, slso, X)\nd=copy(TpM, r)\nAr=get_hessian(TpM, slso, X, r)\nAd=copy(TpM, 
Ar)\nα::R=0.0\nβ::R=0.0\nstopping_criterion=StopAfterIteration(manifold_dimension(M))|StopWhenGradientNormLess(1e-8): a functor indicating that the stopping criterion is fulfilled\nX=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal M\n\nSee also\n\nconjugate_residual\n\n\n\n\n\n","category":"type"},{"location":"solvers/conjugate_residual/#Objective","page":"Conjugate Residual","title":"Objective","text":"","category":"section"},{"location":"solvers/conjugate_residual/","page":"Conjugate Residual","title":"Conjugate Residual","text":"SymmetricLinearSystemObjective","category":"page"},{"location":"solvers/conjugate_residual/#Manopt.SymmetricLinearSystemObjective","page":"Conjugate Residual","title":"Manopt.SymmetricLinearSystemObjective","text":"SymmetricLinearSystemObjective{E<:AbstractEvaluationType,TA,T} <: AbstractManifoldObjective{E}\n\nModel the objective\n\nf(X) = frac12 lVert mathcal AX + b rVert_p^2qquad X T_pmathcal M\n\ndefined on the tangent space T_pmathcal M at p on the manifold mathcal M.\n\nIn other words, this is an objective to solve mathcal A(p)X = -b(p) for some linear symmetric operator and a vector function. Note the minus on the right hand side, which makes this objective especially tailored for (iteratively) solving Newton-like equations.\n\nFields\n\nA!!: a symmetric, linear operator on the tangent space\nb!!: a gradient function\n\nwhere A!! can work as an allocating operator (M, p, X) -> Y or an in-place one (M, Y, p, X) -> Y, and similarly b!! can either be a function (M, p) -> X or (M, X, p) -> X. 
The first variants allocate for the result, the second variants work in-place.\n\nConstructor\n\nSymmetricLinearSystemObjective(A, b; evaluation=AllocatingEvaluation())\n\nGenerate the objective specifying whether the two parts work allocating or in-place.\n\n\n\n\n\n","category":"type"},{"location":"solvers/conjugate_residual/#Additional-stopping-criterion","page":"Conjugate Residual","title":"Additional stopping criterion","text":"","category":"section"},{"location":"solvers/conjugate_residual/","page":"Conjugate Residual","title":"Conjugate Residual","text":"StopWhenRelativeResidualLess","category":"page"},{"location":"solvers/conjugate_residual/#Manopt.StopWhenRelativeResidualLess","page":"Conjugate Residual","title":"Manopt.StopWhenRelativeResidualLess","text":"StopWhenRelativeResidualLess <: StoppingCriterion\n\nStop when the relative residual in the conjugate_residual is below a certain threshold, i.e.\n\ndisplaystylefraclVert r^(k) rVert_c ε\n\nwhere c = lVert b rVert_ is the norm of the initial vector field b in mathcal A(p)X + b(p) = 0_p from the conjugate_residual\n\nFields\n\nat_iteration::Int: an integer indicating at which iteration the stopping criterion last indicated to stop, which might also be before the solver started (0). 
Any negative value indicates that this was not yet the case;\nc: the initial norm\nε: the threshold\nnorm_rk: the last computed norm of the residual\n\nConstructor\n\nStopWhenRelativeResidualLess(c, ε; norm_r = 2*c*ε)\n\nInitialise the stopping criterion.\n\nnote: Note\nThe initial norm of the vector field c = lVert b rVert_ that is stored internally is updated on initialisation, that is, if this stopping criterion is called with k<=0.\n\n\n\n\n\n","category":"type"},{"location":"solvers/conjugate_residual/#Internal-functions","page":"Conjugate Residual","title":"Internal functions","text":"","category":"section"},{"location":"solvers/conjugate_residual/","page":"Conjugate Residual","title":"Conjugate Residual","text":"Manopt.get_b","category":"page"},{"location":"solvers/conjugate_residual/#Manopt.get_b","page":"Conjugate Residual","title":"Manopt.get_b","text":"get_b(TpM::TangentSpace, slso::SymmetricLinearSystemObjective)\n\nevaluate the stored value for computing the right hand side b in mathcal A=-b.\n\n\n\n\n\n","category":"function"},{"location":"solvers/conjugate_residual/#Literature","page":"Conjugate Residual","title":"Literature","text":"","category":"section"},{"location":"solvers/conjugate_residual/","page":"Conjugate Residual","title":"Conjugate Residual","text":"Z. Lai and A. Yoshise. Riemannian Interior Point Methods for Constrained Optimization on Manifolds. 
Journal of Optimization Theory and Applications 201, 433–469 (2024), arXiv:2203.09762.\n\n\n\n","category":"page"},{"location":"tutorials/EmbeddingObjectives/#How-to-define-the-cost-in-the-embedding","page":"Define objectives in the embedding","title":"How to define the cost in the embedding","text":"","category":"section"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"Ronny Bergmann","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"Specifying a cost function f mathcal M ℝ on a manifold is usually the model one starts with. Specifying its gradient operatornamegrad f mathcal M Tmathcal M, or more precisely operatornamegradf(p) T_pmathcal M, and eventually a Hessian operatornameHess f T_pmathcal M T_pmathcal M are then necessary to perform optimization. Since these might be challenging to compute, especially for users whose main area is not manifolds and differential geometry, easier-to-use methods are welcome.","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"This tutorial discusses how to specify f in the embedding as tilde f, maybe only locally around the manifold, and use the Euclidean gradient tilde f and Hessian ^2 tilde f within Manopt.jl.","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"For the theoretical background see convert a Euclidean to a Riemannian gradient, or Section 4.7 of [Bou23] for the gradient part or Section 5.11 as well as [Ngu23] for the background on converting Hessians.","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives 
in the embedding","text":"Here we use the Examples 9.40 and 9.49 of [Bou23] and compare the different ways one can call the solver, depending on which gradient and/or Hessian one provides.","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"using Manifolds, Manopt, ManifoldDiff\nusing LinearAlgebra, Random, Colors, Plots\nRandom.seed!(123)","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"We consider the cost function on the Grassmann manifold given by","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"n = 5\nk = 2\nM = Grassmann(5,2)\nA = Symmetric(rand(n,n));","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"f(M, p) = 1 / 2 * tr(p' * A * p)","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"Note that this implementation is also a valid continuation of f into the (lifted) embedding of the Grassmann manifold. 
In the implementation we can use f for both the Euclidean tilde f and the Grassmann case f.","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"Its Euclidean gradient nabla f and Hessian nabla^2f are easy to compute as","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"∇f(M, p) = A * p\n∇²f(M,p,X) = A*X","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"On the other hand, from the aforementioned Example 9.49 we can also state the Riemannian gradient and Hessian for comparison as","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"grad_f(M, p) = A * p - p * (p' * A * p)\nHess_f(M, p, X) = A * X - p * p' * A * X - X * p' * A * p","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"We can verify that these are correct, at least numerically, by calling check_gradient","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"check_gradient(M, f, grad_f; plot=true)","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"(Image: )","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"and the check_Hessian, which requires a bit more tolerance in its linearity 
verification","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"check_Hessian(M, f, grad_f, Hess_f; plot=true, error=:error, atol=1e-15)","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"(Image: )","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"While they look reasonable here and were already derived, for the general case this derivation might be more complicated.","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"Luckily there exist two functions in ManifoldDiff.jl that are implemented for several manifolds from Manifolds.jl, namely riemannian_gradient(M, p, eG) that converts a Euclidean gradient eG=nabla tilde f(p) into the Riemannian one operatornamegrad f(p) and riemannian_Hessian(M, p, eG, eH, X) which converts the Euclidean Hessian eH=nabla^2 tilde f(p)X into operatornameHess f(p)X, where we also require the Euclidean gradient eG=nabla tilde f(p).","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"So we can define","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"grad2_f(M, p) = riemannian_gradient(M, p, ∇f(get_embedding(M), embed(M, p)))","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"where, formally, we call embed(M,p) before passing p to the Euclidean gradient, though here 
(for the Grassmann manifold with Stiefel representation) the embedding function is the identity.","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"Similarly for the Hessian, where in our example the embeddings of both the points and tangent vectors are the identity.","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"function Hess2_f(M, p, X)\n return riemannian_Hessian(\n M,\n p,\n ∇f(get_embedding(M), embed(M, p)),\n ∇²f(get_embedding(M), embed(M, p), embed(M, p, X)),\n X\n )\nend","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"And we can again verify these numerically,","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"check_gradient(M, f, grad2_f; plot=true)","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"(Image: )","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"and","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"check_Hessian(M, f, grad2_f, Hess2_f; plot=true, error=:error, atol=1e-14)","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"(Image: )","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define 
objectives in the embedding","text":"which yields the same result, but we see that the Euclidean conversion might be a bit less stable.","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"Now, if we want to use these in optimization, we can provide these two functions in a solver call, for example","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"p0 = [1.0 0.0; 0.0 1.0; 0.0 0.0; 0.0 0.0; 0.0 0.0]\nr1 = adaptive_regularization_with_cubics(\n M,\n f,\n grad_f,\n Hess_f,\n p0;\n debug=[:Iteration, :Cost, \"\\n\"],\n return_objective=true,\n return_state=true,\n)\nq1 = get_solver_result(r1)\nr1","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"Initial f(x): 0.666814\n# 1 f(x): 0.329582\n# 2 f(x): -0.251913\n# 3 f(x): -0.451908\n# 4 f(x): -0.604753\n# 5 f(x): -0.608791\n# 6 f(x): -0.608797\n# 7 f(x): -0.608797\n\n# Solver state for `Manopt.jl`s Adaptive Regularization with Cubics (ARC)\nAfter 7 iterations\n\n## Parameters\n* η1 | η2 : 0.1 | 0.9\n* γ1 | γ2 : 0.1 | 2.0\n* σ (σmin) : 0.0004082482904638632 (1.0e-10)\n* ρ (ρ_regularization) : 1.0002163851951777 (1000.0)\n* retraction method : ExponentialRetraction()\n* sub solver state :\n | # Solver state for `Manopt.jl`s Lanczos Iteration\n | After 6 iterations\n | \n | ## Parameters\n | * σ : 0.0040824829046386315\n | * # of Lanczos vectors used : 6\n | \n | ## Stopping criteria\n | (a) For the Lanczos Iteration\n | Stop When _one_ of the following are fulfilled:\n | Max Iteration 6: reached\n | First order progress with θ=0.5: not reached\n | Overall: reached\n | (b) For the Newton sub solver\n | Max Iteration 200: not reached\n | This indicates convergence: No\n\n## Stopping criterion\n\nStop When _one_ of 
the following are fulfilled:\n Max Iteration 40: not reached\n |grad f| < 1.0e-9: reached\n All Lanczos vectors (5) used: not reached\nOverall: reached\nThis indicates convergence: Yes\n\n## Debug\n :Iteration = [ (:Iteration, \"# %-6d\"), (:Cost, \"f(x): %f\"), \"\\n\" ]","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"but if you choose to go with the conversions, keeping track of the embedding and defining two new functions might be tedious. There is a shortcut that performs the change internally when necessary: specify objective_type=:Euclidean.","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"r2 = adaptive_regularization_with_cubics(\n M,\n f,\n ∇f,\n ∇²f,\n p0;\n # The one line that differs: specify that our grad/Hess are Euclidean:\n objective_type=:Euclidean,\n debug=[:Iteration, :Cost, \"\\n\"],\n return_objective=true,\n return_state=true,\n)\nq2 = get_solver_result(r2)\nr2","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"Initial f(x): 0.666814\n# 1 f(x): 0.329582\n# 2 f(x): -0.251913\n# 3 f(x): -0.451908\n# 4 f(x): -0.604753\n# 5 f(x): -0.608791\n# 6 f(x): -0.608797\n# 7 f(x): -0.608797\n\n# Solver state for `Manopt.jl`s Adaptive Regularization with Cubics (ARC)\nAfter 7 iterations\n\n## Parameters\n* η1 | η2 : 0.1 | 0.9\n* γ1 | γ2 : 0.1 | 2.0\n* σ (σmin) : 0.0004082482904638632 (1.0e-10)\n* ρ (ρ_regularization) : 1.000409105075989 (1000.0)\n* retraction method : ExponentialRetraction()\n* sub solver state :\n | # Solver state for `Manopt.jl`s Lanczos Iteration\n | After 6 iterations\n | \n | ## Parameters\n | * σ : 0.0040824829046386315\n | * # of Lanczos vectors used : 6\n | \n | ## Stopping criteria\n | (a) 
For the Lanczos Iteration\n | Stop When _one_ of the following are fulfilled:\n | Max Iteration 6: reached\n | First order progress with θ=0.5: not reached\n | Overall: reached\n | (b) For the Newton sub solver\n | Max Iteration 200: not reached\n | This indicates convergence: No\n\n## Stopping criterion\n\nStop When _one_ of the following are fulfilled:\n Max Iteration 40: not reached\n |grad f| < 1.0e-9: reached\n All Lanczos vectors (5) used: not reached\nOverall: reached\nThis indicates convergence: Yes\n\n## Debug\n :Iteration = [ (:Iteration, \"# %-6d\"), (:Cost, \"f(x): %f\"), \"\\n\" ]","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"which returns the same result, see","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"distance(M, q1, q2)","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"5.599906634890012e-16","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"This conversion also works for the gradients of constraints, and is passed down to subsolvers by default when these are created using the Euclidean objective f, nabla f and nabla^2 f.","category":"page"},{"location":"tutorials/EmbeddingObjectives/#Summary","page":"Define objectives in the embedding","title":"Summary","text":"","category":"section"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"If you have the Euclidean gradient (or Hessian) available for a solver call, all you need to provide is objective_type=:Euclidean to convert the objective to a Riemannian 
one.","category":"page"},{"location":"tutorials/EmbeddingObjectives/#Literature","page":"Define objectives in the embedding","title":"Literature","text":"","category":"section"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"N. Boumal. An Introduction to Optimization on Smooth Manifolds. First Edition (Cambridge University Press, 2023).\n\n\n\nD. Nguyen. Operator-Valued Formulas for Riemannian Gradient and Hessian and Families of Tractable Metrics in Riemannian Optimization. Journal of Optimization Theory and Applications 198, 135–164 (2023), arXiv:2009.10159.\n\n\n\n","category":"page"},{"location":"tutorials/EmbeddingObjectives/#Technical-details","page":"Define objectives in the embedding","title":"Technical details","text":"","category":"section"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"This tutorial is cached. 
It was last run on the following package versions.","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"using Pkg\nPkg.status()","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"Status `~/work/Manopt.jl/Manopt.jl/tutorials/Project.toml`\n [6e4b80f9] BenchmarkTools v1.5.0\n⌅ [5ae59095] Colors v0.12.11\n [31c24e10] Distributions v0.25.113\n [26cc04aa] FiniteDifferences v0.12.32\n [7073ff75] IJulia v1.26.0\n [8ac3fa9e] LRUCache v1.6.1\n [af67fdf4] ManifoldDiff v0.3.13\n [1cead3c2] Manifolds v0.10.7\n [3362f125] ManifoldsBase v0.15.22\n [0fc0a36d] Manopt v0.5.3 `~/work/Manopt.jl/Manopt.jl`\n [91a5bcdd] Plots v1.40.9\n [731186ca] RecursiveArrayTools v3.27.4\nInfo Packages marked with ⌅ have new versions available but compatibility constraints restrict them from upgrading. To see why use `status --outdated`","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"using Dates\nnow()","category":"page"},{"location":"tutorials/EmbeddingObjectives/","page":"Define objectives in the embedding","title":"Define objectives in the embedding","text":"2024-11-21T20:37:41.341","category":"page"},{"location":"solvers/alternating_gradient_descent/#solver-alternating-gradient-descent","page":"Alternating Gradient Descent","title":"Alternating gradient descent","text":"","category":"section"},{"location":"solvers/alternating_gradient_descent/","page":"Alternating Gradient Descent","title":"Alternating Gradient Descent","text":"CurrentModule = Manopt","category":"page"},{"location":"solvers/alternating_gradient_descent/","page":"Alternating Gradient Descent","title":"Alternating Gradient 
Descent","text":"alternating_gradient_descent\nalternating_gradient_descent!","category":"page"},{"location":"solvers/alternating_gradient_descent/#Manopt.alternating_gradient_descent","page":"Alternating Gradient Descent","title":"Manopt.alternating_gradient_descent","text":"alternating_gradient_descent(M::ProductManifold, f, grad_f, p=rand(M))\nalternating_gradient_descent(M::ProductManifold, ago::ManifoldAlternatingGradientObjective, p)\nalternating_gradient_descent!(M::ProductManifold, f, grad_f, p)\nalternating_gradient_descent!(M::ProductManifold, ago::ManifoldAlternatingGradientObjective, p)\n\nperform an alternating gradient descent. This can be done in-place of the start point p\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\ngrad_f: a gradient, which can be given in one of two ways\nas a single function returning an ArrayPartition from RecursiveArrayTools.jl or\nas a vector of functions, each returning a component of the whole gradient\np: a point on the manifold mathcal M\n\nKeyword arguments\n\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). 
Since usually the first argument is the manifold, the modified argument is the second.\nevaluation_order=:Linear: whether to use a randomly permuted sequence (:FixedRandom), a per cycle permuted sequence (:Random) or the default :Linear one.\ninner_iterations=5: how many gradient steps to take in a component before alternating to the next\nstopping_criterion=StopAfterIteration(1000): a functor indicating that the stopping criterion is fulfilled\nstepsize=ArmijoLinesearch(): a functor inheriting from Stepsize to determine a step size\norder=[1:n]: the initial permutation, where n is the number of gradients in grad_f.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\n\nOutput\n\nusually the obtained (approximate) minimizer, see get_solver_return for details\n\nnote: Note\nThe input of each of the (component) gradients is still the whole vector X, just that all components other than the i-th are assumed to be fixed and just the i-th component's gradient is computed / returned.\n\n\n\n\n\n","category":"function"},{"location":"solvers/alternating_gradient_descent/#Manopt.alternating_gradient_descent!","page":"Alternating Gradient Descent","title":"Manopt.alternating_gradient_descent!","text":"alternating_gradient_descent(M::ProductManifold, f, grad_f, p=rand(M))\nalternating_gradient_descent(M::ProductManifold, ago::ManifoldAlternatingGradientObjective, p)\nalternating_gradient_descent!(M::ProductManifold, f, grad_f, p)\nalternating_gradient_descent!(M::ProductManifold, ago::ManifoldAlternatingGradientObjective, p)\n\nperform an alternating gradient descent. 
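The vector-of-gradients variant described above can be sketched as follows. This is a minimal illustration under stated assumptions, not code from the package: the cost, the component gradients, the target points a and b, and the use of p[M, i] to access product-point components are all illustrative.

```julia
using Manopt, Manifolds, RecursiveArrayTools

# Hypothetical example: minimise a separable cost over a product of two
# Euclidean factors, f(x, y) = |x - a|^2 + |y - b|^2.
M = Euclidean(2) × Euclidean(2)
a, b = [1.0, 0.0], [0.0, 1.0]

f(M, p) = sum(abs2, p[M, 1] .- a) + sum(abs2, p[M, 2] .- b)

# One gradient function per component: each receives the whole point p,
# treats the other component as fixed, and returns only its own gradient part.
grad_f = [
    (M, p) -> 2 .* (p[M, 1] .- a),
    (M, p) -> 2 .* (p[M, 2] .- b),
]

# Points on a ProductManifold are ArrayPartitions from RecursiveArrayTools.jl.
p0 = ArrayPartition([0.0, 0.0], [0.0, 0.0])
q = alternating_gradient_descent(M, f, grad_f, p0; inner_iterations=3)
```

Here the solver cycles through the two components, taking inner_iterations gradient steps in one component before alternating to the next.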
This can be done in-place of the start point p\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\ngrad_f: a gradient, which can be given in one of two ways\nas a single function returning an ArrayPartition from RecursiveArrayTools.jl or\nas a vector of functions, each returning a component of the whole gradient\np: a point on the manifold mathcal M\n\nKeyword arguments\n\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\nevaluation_order=:Linear: whether to use a randomly permuted sequence (:FixedRandom), a per cycle permuted sequence (:Random) or the default :Linear one.\ninner_iterations=5: how many gradient steps to take in a component before alternating to the next\nstopping_criterion=StopAfterIteration(1000): a functor indicating that the stopping criterion is fulfilled\nstepsize=ArmijoLinesearch(): a functor inheriting from Stepsize to determine a step size\norder=[1:n]: the initial permutation, where n is the number of gradients in grad_f.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\n\nOutput\n\nusually the obtained (approximate) minimizer, see get_solver_return for details\n\nnote: Note\nThe input of each of the (component) gradients is still the whole vector X, just that all components other than the i-th are assumed to be fixed and just the i-th component's gradient is computed / returned.\n\n\n\n\n\n","category":"function"},{"location":"solvers/alternating_gradient_descent/#State","page":"Alternating Gradient 
Descent","title":"State","text":"","category":"section"},{"location":"solvers/alternating_gradient_descent/","page":"Alternating Gradient Descent","title":"Alternating Gradient Descent","text":"AlternatingGradientDescentState","category":"page"},{"location":"solvers/alternating_gradient_descent/#Manopt.AlternatingGradientDescentState","page":"Alternating Gradient Descent","title":"Manopt.AlternatingGradientDescentState","text":"AlternatingGradientDescentState <: AbstractGradientDescentSolverState\n\nStore the fields for an alternating gradient descent algorithm, see also alternating_gradient_descent.\n\nFields\n\ndirection::DirectionUpdateRule\nevaluation_order::Symbol: whether to use a randomly permuted sequence (:FixedRandom), a per cycle newly permuted sequence (:Random) or the default :Linear evaluation order.\ninner_iterations: how many gradient steps to take in a component before alternating to the next\norder: the current permutation\nretraction_method::AbstractRetractionMethod: a retraction operatornameretr to use, see the section on retractions\nstepsize::Stepsize: a functor inheriting from Stepsize to determine a step size\nstop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled\np::P: a point on the manifold mathcal M storing the current iterate\nX::T: a tangent vector at the point p on the manifold mathcal M storing the gradient at the current iterate\nk, i: internal counters for the outer and inner iterations, respectively.\n\nConstructors\n\nAlternatingGradientDescentState(M::AbstractManifold; kwargs...)\n\nKeyword arguments\n\ninner_iterations=5\np=rand(M): a point on the manifold mathcal M\norder_type::Symbol=:Linear\norder::Vector{<:Int}=Int[]\nstopping_criterion=StopAfterIteration(1000): a functor indicating that the stopping criterion is fulfilled\nstepsize=default_stepsize(M, AlternatingGradientDescentState): a functor inheriting from Stepsize to determine a step size\nX=zero_vector(M, p): a tangent vector at the 
point p on the manifold mathcal M\n\nGenerate the options for a point p, where inner_iterations, order_type, order, retraction_method, stopping_criterion, and stepsize are keyword arguments\n\n\n\n\n\n","category":"type"},{"location":"solvers/alternating_gradient_descent/","page":"Alternating Gradient Descent","title":"Alternating Gradient Descent","text":"Additionally, the options share a DirectionUpdateRule, which chooses the current component, so they can be decorated further; the innermost one should always be the following one, though.","category":"page"},{"location":"solvers/alternating_gradient_descent/","page":"Alternating Gradient Descent","title":"Alternating Gradient Descent","text":"AlternatingGradient\nManopt.AlternatingGradientRule","category":"page"},{"location":"solvers/alternating_gradient_descent/#Manopt.AlternatingGradient","page":"Alternating Gradient Descent","title":"Manopt.AlternatingGradient","text":"AlternatingGradient(; kwargs...)\nAlternatingGradient(M::AbstractManifold; kwargs...)\n\nSpecify that a gradient based method should only update parts of the gradient in order to do an alternating gradient descent.\n\nKeyword arguments\n\ninitial_gradient=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal M\np=rand(M): a point on the manifold mathcal M to specify the initial value\n\ninfo: Info\nThis function generates a ManifoldDefaultsFactory for AlternatingGradientRule. 
For default values that depend on the manifold, this factory postpones the construction until the manifold, for example from a corresponding AbstractManoptSolverState, is available.\n\n\n\n\n\n","category":"function"},{"location":"solvers/alternating_gradient_descent/#Manopt.AlternatingGradientRule","page":"Alternating Gradient Descent","title":"Manopt.AlternatingGradientRule","text":"AlternatingGradientRule <: AbstractGradientGroupDirectionRule\n\nCreate a functor (problem, state, k) -> (s, X) to evaluate the alternating gradient, that alternates between the components of the gradient and has a field for partial evaluation of the gradient in-place.\n\nFields\n\nX::T: a tangent vector at the point p on the manifold mathcal M\n\nConstructor\n\nAlternatingGradientRule(M::AbstractManifold; p=rand(M), X=zero_vector(M, p))\n\nInitialize the alternating gradient processor with tangent vector type of X, where both M and p are just help variables.\n\nSee also\n\nalternating_gradient_descent, AlternatingGradient\n\n\n\n\n\n","category":"type"},{"location":"solvers/alternating_gradient_descent/","page":"Alternating Gradient Descent","title":"Alternating Gradient Descent","text":"which internally uses","category":"page"},{"location":"solvers/alternating_gradient_descent/#sec-agd-technical-details","page":"Alternating Gradient Descent","title":"Technical details","text":"","category":"section"},{"location":"solvers/alternating_gradient_descent/","page":"Alternating Gradient Descent","title":"Alternating Gradient Descent","text":"The alternating_gradient_descent solver requires the following functions of a manifold to be available","category":"page"},{"location":"solvers/alternating_gradient_descent/","page":"Alternating Gradient Descent","title":"Alternating Gradient Descent","text":"The problem has to be phrased on a ProductManifold, to be able to","category":"page"},{"location":"solvers/alternating_gradient_descent/","page":"Alternating Gradient 
Descent","title":"Alternating Gradient Descent","text":"alternate between parts of the input.","category":"page"},{"location":"solvers/alternating_gradient_descent/","page":"Alternating Gradient Descent","title":"Alternating Gradient Descent","text":"A retract!(M, q, p, X); it is recommended to set the default_retraction_method to a favourite retraction. If this default is set, a retraction_method= does not have to be specified.\nBy default alternating gradient descent uses ArmijoLinesearch which requires max_stepsize(M) to be set and an implementation of inner(M, p, X).\nBy default the tangent vector storing the gradient is initialized calling zero_vector(M,p).","category":"page"},{"location":"solvers/truncated_conjugate_gradient_descent/#tCG","page":"Steihaug-Toint TCG Method","title":"Steihaug-Toint truncated conjugate gradient method","text":"","category":"section"},{"location":"solvers/truncated_conjugate_gradient_descent/","page":"Steihaug-Toint TCG Method","title":"Steihaug-Toint TCG Method","text":"Solve the constrained optimization problem on the tangent space","category":"page"},{"location":"solvers/truncated_conjugate_gradient_descent/","page":"Steihaug-Toint TCG Method","title":"Steihaug-Toint TCG Method","text":"beginalign*\noperatorname*argmin_Y T_pmathcalM m_p(Y) = f(p) +\noperatornamegradf(p) Y_p + frac12 mathcalH_pY Y_p\ntextsuch that lVert Y rVert_p Δ\nendalign*","category":"page"},{"location":"solvers/truncated_conjugate_gradient_descent/","page":"Steihaug-Toint TCG Method","title":"Steihaug-Toint TCG Method","text":"on the tangent space T_pmathcal M of a Riemannian manifold mathcal M by using the Steihaug-Toint truncated conjugate-gradient (tCG) method, see [ABG06], Algorithm 2, and [CGT00]. 
Here mathcal H_p is either the Hessian operatornameHess f(p) or a linear symmetric operator on the tangent space approximating the Hessian.","category":"page"},{"location":"solvers/truncated_conjugate_gradient_descent/#Interface","page":"Steihaug-Toint TCG Method","title":"Interface","text":"","category":"section"},{"location":"solvers/truncated_conjugate_gradient_descent/","page":"Steihaug-Toint TCG Method","title":"Steihaug-Toint TCG Method","text":" truncated_conjugate_gradient_descent\n truncated_conjugate_gradient_descent!","category":"page"},{"location":"solvers/truncated_conjugate_gradient_descent/#Manopt.truncated_conjugate_gradient_descent","page":"Steihaug-Toint TCG Method","title":"Manopt.truncated_conjugate_gradient_descent","text":"truncated_conjugate_gradient_descent(M, f, grad_f, Hess_f, p=rand(M), X=rand(M; vector_at=p);\n kwargs...\n)\ntruncated_conjugate_gradient_descent(M, mho::ManifoldHessianObjective, p=rand(M), X=rand(M; vector_at=p);\n kwargs...\n)\ntruncated_conjugate_gradient_descent(M, trmo::TrustRegionModelObjective, p=rand(M), X=rand(M; vector_at=p);\n kwargs...\n)\n\nsolve the trust-region subproblem\n\nbeginalign*\noperatorname*argmin_Y T_pmathcalM m_p(Y) = f(p) +\noperatornamegradf(p) Y_p + frac12 mathcalH_pY Y_p\ntextsuch that lVert Y rVert_p Δ\nendalign*\n\non a manifold mathcal M by using the Steihaug-Toint truncated conjugate-gradient (tCG) method. 
This can be done inplace of X.\n\nFor a description of the algorithm and theorems offering convergence guarantees, see [ABG06, CGT00].\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\ngrad_f: the (Riemannian) gradient operatornamegradf: \\mathcal M → T_{p}\\mathcal M of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place\nHess_f: the (Riemannian) Hessian operatornameHessf: T_{p}\\mathcal M → T_{p}\\mathcal M of f as a function (M, p, X) -> Y or a function (M, Y, p, X) -> Y computing Y in-place\np: a point on the manifold mathcal M\nX: a tangent vector at the point p on the manifold mathcal M\n\nInstead of the three functions, you either provide a ManifoldHessianObjective mho which is then used to build the trust region model, or a TrustRegionModelObjective trmo directly.\n\nKeyword arguments\n\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\npreconditioner: a preconditioner for the Hessian H. This is either an allocating function (M, p, X) -> Y or an in-place function (M, Y, p, X) -> Y, see evaluation, and by default set to the identity.\nθ=1.0: the superlinear convergence target rate of 1+θ\nκ=0.1: the linear convergence target rate.\nproject!=copyto!: for numerical stability it is possible to project onto the tangent space after every iteration. The function has to work inplace of Y, that is (M, Y, p, X) -> Y, where X and Y can be the same memory.\nrandomize=false: indicate whether X is initialised to a random vector or not. 
This disables preconditioning.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstopping_criterion=StopAfterIteration(manifold_dimension(base_manifold(TpM)))|StopWhenResidualIsReducedByFactorOrPower(; κ=κ, θ=θ)|StopWhenTrustRegionIsExceeded()|StopWhenCurvatureIsNegative()|StopWhenModelIncreased(): a functor indicating that the stopping criterion is fulfilled\ntrust_region_radius=injectivity_radius(M) / 4: the initial trust-region radius\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\nSee also\n\ntrust_regions\n\n\n\n\n\n","category":"function"},{"location":"solvers/truncated_conjugate_gradient_descent/#Manopt.truncated_conjugate_gradient_descent!","page":"Steihaug-Toint TCG Method","title":"Manopt.truncated_conjugate_gradient_descent!","text":"truncated_conjugate_gradient_descent(M, f, grad_f, Hess_f, p=rand(M), X=rand(M; vector_at=p);\n kwargs...\n)\ntruncated_conjugate_gradient_descent(M, mho::ManifoldHessianObjective, p=rand(M), X=rand(M; vector_at=p);\n kwargs...\n)\ntruncated_conjugate_gradient_descent(M, trmo::TrustRegionModelObjective, p=rand(M), X=rand(M; vector_at=p);\n kwargs...\n)\n\nsolve the trust-region subproblem\n\nbeginalign*\noperatorname*argmin_Y T_pmathcalM m_p(Y) = f(p) +\noperatornamegradf(p) Y_p + frac12 mathcalH_pY Y_p\ntextsuch that lVert Y rVert_p Δ\nendalign*\n\non a manifold mathcal M by using the Steihaug-Toint truncated conjugate-gradient (tCG) method. 
This can be done inplace of X.\n\nFor a description of the algorithm and theorems offering convergence guarantees, see [ABG06, CGT00].\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\ngrad_f: the (Riemannian) gradient operatornamegradf: \\mathcal M → T_{p}\\mathcal M of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place\nHess_f: the (Riemannian) Hessian operatornameHessf: T_{p}\\mathcal M → T_{p}\\mathcal M of f as a function (M, p, X) -> Y or a function (M, Y, p, X) -> Y computing Y in-place\np: a point on the manifold mathcal M\nX: a tangent vector at the point p on the manifold mathcal M\n\nInstead of the three functions, you either provide a ManifoldHessianObjective mho which is then used to build the trust region model, or a TrustRegionModelObjective trmo directly.\n\nKeyword arguments\n\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\npreconditioner: a preconditioner for the Hessian H. This is either an allocating function (M, p, X) -> Y or an in-place function (M, Y, p, X) -> Y, see evaluation, and by default set to the identity.\nθ=1.0: the superlinear convergence target rate of 1+θ\nκ=0.1: the linear convergence target rate.\nproject!=copyto!: for numerical stability it is possible to project onto the tangent space after every iteration. The function has to work inplace of Y, that is (M, Y, p, X) -> Y, where X and Y can be the same memory.\nrandomize=false: indicate whether X is initialised to a random vector or not. 
This disables preconditioning.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstopping_criterion=StopAfterIteration(manifold_dimension(base_manifold(TpM)))|StopWhenResidualIsReducedByFactorOrPower(; κ=κ, θ=θ)|StopWhenTrustRegionIsExceeded()|StopWhenCurvatureIsNegative()|StopWhenModelIncreased(): a functor indicating that the stopping criterion is fulfilled\ntrust_region_radius=injectivity_radius(M) / 4: the initial trust-region radius\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\nSee also\n\ntrust_regions\n\n\n\n\n\n","category":"function"},{"location":"solvers/truncated_conjugate_gradient_descent/#State","page":"Steihaug-Toint TCG Method","title":"State","text":"","category":"section"},{"location":"solvers/truncated_conjugate_gradient_descent/","page":"Steihaug-Toint TCG Method","title":"Steihaug-Toint TCG Method","text":"TruncatedConjugateGradientState","category":"page"},{"location":"solvers/truncated_conjugate_gradient_descent/#Manopt.TruncatedConjugateGradientState","page":"Steihaug-Toint TCG Method","title":"Manopt.TruncatedConjugateGradientState","text":"TruncatedConjugateGradientState <: AbstractHessianSolverState\n\ndescribe the Steihaug-Toint truncated conjugate-gradient method, with\n\nFields\n\nLet T denote the type of a tangent vector and R <: Real.\n\nδ::T: the conjugate gradient search direction\nδHδ, YPδ, δPδ, YPY: temporary inner products with Hδ and preconditioned inner products.\nHδ, HY: temporary results of the Hessian applied to δ and Y, respectively.\nκ::R: the linear convergence target rate.\nproject!: for numerical stability it is possible to project onto the tangent space after every 
iteration. The function has to work inplace of Y, that is (M, Y, p, X) -> Y, where X and Y can be the same memory.\nrandomize: indicate whether X is initialised to a random vector or not\nresidual::T: the gradient of the model m(Y)\nstop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled\nθ::R: the superlinear convergence target rate of 1+θ\ntrust_region_radius::R: the trust-region radius\nX::T: the gradient operatornamegradf(p)\nY::T: current iterate tangent vector\nz::T: the preconditioned residual\nz_r::R: inner product of the residual and z\n\nConstructor\n\nTruncatedConjugateGradientState(TpM::TangentSpace, Y=rand(TpM); kwargs...)\n\nInitialise the TCG state.\n\nInput\n\nTpM: a TangentSpace\n\nKeyword arguments\n\nκ=0.1\nproject!::F=copyto!: initialise the numerical stabilisation to just copy the result\nrandomize=false\nθ=1.0\ntrust_region_radius=injectivity_radius(base_manifold(TpM)) / 4\nstopping_criterion=StopAfterIteration(manifold_dimension(base_manifold(TpM)))|StopWhenResidualIsReducedByFactorOrPower(; κ=κ, θ=θ)|StopWhenTrustRegionIsExceeded()|StopWhenCurvatureIsNegative()|StopWhenModelIncreased(): a functor indicating that the stopping criterion is fulfilled\nX=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal M\n\nSee also\n\ntruncated_conjugate_gradient_descent, trust_regions\n\n\n\n\n\n","category":"type"},{"location":"solvers/truncated_conjugate_gradient_descent/#Stopping-criteria","page":"Steihaug-Toint TCG Method","title":"Stopping criteria","text":"","category":"section"},{"location":"solvers/truncated_conjugate_gradient_descent/","page":"Steihaug-Toint TCG Method","title":"Steihaug-Toint TCG Method","text":"StopWhenResidualIsReducedByFactorOrPower\nStopWhenTrustRegionIsExceeded\nStopWhenCurvatureIsNegative\nStopWhenModelIncreased\nManopt.set_parameter!(::StopWhenResidualIsReducedByFactorOrPower, ::Val{:ResidualPower}, 
::Any)\nManopt.set_parameter!(::StopWhenResidualIsReducedByFactorOrPower, ::Val{:ResidualFactor}, ::Any)","category":"page"},{"location":"solvers/truncated_conjugate_gradient_descent/#Manopt.StopWhenResidualIsReducedByFactorOrPower","page":"Steihaug-Toint TCG Method","title":"Manopt.StopWhenResidualIsReducedByFactorOrPower","text":"StopWhenResidualIsReducedByFactorOrPower <: StoppingCriterion\n\nA functor for testing if the norm of residual at the current iterate is reduced either by a power of 1+θ or by a factor κ compared to the norm of the initial residual. The criterion hence reads\n\nlVert r_k rVert_p lVert r_0 rVert_p^(0) min bigl( κ lVert r_0 rVert_p^(0) bigr).\n\nFields\n\nκ: the reduction factor\nθ: part of the reduction power\nat_iteration::Int: an integer indicating at which the stopping criterion last indicated to stop, which might also be before the solver started (0). Any negative value indicates that this was not yet the case;\n\nConstructor\n\nStopWhenResidualIsReducedByFactorOrPower(; κ=0.1, θ=1.0)\n\nInitialize the StopWhenResidualIsReducedByFactorOrPower functor to indicate to stop after the norm of the current residual is less than either the norm of the initial residual to the power of 1+θ or the norm of the initial residual times κ.\n\nSee also\n\ntruncated_conjugate_gradient_descent, trust_regions\n\n\n\n\n\n","category":"type"},{"location":"solvers/truncated_conjugate_gradient_descent/#Manopt.StopWhenTrustRegionIsExceeded","page":"Steihaug-Toint TCG Method","title":"Manopt.StopWhenTrustRegionIsExceeded","text":"StopWhenTrustRegionIsExceeded <: StoppingCriterion\n\nA functor for testing if the norm of the next iterate in the Steihaug-Toint truncated conjugate gradient method is larger than the trust-region radius θ lVert Y^(k)^* rVert_p^(k) and to end the algorithm when the trust region has been left.\n\nFields\n\nat_iteration::Int: an integer indicating at which the stopping criterion last indicated to stop, which might also be before the 
solver started (0). Any negative value indicates that this was not yet the case;\ntrr: the trust region radius\nYPY: the computed norm of Y.\n\nConstructor\n\nStopWhenTrustRegionIsExceeded()\n\ninitialize the StopWhenTrustRegionIsExceeded functor to indicate to stop after the norm of the next iterate is greater than the trust-region radius.\n\nSee also\n\ntruncated_conjugate_gradient_descent, trust_regions\n\n\n\n\n\n","category":"type"},{"location":"solvers/truncated_conjugate_gradient_descent/#Manopt.StopWhenCurvatureIsNegative","page":"Steihaug-Toint TCG Method","title":"Manopt.StopWhenCurvatureIsNegative","text":"StopWhenCurvatureIsNegative <: StoppingCriterion\n\nA functor for testing if the curvature of the model is negative, δ_k operatornameHess F(p)δ_k_p 0. In this case, the model is not strictly convex, and the stepsize as computed does not yield a reduction of the model.\n\nFields\n\nat_iteration::Int: an integer indicating at which the stopping criterion last indicated to stop, which might also be before the solver started (0). Any negative value indicates that this was not yet the case;\nvalue store the value of the inner product.\nreason: stores a reason of stopping if the stopping criterion has been reached, see get_reason.\n\nConstructor\n\nStopWhenCurvatureIsNegative()\n\nSee also\n\ntruncated_conjugate_gradient_descent, trust_regions\n\n\n\n\n\n","category":"type"},{"location":"solvers/truncated_conjugate_gradient_descent/#Manopt.StopWhenModelIncreased","page":"Steihaug-Toint TCG Method","title":"Manopt.StopWhenModelIncreased","text":"StopWhenModelIncreased <: StoppingCriterion\n\nA functor for testing if the model value increased.\n\nFields\n\nat_iteration::Int: an integer indicating at which the stopping criterion last indicated to stop, which might also be before the solver started (0). 
Any negative value indicates that this was not yet the case;\nmodel_value: store the last model value\ninc_model_value: store the model value that increased\n\nConstructor\n\nStopWhenModelIncreased()\n\nSee also\n\ntruncated_conjugate_gradient_descent, trust_regions\n\n\n\n\n\n","category":"type"},{"location":"solvers/truncated_conjugate_gradient_descent/#Manopt.set_parameter!-Tuple{StopWhenResidualIsReducedByFactorOrPower, Val{:ResidualPower}, Any}","page":"Steihaug-Toint TCG Method","title":"Manopt.set_parameter!","text":"set_parameter!(c::StopWhenResidualIsReducedByFactorOrPower, :ResidualPower, v)\n\nUpdate the residual power θ to v.\n\n\n\n\n\n","category":"method"},{"location":"solvers/truncated_conjugate_gradient_descent/#Manopt.set_parameter!-Tuple{StopWhenResidualIsReducedByFactorOrPower, Val{:ResidualFactor}, Any}","page":"Steihaug-Toint TCG Method","title":"Manopt.set_parameter!","text":"set_parameter!(c::StopWhenResidualIsReducedByFactorOrPower, :ResidualFactor, v)\n\nUpdate the residual factor κ to v.\n\n\n\n\n\n","category":"method"},{"location":"solvers/truncated_conjugate_gradient_descent/#Trust-region-model","page":"Steihaug-Toint TCG Method","title":"Trust region model","text":"","category":"section"},{"location":"solvers/truncated_conjugate_gradient_descent/","page":"Steihaug-Toint TCG Method","title":"Steihaug-Toint TCG Method","text":"TrustRegionModelObjective","category":"page"},{"location":"solvers/truncated_conjugate_gradient_descent/#Manopt.TrustRegionModelObjective","page":"Steihaug-Toint TCG Method","title":"Manopt.TrustRegionModelObjective","text":"TrustRegionModelObjective{O<:AbstractManifoldHessianObjective} <: AbstractManifoldSubObjective{O}\n\nA trust region model of the form\n\n m(X) = f(p) + operatornamegrad f(p) X_p + frac12 operatornameHess f(p)X X_p\n\nFields\n\nobjective: an AbstractManifoldHessianObjective providing f, its gradient and Hessian\n\nConstructors\n\nTrustRegionModelObjective(objective)\n\nwith either an 
AbstractManifoldHessianObjective objective or a decorator containing such an objective\n\n\n\n\n\n","category":"type"},{"location":"solvers/truncated_conjugate_gradient_descent/#sec-tr-technical-details","page":"Steihaug-Toint TCG Method","title":"Technical details","text":"","category":"section"},{"location":"solvers/truncated_conjugate_gradient_descent/","page":"Steihaug-Toint TCG Method","title":"Steihaug-Toint TCG Method","text":"The trust_regions solver requires the following functions of a manifold to be available","category":"page"},{"location":"solvers/truncated_conjugate_gradient_descent/","page":"Steihaug-Toint TCG Method","title":"Steihaug-Toint TCG Method","text":"if you do not provide a trust_region_radius=, then injectivity_radius on the manifold M is required.\nthe norm as well, to stop when the norm of the gradient is small, but if you implemented inner, the norm is provided already.\nA zero_vector!(M,X,p).\nA copyto!(M, q, p) and copy(M,p) for points.","category":"page"},{"location":"solvers/truncated_conjugate_gradient_descent/#Literature","page":"Steihaug-Toint TCG Method","title":"Literature","text":"","category":"section"},{"location":"solvers/truncated_conjugate_gradient_descent/","page":"Steihaug-Toint TCG Method","title":"Steihaug-Toint TCG Method","text":"P.-A. Absil, C. Baker and K. Gallivan. Trust-Region Methods on Riemannian Manifolds. Foundations of Computational Mathematics 7, 303–330 (2006).\n\n\n\nA. R. Conn, N. I. Gould and P. L. Toint. 
Trust Region Methods (Society for Industrial and Applied Mathematics, 2000).\n\n\n\n","category":"page"},{"location":"solvers/LevenbergMarquardt/#Levenberg-Marquardt","page":"Levenberg–Marquardt","title":"Levenberg-Marquardt","text":"","category":"section"},{"location":"solvers/LevenbergMarquardt/","page":"Levenberg–Marquardt","title":"Levenberg–Marquardt","text":"CurrentModule = Manopt","category":"page"},{"location":"solvers/LevenbergMarquardt/","page":"Levenberg–Marquardt","title":"Levenberg–Marquardt","text":"LevenbergMarquardt\nLevenbergMarquardt!","category":"page"},{"location":"solvers/LevenbergMarquardt/#Manopt.LevenbergMarquardt","page":"Levenberg–Marquardt","title":"Manopt.LevenbergMarquardt","text":"LevenbergMarquardt(M, f, jacobian_f, p, num_components=-1)\nLevenbergMarquardt!(M, f, jacobian_f, p, num_components=-1; kwargs...)\n\nSolve an optimization problem of the form\n\noperatorname*argmin_p mathcal M frac12 lVert f(p) rVert^2\n\nwhere f mathcal M ℝ^d is a continuously differentiable function, using the Riemannian Levenberg-Marquardt algorithm [Pee93]. The implementation follows Algorithm 1 [AOT22]. The second signature performs the optimization in-place of p.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal Mℝ^d\njacobian_f: the Jacobian of f. The Jacobian is supposed to accept a keyword argument basis_domain which specifies the basis of the tangent space at a given point in which the Jacobian is to be calculated. By default it should be the DefaultOrthonormalBasis.\np: a point on the manifold mathcal M\nnum_components: length of the vector returned by the cost function (d). By default its value is -1, which means that it is determined automatically by calling f one additional time. 
This is only possible when evaluation is AllocatingEvaluation, for mutating evaluation this value must be explicitly specified.\n\nThese can also be passed as a NonlinearLeastSquaresObjective, in which case the keyword jacobian_tangent_basis below is ignored.\n\nKeyword arguments\n\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.\nη=0.2: scaling factor for the sufficient cost decrease threshold required to accept new proposal points. Allowed range: 0 < η < 1.\nexpect_zero_residual=false: whether or not the algorithm might expect that the value of the residual (objective) at the minimum is equal to 0.\ndamping_term_min=0.1: initial (and also minimal) value of the damping term\nβ=5.0: parameter by which the damping term is multiplied when the current new point is rejected\ninitial_jacobian_f: the initial Jacobian of the cost function f. By default this is a matrix of size num_components times the manifold dimension of similar type as p.\ninitial_residual_values: the initial residual vector of the cost function f. By default this is a vector of length num_components of similar type as p.\njacobian_tangent_basis: an AbstractBasis to specify the basis of the tangent space for jacobian_f.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstopping_criterion=StopAfterIteration(200)|StopWhenGradientNormLess(1e-12): a functor indicating that the stopping criterion is fulfilled\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. 
To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/LevenbergMarquardt/#Manopt.LevenbergMarquardt!","page":"Levenberg–Marquardt","title":"Manopt.LevenbergMarquardt!","text":"LevenbergMarquardt(M, f, jacobian_f, p, num_components=-1)\nLevenbergMarquardt!(M, f, jacobian_f, p, num_components=-1; kwargs...)\n\nSolve an optimization problem of the form\n\noperatorname*argmin_p mathcal M frac12 lVert f(p) rVert^2\n\nwhere f mathcal M ℝ^d is a continuously differentiable function, using the Riemannian Levenberg-Marquardt algorithm [Pee93]. The implementation follows Algorithm 1 [AOT22]. The second signature performs the optimization in-place of p.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal Mℝ^d\njacobian_f: the Jacobian of f. The Jacobian is supposed to accept a keyword argument basis_domain which specifies the basis of the tangent space at a given point in which the Jacobian is to be calculated. By default it should be the DefaultOrthonormalBasis.\np: a point on the manifold mathcal M\nnum_components: length of the vector returned by the cost function (d). By default its value is -1, which means that it is determined automatically by calling f one additional time. This is only possible when evaluation is AllocatingEvaluation, for mutating evaluation this value must be explicitly specified.\n\nThese can also be passed as a NonlinearLeastSquaresObjective, in which case the keyword jacobian_tangent_basis below is ignored.\n\nKeyword arguments\n\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). 
Since usually the first argument is the manifold, the modified argument is the second.\nη=0.2: scaling factor for the sufficient cost decrease threshold required to accept new proposal points. Allowed range: 0 < η < 1.\nexpect_zero_residual=false: whether or not the algorithm might expect that the value of the residual (objective) at the minimum is equal to 0.\ndamping_term_min=0.1: initial (and also minimal) value of the damping term\nβ=5.0: parameter by which the damping term is multiplied when the current new point is rejected\ninitial_jacobian_f: the initial Jacobian of the cost function f. By default this is a matrix of size num_components times the manifold dimension of similar type as p.\ninitial_residual_values: the initial residual vector of the cost function f. By default this is a vector of length num_components of similar type as p.\njacobian_tangent_basis: an AbstractBasis to specify the basis of the tangent space for jacobian_f.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstopping_criterion=StopAfterIteration(200)|StopWhenGradientNormLess(1e-12): a functor indicating that the stopping criterion is fulfilled\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. 
To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/LevenbergMarquardt/#Options","page":"Levenberg–Marquardt","title":"Options","text":"","category":"section"},{"location":"solvers/LevenbergMarquardt/","page":"Levenberg–Marquardt","title":"Levenberg–Marquardt","text":"LevenbergMarquardtState","category":"page"},{"location":"solvers/LevenbergMarquardt/#Manopt.LevenbergMarquardtState","page":"Levenberg–Marquardt","title":"Manopt.LevenbergMarquardtState","text":"LevenbergMarquardtState{P,T} <: AbstractGradientSolverState\n\nDescribes a gradient-based descent algorithm, with\n\nFields\n\nA default value is given in brackets if a parameter can be left out in initialization.\n\np::P: a point on the manifold mathcal M storing the current iterate\nretraction_method::AbstractRetractionMethod: a retraction operatornameretr to use, see the section on retractions\nresidual_values: value of F calculated in the solver setup or the previous iteration\nresidual_values_temp: value of F for the current proposal point\nstop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled\njacF: the current Jacobian of F\ngradient: the current gradient of F\nstep_vector: the tangent vector at x that is used to move to the next point\nlast_stepsize: length of step_vector\nη: Scaling factor for the sufficient cost decrease threshold required to accept new proposal points. 
Allowed range: 0 < η < 1.\ndamping_term: current value of the damping term\ndamping_term_min: initial (and also minimal) value of the damping term\nβ: parameter by which the damping term is multiplied when the current new point is rejected\nexpect_zero_residual: if true, the algorithm expects that the value of the residual (objective) at minimum is equal to 0.\n\nConstructor\n\nLevenbergMarquardtState(M, initial_residual_values, initial_jacF; kwargs...)\n\nGenerate the Levenberg-Marquardt solver state.\n\nKeyword arguments\n\nThe following fields are keyword arguments\n\nβ=5.0\ndamping_term_min=0.1\nη=0.2,\nexpect_zero_residual=false\ninitial_gradient=zero_vector(M, p)\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstopping_criterion=StopAfterIteration(200)|StopWhenGradientNormLess(1e-12)|StopWhenStepsizeLess(1e-12): a functor indicating that the stopping criterion is fulfilled\n\nSee also\n\ngradient_descent, LevenbergMarquardt\n\n\n\n\n\n","category":"type"},{"location":"solvers/LevenbergMarquardt/#sec-lm-technical-details","page":"Levenberg–Marquardt","title":"Technical details","text":"","category":"section"},{"location":"solvers/LevenbergMarquardt/","page":"Levenberg–Marquardt","title":"Levenberg–Marquardt","text":"The LevenbergMarquardt solver requires the following functions of a manifold to be available","category":"page"},{"location":"solvers/LevenbergMarquardt/","page":"Levenberg–Marquardt","title":"Levenberg–Marquardt","text":"A retract!(M, q, p, X); it is recommended to set the default_retraction_method to a favourite retraction. 
If this default is set, a retraction_method= does not have to be specified.\nthe norm as well, to stop when the norm of the gradient is small, but if you implemented inner, the norm is provided already.\nA copyto!(M, q, p) and copy(M,p) for points.","category":"page"},{"location":"solvers/LevenbergMarquardt/#Literature","page":"Levenberg–Marquardt","title":"Literature","text":"","category":"section"},{"location":"solvers/LevenbergMarquardt/","page":"Levenberg–Marquardt","title":"Levenberg–Marquardt","text":"S. Adachi, T. Okuno and A. Takeda. Riemannian Levenberg-Marquardt Method with Global and Local Convergence Properties. ArXiv Preprint (2022).\n\n\n\nR. Peeters. On a Riemannian version of the Levenberg-Marquardt algorithm. Serie Research Memoranda 0011 (VU University Amsterdam, Faculty of Economics, Business Administration and Econometrics, 1993).\n\n\n\n","category":"page"},{"location":"solvers/exact_penalty_method/#Exact-penalty-method","page":"Exact Penalty Method","title":"Exact penalty method","text":"","category":"section"},{"location":"solvers/exact_penalty_method/","page":"Exact Penalty Method","title":"Exact Penalty Method","text":"CurrentModule = Manopt","category":"page"},{"location":"solvers/exact_penalty_method/","page":"Exact Penalty Method","title":"Exact Penalty Method","text":" exact_penalty_method\n exact_penalty_method!","category":"page"},{"location":"solvers/exact_penalty_method/#Manopt.exact_penalty_method","page":"Exact Penalty Method","title":"Manopt.exact_penalty_method","text":"exact_penalty_method(M, f, grad_f, p=rand(M); kwargs...)\nexact_penalty_method(M, cmo::ConstrainedManifoldObjective, p=rand(M); kwargs...)\nexact_penalty_method!(M, f, grad_f, p; kwargs...)\nexact_penalty_method!(M, cmo::ConstrainedManifoldObjective, p; kwargs...)\n\nperform the exact penalty method (EPM) [LB19]. The aim of the EPM is to find a solution of the constrained optimisation task\n\nbeginaligned\nmin_p mathcal M f(p)\ntextsubject toquadg_i(p) 0 quad 
text for i= 1 m\nquad h_j(p)=0 quad text for j=1n\nendaligned\n\nwhere M is a Riemannian manifold, and f, g_i_i=1^m and h_j_j=1^n are twice continuously differentiable functions from M to ℝ. For that a weighted L_1-penalty term for the violation of the constraints is added to the objective\n\nf(x) + ρbiggl( sum_i=1^m maxbigl0 g_i(x)bigr + sum_j=1^n vert h_j(x)vertbiggr)\n\nwhere ρ0 is the penalty parameter.\n\nSince this is non-smooth, a SmoothingTechnique with parameter u is applied, see the ExactPenaltyCost.\n\nIn every step k of the exact penalty method, the smoothed objective is then minimized over all p mathcal M. Then, the accuracy tolerance ϵ and the smoothing parameter u are updated by setting\n\nϵ^(k)=maxϵ_min θ_ϵ ϵ^(k-1)\n\nwhere ϵ_min is the lowest value ϵ is allowed to become and θ_ϵ (01) is a constant scaling factor, and\n\nu^(k) = max u_min theta_u u^(k-1) \n\nwhere u_min is the lowest value u is allowed to become and θ_u (01) is a constant scaling factor.\n\nFinally, the penalty parameter ρ is updated as\n\nρ^(k) = begincases\nρ^(k-1)θ_ρ textif displaystyle max_j mathcalEi mathcalI Bigl vert h_j(x^(k)) vert g_i(x^(k))Bigr geq u^(k-1) \nρ^(k-1) textelse\nendcases\n\nwhere θ_ρ (01) is a constant scaling factor.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\ngrad_f: the (Riemannian) gradient operatornamegradf: \\mathcal M → T_{p}\\mathcal M of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place\np: a point on the manifold mathcal M\n\nKeyword arguments\n\nif not called with the ConstrainedManifoldObjective cmo\n\ng=nothing: the inequality constraints\nh=nothing: the equality constraints\ngrad_g=nothing: the gradient of the inequality constraints\ngrad_h=nothing: the gradient of the equality constraints\n\nNote that one of the pairs (g, grad_g) or (h, grad_h) has to be provided. 
Otherwise the problem is not constrained and a better solver would be for example quasi_Newton.\n\nFurther keyword arguments\n\nϵ=1e-3: the accuracy tolerance\nϵ_exponent=1/100: exponent of the ϵ update factor;\nϵ_min=1e-6: the lower bound for the accuracy tolerance\nu=1e-1: the smoothing parameter and threshold for violation of the constraints\nu_exponent=1/100: exponent of the u update factor;\nu_min=1e-6: the lower bound for the smoothing parameter and threshold for violation of the constraints\nρ=1.0: the penalty parameter\nequality_constraints=nothing: the number n of equality constraints. If not provided, a call to the gradient of h is performed to estimate these.\ngradient_range=nothing: specify how both gradients of the constraints are represented\ngradient_equality_range=gradient_range: specify how gradients of the equality constraints are represented, see VectorGradientFunction.\ngradient_inequality_range=gradient_range: specify how gradients of the inequality constraints are represented, see VectorGradientFunction.\ninequality_constraints=nothing: the number m of inequality constraints. If not provided, a call to the gradient of g is performed to estimate these.\nmin_stepsize=1e-10: the minimal step size\nsmoothing=LogarithmicSumOfExponentials: a SmoothingTechnique to use\nsub_cost=ExactPenaltyCost(problem, ρ, u; smoothing=smoothing): cost to use in the sub solver. This is used to define the sub_problem= keyword and hence has no effect if you set sub_problem directly.\nsub_grad=ExactPenaltyGrad(problem, ρ, u; smoothing=smoothing): gradient to use in the sub solver. This is used to define the sub_problem= keyword and hence has no effect if you set sub_problem directly.\nsub_kwargs=(;): a named tuple of keyword arguments that are passed to decorate_objective! of the sub solver's objective, the decorate_state! of the sub solver's state, and the sub state constructor itself.\nsub_stopping_criterion=StopAfterIteration(200)|StopWhenGradientNormLess(ϵ)|StopWhenStepsizeLess(1e-10): a stopping criterion for the sub solver. This is used to define the sub_state= keyword and hence has no effect if you set sub_state directly.\nsub_problem=DefaultManoptProblem(M, ManifoldGradientObjective(sub_cost, sub_grad; evaluation=evaluation)): the problem for the sub solver. For a closed form solution, this is the function to evaluate.\nsub_state=QuasiNewtonState: a state to specify the sub solver to use, where a QuasiNewtonLimitedMemoryDirectionUpdate with InverseBFGS is used. For a closed form solution, this indicates the type of function.\nstopping_criterion=StopAfterIteration(300)|(StopWhenSmallerOrEqual(ϵ, ϵ_min)&StopWhenChangeLess(1e-10)): a functor indicating that the stopping criterion is fulfilled\n\nFor the ranges of the constraints' gradient, other power manifold tangent space representations, mainly the ArrayPowerRepresentation can be used if the gradients can be computed more efficiently in that representation.\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. 
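As a minimal usage sketch (not part of the official docstring): the manifold, cost, constraint, and starting point below are illustrative choices, using the Sphere from Manifolds.jl and the g=/grad_g= keywords described above, with the constraint gradients given as a vector of tangent vectors (the default range).

```julia
using Manopt, Manifolds

M = Sphere(2)                       # the unit sphere S² in ℝ³
f(M, p) = p[3]                      # illustrative cost: minimize the last coordinate
grad_f(M, p) = project(M, p, [0.0, 0.0, 1.0])   # Riemannian gradient via tangent projection
g(M, p) = -p                        # inequality constraint g(p) = -p ≤ 0, i.e. p ≥ 0
# one gradient per component constraint g_i(p) = -p_i, each projected to the tangent space
grad_g(M, p) = [project(M, p, [j == i ? -1.0 : 0.0 for j in 1:3]) for i in 1:3]

q = exact_penalty_method(M, f, grad_f, [1.0, 0.0, 0.0]; g=g, grad_g=grad_g)
```

Passing return_state=true instead returns the full ExactPenaltyMethodState, from which q can be recovered via get_solver_result.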
To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/exact_penalty_method/#Manopt.exact_penalty_method!","page":"Exact Penalty Method","title":"Manopt.exact_penalty_method!","text":"exact_penalty_method(M, f, grad_f, p=rand(M); kwargs...)\nexact_penalty_method(M, cmo::ConstrainedManifoldObjective, p=rand(M); kwargs...)\nexact_penalty_method!(M, f, grad_f, p; kwargs...)\nexact_penalty_method!(M, cmo::ConstrainedManifoldObjective, p; kwargs...)\n\nperform the exact penalty method (EPM) [LB19]. The aim of the EPM is to find a solution of the constrained optimisation task\n\nbeginaligned\nmin_p mathcal M f(p)\ntextsubject toquadg_i(p) 0 quad text for i= 1 m\nquad h_j(p)=0 quad text for j=1n\nendaligned\n\nwhere M is a Riemannian manifold, and f, g_i_i=1^m and h_j_j=1^n are twice continuously differentiable functions from M to ℝ. For that a weighted L_1-penalty term for the violation of the constraints is added to the objective\n\nf(x) + ρbiggl( sum_i=1^m maxbigl0 g_i(x)bigr + sum_j=1^n vert h_j(x)vertbiggr)\n\nwhere ρ0 is the penalty parameter.\n\nSince this is non-smooth, a SmoothingTechnique with parameter u is applied, see the ExactPenaltyCost.\n\nIn every step k of the exact penalty method, the smoothed objective is then minimized over all p mathcal M. 
Then, the accuracy tolerance ϵ and the smoothing parameter u are updated by setting\n\nϵ^(k)=maxϵ_min θ_ϵ ϵ^(k-1)\n\nwhere ϵ_min is the lowest value ϵ is allowed to become and θ_ϵ (01) is a constant scaling factor, and\n\nu^(k) = max u_min theta_u u^(k-1) \n\nwhere u_min is the lowest value u is allowed to become and θ_u (01) is a constant scaling factor.\n\nFinally, the penalty parameter ρ is updated as\n\nρ^(k) = begincases\nρ^(k-1)θ_ρ textif displaystyle max_j mathcalEi mathcalI Bigl vert h_j(x^(k)) vert g_i(x^(k))Bigr geq u^(k-1) Bigr) \nρ^(k-1) textelse\nendcases\n\nwhere θ_ρ (01) is a constant scaling factor.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\ngrad_f: the (Riemannian) gradient operatornamegradf: \\mathcal M → T_{p}\\mathcal M of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place\np: a point on the manifold mathcal M\n\nKeyword arguments\n\nif not called with the ConstrainedManifoldObjective cmo\n\ng=nothing: the inequality constraints\nh=nothing: the equality constraints\ngrad_g=nothing: the gradient of the inequality constraints\ngrad_h=nothing: the gradient of the equality constraints\n\nNote that one of the pairs (g, grad_g) or (h, grad_h) has to be provided. Otherwise the problem is not constrained and a better solver would be for example quasi_Newton.\n\nFurther keyword arguments\n\nϵ=1e-3: the accuracy tolerance\nϵ_exponent=1/100: exponent of the ϵ update factor;\nϵ_min=1e-6: the lower bound for the accuracy tolerance\nu=1e-1: the smoothing parameter and threshold for violation of the constraints\nu_exponent=1/100: exponent of the u update factor;\nu_min=1e-6: the lower bound for the smoothing parameter and threshold for violation of the constraints\nρ=1.0: the penalty parameter\nequality_constraints=nothing: the number n of equality constraints. 
If not provided, a call to the gradient of h is performed to estimate these.\ngradient_range=nothing: specify how both gradients of the constraints are represented\ngradient_equality_range=gradient_range: specify how gradients of the equality constraints are represented, see VectorGradientFunction.\ngradient_inequality_range=gradient_range: specify how gradients of the inequality constraints are represented, see VectorGradientFunction.\ninequality_constraints=nothing: the number m of inequality constraints. If not provided, a call to the gradient of g is performed to estimate these.\nmin_stepsize=1e-10: the minimal step size\nsmoothing=LogarithmicSumOfExponentials: a SmoothingTechnique to use\nsub_cost=ExactPenaltyCost(problem, ρ, u; smoothing=smoothing): cost to use in the sub solver. This is used to define the sub_problem= keyword and hence has no effect if you set sub_problem directly.\nsub_grad=ExactPenaltyGrad(problem, ρ, u; smoothing=smoothing): gradient to use in the sub solver. This is used to define the sub_problem= keyword and hence has no effect if you set sub_problem directly.\nsub_kwargs=(;): a named tuple of keyword arguments that are passed to decorate_objective! of the sub solver's objective, the decorate_state! of the sub solver's state, and the sub state constructor itself.\nsub_stopping_criterion=StopAfterIteration(200)|StopWhenGradientNormLess(ϵ)|StopWhenStepsizeLess(1e-10): a stopping criterion for the sub solver. This is used to define the sub_state= keyword and hence has no effect if you set sub_state directly.\nsub_problem=DefaultManoptProblem(M, ManifoldGradientObjective(sub_cost, sub_grad; evaluation=evaluation)): the problem for the sub solver. For a closed form solution, this is the function to evaluate.\nsub_state=QuasiNewtonState: a state to specify the sub solver to use, where a QuasiNewtonLimitedMemoryDirectionUpdate with InverseBFGS is used. For a closed form solution, this indicates the type of function.\nstopping_criterion=StopAfterIteration(300)|(StopWhenSmallerOrEqual(ϵ, ϵ_min)&StopWhenChangeLess(1e-10)): a functor indicating that the stopping criterion is fulfilled\n\nFor the ranges of the constraints' gradient, other power manifold tangent space representations, mainly the ArrayPowerRepresentation can be used if the gradients can be computed more efficiently in that representation.\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/exact_penalty_method/#State","page":"Exact Penalty Method","title":"State","text":"","category":"section"},{"location":"solvers/exact_penalty_method/","page":"Exact Penalty Method","title":"Exact Penalty Method","text":"ExactPenaltyMethodState","category":"page"},{"location":"solvers/exact_penalty_method/#Manopt.ExactPenaltyMethodState","page":"Exact Penalty Method","title":"Manopt.ExactPenaltyMethodState","text":"ExactPenaltyMethodState{P,T} <: AbstractManoptSolverState\n\nDescribes the exact penalty method, with\n\nFields\n\nϵ: the accuracy tolerance\nϵ_min: the lower bound for the accuracy tolerance\np::P: a point on the manifold mathcal M storing the current iterate\nρ: the penalty parameter\nsub_problem::Union{AbstractManoptProblem, F}: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.\nsub_state::Union{AbstractManoptSolverState, F}: a state to specify the sub solver to use. 
For a closed form solution, this indicates the type of function.\nstop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled\nu: the smoothing parameter and threshold for violation of the constraints\nu_min: the lower bound for the smoothing parameter and threshold for violation of the constraints\nθ_ϵ: the scaling factor of the tolerance parameter\nθ_ρ: the scaling factor of the penalty parameter\nθ_u: the scaling factor of the smoothing parameter\n\nConstructor\n\nExactPenaltyMethodState(M::AbstractManifold, sub_problem, sub_state; kwargs...)\n\nconstruct the exact penalty state.\n\nExactPenaltyMethodState(M::AbstractManifold, sub_problem;\n evaluation=AllocatingEvaluation(), kwargs...\n)\n\nconstruct the exact penalty state, where sub_problem is a closed form solution with evaluation as type of evaluation.\n\nKeyword arguments\n\nϵ=1e-3\nϵ_min=1e-6\nϵ_exponent=1 / 100: a shortcut for the scaling factor θ_ϵ\nθ_ϵ=(ϵ_min / ϵ)^(ϵ_exponent)\nu=1e-1\nu_min=1e-6\nu_exponent=1 / 100: a shortcut for the scaling factor θ_u.\nθ_u=(u_min / u)^(u_exponent)\np=rand(M): a point on the manifold mathcal M to specify the initial value\nρ=1.0\nθ_ρ=0.3\nstopping_criterion=StopAfterIteration(300)|(StopWhenSmallerOrEqual(:ϵ, ϵ_min)&StopWhenChangeLess(1e-10)): a functor indicating that the stopping criterion is fulfilled\n\nSee also\n\nexact_penalty_method\n\n\n\n\n\n","category":"type"},{"location":"solvers/exact_penalty_method/#Helping-functions","page":"Exact Penalty Method","title":"Helping functions","text":"","category":"section"},{"location":"solvers/exact_penalty_method/","page":"Exact Penalty Method","title":"Exact Penalty Method","text":"ExactPenaltyCost\nExactPenaltyGrad\nSmoothingTechnique\nLinearQuadraticHuber\nLogarithmicSumOfExponentials","category":"page"},{"location":"solvers/exact_penalty_method/#Manopt.ExactPenaltyCost","page":"Exact Penalty Method","title":"Manopt.ExactPenaltyCost","text":"ExactPenaltyCost{S, Pr, R}\n\nRepresent the 
cost of the exact penalty method based on a ConstrainedManifoldObjective co and a parameter ρ given by\n\nf(p) + ρBigl(\n sum_i=1^m max0g_i(p) + sum_j=1^n lvert h_j(p)rvert\nBigr)\n\nwhere an additional parameter u is used as well as a smoothing technique, for example LogarithmicSumOfExponentials or LinearQuadraticHuber to obtain a smooth cost function. This struct is also a functor (M,p) -> v of the cost v.\n\nFields\n\nρ, u: as described in the mathematical formula above.\nco: the original cost\n\nConstructor\n\nExactPenaltyCost(co::ConstrainedManifoldObjective, ρ, u; smoothing=LinearQuadraticHuber())\n\n\n\n\n\n","category":"type"},{"location":"solvers/exact_penalty_method/#Manopt.ExactPenaltyGrad","page":"Exact Penalty Method","title":"Manopt.ExactPenaltyGrad","text":"ExactPenaltyGrad{S, CO, R}\n\nRepresent the gradient of the ExactPenaltyCost based on a ConstrainedManifoldObjective co and a parameter ρ and a smoothing technique, which uses an additional parameter u.\n\nThis struct is also a functor in both formats\n\n(M, p) -> X to compute the gradient in an allocating fashion.\n(M, X, p) -> X to compute the gradient in-place.\n\nFields\n\nρ, u as stated before\nco the nonsmooth objective\n\nConstructor\n\nExactPenaltyGrad(co::ConstrainedManifoldObjective, ρ, u; smoothing=LinearQuadraticHuber())\n\n\n\n\n\n","category":"type"},{"location":"solvers/exact_penalty_method/#Manopt.SmoothingTechnique","page":"Exact Penalty Method","title":"Manopt.SmoothingTechnique","text":"abstract type SmoothingTechnique\n\nSpecify a smoothing technique, see for example ExactPenaltyCost and ExactPenaltyGrad.\n\n\n\n\n\n","category":"type"},{"location":"solvers/exact_penalty_method/#Manopt.LinearQuadraticHuber","page":"Exact Penalty Method","title":"Manopt.LinearQuadraticHuber","text":"LinearQuadraticHuber <: SmoothingTechnique\n\nSpecify a smoothing based on max0x mathcal P(xu) for some u, where\n\nmathcal P(x u) = begincases\n 0 text if x leq 0\n fracx^22u text if 0 leq x leq 
u\n x-fracu2 text if x geq u\nendcases\n\n\n\n\n\n","category":"type"},{"location":"solvers/exact_penalty_method/#Manopt.LogarithmicSumOfExponentials","page":"Exact Penalty Method","title":"Manopt.LogarithmicSumOfExponentials","text":"LogarithmicSumOfExponentials <: SmoothingTechnique\n\nSpecify a smoothing based on maxab u log(mathrme^fracau+mathrme^fracbu) for some u.\n\n\n\n\n\n","category":"type"},{"location":"solvers/exact_penalty_method/#sec-dr-technical-details","page":"Exact Penalty Method","title":"Technical details","text":"","category":"section"},{"location":"solvers/exact_penalty_method/","page":"Exact Penalty Method","title":"Exact Penalty Method","text":"The exact_penalty_method solver requires the following functions of a manifold to be available","category":"page"},{"location":"solvers/exact_penalty_method/","page":"Exact Penalty Method","title":"Exact Penalty Method","text":"A copyto!(M, q, p) and copy(M, p) for points.\nEverything the subsolver requires, which by default is the quasi_Newton method\nA zero_vector(M,p).","category":"page"},{"location":"solvers/exact_penalty_method/","page":"Exact Penalty Method","title":"Exact Penalty Method","text":"The stopping criteria involves StopWhenChangeLess and StopWhenGradientNormLess which require","category":"page"},{"location":"solvers/exact_penalty_method/","page":"Exact Penalty Method","title":"Exact Penalty Method","text":"An inverse_retract!(M, X, p, q); it is recommended to set the default_inverse_retraction_method to a favourite retraction. 
If this default is set, an inverse_retraction_method= or inverse_retraction_method_dual= (for mathcal N) does not have to be specified or the distance(M, p, q) for said default inverse retraction.\nthe norm as well, to stop when the norm of the gradient is small, but if you implemented inner, the norm is provided already.","category":"page"},{"location":"solvers/exact_penalty_method/#Literature","page":"Exact Penalty Method","title":"Literature","text":"","category":"section"},{"location":"solvers/exact_penalty_method/","page":"Exact Penalty Method","title":"Exact Penalty Method","text":"C. Liu and N. Boumal. Simple algorithms for optimization on Riemannian manifolds with constraints. Applied Mathematics & Optimization (2019), arXiv:1901.10000.\n\n\n\n","category":"page"},{"location":"plans/#sec-plan","page":"Specify a Solver","title":"Plans for solvers","text":"","category":"section"},{"location":"plans/","page":"Specify a Solver","title":"Specify a Solver","text":"CurrentModule = Manopt","category":"page"},{"location":"plans/","page":"Specify a Solver","title":"Specify a Solver","text":"For any optimisation performed in Manopt.jl information is required about both the optimisation task or “problem” at hand as well as the solver and all its parameters. This together is called a plan in Manopt.jl and it consists of two data structures:","category":"page"},{"location":"plans/","page":"Specify a Solver","title":"Specify a Solver","text":"The Manopt Problem describes all static data of a task, most prominently the manifold and the objective.\nThe Solver State describes all varying data and parameters for the solver that is used. 
This also means that each solver has its own data structure for the state.","category":"page"},{"location":"plans/","page":"Specify a Solver","title":"Specify a Solver","text":"By splitting these two parts, one problem can be defined and then be solved using different solvers.","category":"page"},{"location":"plans/","page":"Specify a Solver","title":"Specify a Solver","text":"Still there might be the need to set certain parameters within any of these structures. For that there is","category":"page"},{"location":"plans/","page":"Specify a Solver","title":"Specify a Solver","text":"set_parameter!\nget_parameter\nManopt.status_summary","category":"page"},{"location":"plans/#Manopt.set_parameter!","page":"Specify a Solver","title":"Manopt.set_parameter!","text":"set_parameter!(f, element::Symbol, args...)\n\nFor any f and a Symbol e, dispatch on its value, by default, to set some args... in f or one of its sub elements.\n\n\n\n\n\nset_parameter!(element::Symbol, value::Union{String,Bool,<:Number})\n\nSet global Manopt parameters addressed by a symbol element. This first dispatches on the value of element.\n\nThe parameters are stored to the global settings using Preferences.jl.\n\nPassing a value of \"\" deletes the corresponding entry from the preferences. Whenever the LocalPreferences.toml is modified, this is also issued as an @info.\n\n\n\n\n\nset_parameter!(amo::AbstractManifoldObjective, element::Symbol, args...)\n\nSet certain args... from the AbstractManifoldObjective amo to value. This function should dispatch on Val(element).\n\nCurrently supported\n\n:Cost passes to the get_cost_function\n:Gradient passes to the get_gradient_function\n\n\n\n\n\nset_parameter!(ams::AbstractManoptProblem, element::Symbol, field::Symbol, value)\n\nSet a certain field/element from the AbstractManoptProblem ams to value. This function usually dispatches on Val(element). 
Instead of a single field, also a chain of elements can be provided, allowing access to encapsulated parts of the problem.\n\nMain values for element are :Manifold and :Objective.\n\n\n\n\n\nset_parameter!(ams::DebugSolverState, ::Val{:Debug}, args...)\n\nSet certain values specified by args... into the elements of the debugDictionary\n\n\n\n\n\nset_parameter!(ams::RecordSolverState, ::Val{:Record}, args...)\n\nSet certain values specified by args... into the elements of the recordDictionary\n\n\n\n\n\nset_parameter!(c::StopAfter, :MaxTime, v::Period)\n\nUpdate the time period after which an algorithm shall stop.\n\n\n\n\n\nset_parameter!(c::StopAfterIteration, :MaxIteration, v::Int)\n\nUpdate the number of iterations after which the algorithm should stop.\n\n\n\n\n\nset_parameter!(c::StopWhenChangeLess, :MinIterateChange, v::Int)\n\nUpdate the minimal change below which an algorithm shall stop.\n\n\n\n\n\nset_parameter!(c::StopWhenCostLess, :MinCost, v)\n\nUpdate the minimal cost below which the algorithm shall stop\n\n\n\n\n\nset_parameter!(c::StopWhenEntryChangeLess, :Threshold, v)\n\nUpdate the threshold for the entry change below which the algorithm shall stop\n\n\n\n\n\nset_parameter!(c::StopWhenGradientChangeLess, :MinGradientChange, v)\n\nUpdate the minimal change below which an algorithm shall stop.\n\n\n\n\n\nset_parameter!(c::StopWhenGradientNormLess, :MinGradNorm, v::Float64)\n\nUpdate the minimal gradient norm when an algorithm shall stop\n\n\n\n\n\nset_parameter!(c::StopWhenStepsizeLess, :MinStepsize, v)\n\nUpdate the minimal step size below which the algorithm shall stop\n\n\n\n\n\nset_parameter!(c::StopWhenSubgradientNormLess, :MinSubgradNorm, v::Float64)\n\nUpdate the minimal subgradient norm when an algorithm shall stop\n\n\n\n\n\nset_parameter!(ams::AbstractManoptSolverState, element::Symbol, args...)\n\nSet a certain field or semantic element from the AbstractManoptSolverState ams to value. 
This function passes to Val(element) and specific setters should dispatch on Val{element}.\n\nBy default, this function just does nothing.\n\n\n\n\n\nset_parameter!(ams::DebugSolverState, ::Val{:SubProblem}, args...)\n\nSet certain values specified by args... to the sub problem.\n\n\n\n\n\nset_parameter!(ams::DebugSolverState, ::Val{:SubState}, args...)\n\nSet certain values specified by args... to the sub state.\n\n\n\n\n\nset_parameter!(c::StopWhenResidualIsReducedByFactorOrPower, :ResidualPower, v)\n\nUpdate the residual power θ to v.\n\n\n\n\n\nset_parameter!(c::StopWhenResidualIsReducedByFactorOrPower, :ResidualFactor, v)\n\nUpdate the residual factor κ to v.\n\n\n\n\n\n","category":"function"},{"location":"plans/#Manopt.get_parameter","page":"Specify a Solver","title":"Manopt.get_parameter","text":"get_parameter(f, element::Symbol, args...)\n\nAccess arbitrary parameters from f addressed by a symbol element.\n\nFor any f and a Symbol e, dispatch on its value by default to get some element from f potentially further qualified by args....\n\nThis function returns nothing if f does not have the property element.\n\n\n\n\n\nget_parameter(element::Symbol; default=nothing)\n\nAccess global Manopt parameters addressed by a symbol element. This first dispatches on the value of element.\n\nIf the value is not set, default is returned.\n\nThe parameters are queried from the global settings using Preferences.jl, so they are persistent within your activated Environment.\n\nCurrently used settings\n\n:Mode the mode can be set to \"Tutorial\" to get several hints especially in scenarios, where the optimisation on manifolds is different from the usual “experience” in (classical, Euclidean) optimization. 
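For illustration, the global mode preference can be toggled as follows; both calls are described above, and the value persists via Preferences.jl in the active environment (a usage sketch, not part of the docstring):

```julia
using Manopt

set_parameter!(:Mode, "Tutorial")    # activate tutorial hints
get_parameter(:Mode)                 # now returns "Tutorial"
set_parameter!(:Mode, "")            # passing "" deletes the entry again
```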
Any other value has the same effect as not setting it.\n\n\n\n\n\n","category":"function"},{"location":"plans/#Manopt.status_summary","page":"Specify a Solver","title":"Manopt.status_summary","text":"status_summary(e)\n\nReturn a string reporting about the current status of e, where e is a type from Manopt.\n\nThis method is similar to show but just returns a string. It might also be more verbose in explaining, or hide internal information.\n\n\n\n\n\n","category":"function"},{"location":"plans/","page":"Specify a Solver","title":"Specify a Solver","text":"The following symbols are used.","category":"page"},{"location":"plans/","page":"Specify a Solver","title":"Specify a Solver","text":"Symbol Used in Description\n:Activity DebugWhenActive activity of the debug action stored within\n:Basepoint TangentSpace the point the tangent space is at\n:Cost generic the cost function (within an objective, as pass down)\n:Debug DebugSolverState the stored debugDictionary\n:Gradient generic the gradient function (within an objective, as pass down)\n:Iterate generic the (current) iterate, similar to set_iterate!, within a state\n:Manifold generic the manifold (within a problem, as pass down)\n:Objective generic the objective (within a problem, as pass down)\n:SubProblem generic the sub problem (within a state, as pass down)\n:SubState generic the sub state (within a state, as pass down)\n:λ ProximalDCCost, ProximalDCGrad set the proximal parameter within the proximal sub objective elements\n:Population ParticleSwarmState a certain population of points, for example particle_swarm's swarm\n:Record RecordSolverState \n:TrustRegionRadius TrustRegionsState the trust region radius, equivalent to :σ\n:ρ, :u ExactPenaltyCost, ExactPenaltyGrad Parameters within the exact penalty objective\n:ρ, :μ, :λ AugmentedLagrangianCost, AugmentedLagrangianGrad Parameters of the Lagrangian function\n:p, :X LinearizedDCCost, LinearizedDCGrad Parameters within the linearized functional used for the sub problem of the difference of convex algorithm","category":"page"},{"location":"plans/","page":"Specify a Solver","title":"Specify a Solver","text":"Any other lower case name or letter as well as single upper case letters access fields of the corresponding first argument. For example :p could be used to access the field s.p of a state. This is often where the iterate is stored, so the recommended way is to use :Iterate from before.","category":"page"},{"location":"plans/","page":"Specify a Solver","title":"Specify a Solver","text":"Since the iterate is often stored in the state's field s.p, one can often also access the iterate with :p and similarly the gradient with :X. This is discouraged both for readability and to stay more generic, and it is recommended to use :Iterate and :Gradient instead in generic settings.","category":"page"},{"location":"plans/","page":"Specify a Solver","title":"Specify a Solver","text":"You can further activate a “Tutorial” mode by set_parameter!(:Mode, \"Tutorial\"). Internally, the following convenience function is available.","category":"page"},{"location":"plans/","page":"Specify a Solver","title":"Specify a Solver","text":"Manopt.is_tutorial_mode","category":"page"},{"location":"plans/#Manopt.is_tutorial_mode","page":"Specify a Solver","title":"Manopt.is_tutorial_mode","text":"is_tutorial_mode()\n\nA small internal helper to indicate whether tutorial mode is active.\n\nYou can set the mode by calling set_parameter!(:Mode, \"Tutorial\") or deactivate it by set_parameter!(:Mode, \"\").\n\n\n\n\n\n","category":"function"},{"location":"plans/#A-factory-for-providing-manifold-defaults","page":"Specify a Solver","title":"A factory for providing manifold defaults","text":"","category":"section"},{"location":"plans/","page":"Specify a Solver","title":"Specify a Solver","text":"In several cases a manifold might not yet be known at the time a (keyword) argument should be provided. 
Therefore, any type with a manifold default can be wrapped into a factory.","category":"page"},{"location":"plans/","page":"Specify a Solver","title":"Specify a Solver","text":"Manopt.ManifoldDefaultsFactory\nManopt._produce_type","category":"page"},{"location":"plans/#Manopt.ManifoldDefaultsFactory","page":"Specify a Solver","title":"Manopt.ManifoldDefaultsFactory","text":"ManifoldDefaultsFactory{M,T,A,K}\n\nA generic factory to postpone the instantiation of certain types from within Manopt.jl, in order to be able to adapt it to defaults from different manifolds and/or postpone the decision on which manifold to use to a later point.\n\nFor now this is established for\n\nDirectionUpdateRules (TODO: WIP)\nStepsize (TODO: WIP)\nStoppingCriterion (TODO: WIP)\n\nThis factory stores necessary and optional parameters as well as keyword arguments provided by the user to later produce the type this factory is for.\n\nBesides a manifold as a fallback, the factory can also be used for the (maybe simpler) types from the list of types that do not require the manifold.\n\nFields\n\nM::Union{Nothing,AbstractManifold}: provide a manifold for defaults\nargs::A: arguments (args...) that are passed to the type constructor\nkwargs::K: keyword arguments (kwargs...) that are passed to the type constructor\nconstructor_requires_manifold::Bool: indicate whether the type constructor requires the manifold or not\n\nConstructor\n\nManifoldDefaultsFactory(T, args...; kwargs...)\nManifoldDefaultsFactory(T, M, args...; kwargs...)\n\nInput\n\nT a subtype of types listed above that this factory is to produce\nM (optional) a manifold used for the defaults in case no manifold is provided.\nargs... arguments to pass to the constructor of T\nkwargs... 
keyword arguments to pass (overwrite) when constructing T.\n\nKeyword arguments\n\nrequires_manifold=true: indicate whether the type constructor this factory wraps requires the manifold as first argument or not.\n\nAll other keyword arguments are internally stored to be used in the type constructor\n\nas well as arguments and keyword arguments for the update rule.\n\nsee also\n\n_produce_type\n\n\n\n\n\n","category":"type"},{"location":"plans/#Manopt._produce_type","page":"Specify a Solver","title":"Manopt._produce_type","text":"_produce_type(t::T, M::AbstractManifold)\n_produce_type(t::ManifoldDefaultsFactory{T}, M::AbstractManifold)\n\nUse the ManifoldDefaultsFactory{T} to produce an instance of type T. This acts transparently, in the sense that if you provide an instance t::T already, this will just be returned.\n\n\n\n\n\n","category":"function"},{"location":"tutorials/ConstrainedOptimization/#How-to-do-constrained-optimization","page":"Do constrained optimization","title":"How to do constrained optimization","text":"","category":"section"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"Ronny Bergmann","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"This tutorial is a short introduction to using solvers for constrained optimisation in Manopt.jl.","category":"page"},{"location":"tutorials/ConstrainedOptimization/#Introduction","page":"Do constrained optimization","title":"Introduction","text":"","category":"section"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"A constrained optimisation problem is given by","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained 
optimization","text":"tagP\nbeginalign*\noperatorname*argmin_pmathcal M f(p)\ntextsuch that quad g(p) leq 0\nquad h(p) = 0\nendalign*","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"where f mathcal M ℝ is a cost function, and g mathcal M ℝ^m and h mathcal M ℝ^n are the inequality and equality constraints, respectively. The leq and = in (P) are meant element-wise.","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"This can be seen as a balance between moving constraints into the geometry of a manifold mathcal M and keeping some, since they can be handled well in algorithms, see [BH19], [LB19] for details.","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"using Distributions, LinearAlgebra, Manifolds, Manopt, Random\nRandom.seed!(42);","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"In this tutorial we want to look at different ways to specify the problem and its implications. We start with specifying an example problem to illustrate the different available forms.","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"We consider the problem of a Nonnegative PCA, cf. 
Section 5.1.2 in [LB19]","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"let v_0 ℝ^d, lVert v_0 rVert=1 be given spike signal, that is a signal that is sparse with only s=lfloor δd rfloor nonzero entries.","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"Z = sqrtσ v_0v_0^mathrmT+N","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"where sigma is a signal-to-noise ratio and N is a matrix with random entries, where the diagonal entries are distributed with zero mean and standard deviation 1d on the off-diagonals and 2d on the diagonal","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"d = 150; # dimension of v0\nσ = 0.1^2; # SNR\nδ = 0.1; sp = Int(floor(δ * d)); # Sparsity\nS = sample(1:d, sp; replace=false);\nv0 = [i ∈ S ? 
1 / sqrt(sp) : 0.0 for i in 1:d];\nN = rand(Normal(0, 1 / d), (d, d)); N[diagind(N, 0)] .= rand(Normal(0, 2 / d), d);\nZ = sqrt(σ) * v0 * transpose(v0) + N;","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"In order to recover v_0 we consider the constrained optimisation problem on the sphere mathcal S^d-1 given by","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"beginalign*\noperatorname*argmin_pmathcal S^d-1 -p^mathrmTZp\ntextsuch that quad p geq 0\nendalign*","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"or in the previous notation f(p) = -p^mathrmTZp and g(p) = -p. We first initialize the manifold under consideration","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"M = Sphere(d - 1)","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"Sphere(149, ℝ)","category":"page"},{"location":"tutorials/ConstrainedOptimization/#A-first-augmented-Lagrangian-run","page":"Do constrained optimization","title":"A first augmented Lagrangian run","text":"","category":"section"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"We first define f and g as usual functions","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"f(M, p) = -transpose(p) * Z * p;\ng(M, p) = -p;","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained 
optimization","title":"Do constrained optimization","text":"since f is a functions defined in the embedding ℝ^d as well, we obtain its gradient by projection.","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"grad_f(M, p) = project(M, p, -transpose(Z) * p - Z * p);","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"For the constraints this is a little more involved, since each function g_i=g(p)_i=p_i has to return its own gradient. These are again in the embedding just operatornamegrad g_i(p) = -e_i the i th unit vector. We can project these again onto the tangent space at p:","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"grad_g(M, p) = project.(\n Ref(M), Ref(p), [[i == j ? -1.0 : 0.0 for j in 1:d] for i in 1:d]\n);","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"We further start in a random point:","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"p0 = rand(M);","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"Let’s verify a few things for the initial point","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"f(M, p0)","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained 
optimization","text":"0.005667399180991248","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"How much the function g is positive","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"maximum(g(M, p0))","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"0.17885478285466855","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"Now as a first method we can just call the Augmented Lagrangian Method with a simple call:","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"@time v1 = augmented_Lagrangian_method(\n M, f, grad_f, p0; g=g, grad_g=grad_g,\n debug=[:Iteration, :Cost, :Stop, \" | \", (:Change, \"Δp : %1.5e\"), 20, \"\\n\"],\n stopping_criterion = StopAfterIteration(300) | (\n StopWhenSmallerOrEqual(:ϵ, 1e-5) & StopWhenChangeLess(M, 1e-8)\n )\n);","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"Initial f(x): 0.005667 | \n# 20 f(x): -0.123557 | Δp : 1.00133e+00\n# 40 f(x): -0.123557 | Δp : 3.77088e-08\n# 60 f(x): -0.123557 | Δp : 2.40619e-05\nThe value of the variable (ϵ) is smaller than or equal to its threshold (1.0e-5).\nAt iteration 68 the algorithm performed a step with a change (7.600544776224794e-11) less than 9.77237220955808e-6.\n 6.361862 seconds (18.72 M allocations: 1.484 GiB, 5.99% gc time, 97.60% compilation time)","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained 
optimization","text":"Now we have both a lower function value and the point is nearly within the constraints, namely up to numerical inaccuracies","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"f(M, v1)","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"-0.12353580883894738","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"maximum( g(M, v1) )","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"4.577229036010474e-12","category":"page"},{"location":"tutorials/ConstrainedOptimization/#A-faster-augmented-Lagrangian-run","page":"Do constrained optimization","title":"A faster augmented Lagrangian run","text":"","category":"section"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"Now this is a little slow, so we can modify two things:","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"Gradients should be evaluated in place, so for example","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"grad_f!(M, X, p) = project!(M, X, p, -transpose(Z) * p - Z * p);","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"The constraints are currently always evaluated all together, since the function grad_g always returns a vector of gradients. We first change the constraints function into a vector of functions. 
We further change the gradient both into a vector of gradient functions operatornamegrad g_ii=1ldotsd, as well as gradients that are computed in place.","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"g2 = [(M, p) -> -p[i] for i in 1:d];\ngrad_g2! = [\n (M, X, p) -> project!(M, X, p, [i == j ? -1.0 : 0.0 for j in 1:d]) for i in 1:d\n];","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"We obtain","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"@time v2 = augmented_Lagrangian_method(\n M, f, grad_f!, p0; g=g2, grad_g=grad_g2!, evaluation=InplaceEvaluation(),\n debug=[:Iteration, :Cost, :Stop, \" | \", (:Change, \"Δp : %1.5e\"), 20, \"\\n\"],\n stopping_criterion = StopAfterIteration(300) | (\n StopWhenSmallerOrEqual(:ϵ, 1e-5) & StopWhenChangeLess(M, 1e-8)\n )\n );","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"Initial f(x): 0.005667 | \n# 20 f(x): -0.123557 | Δp : 1.00133e+00\n# 40 f(x): -0.123557 | Δp : 3.77088e-08\n# 60 f(x): -0.123557 | Δp : 2.40619e-05\nThe value of the variable (ϵ) is smaller than or equal to its threshold (1.0e-5).\nAt iteration 68 the algorithm performed a step with a change (7.600544776224794e-11) less than 9.77237220955808e-6.\n 2.529631 seconds (7.30 M allocations: 743.027 MiB, 3.27% gc time, 95.00% compilation time)","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"As a technical remark: note that (by default) the change to InplaceEvaluations affects both the constrained solver as well as the inner solver of the subproblem in each 
iteration.","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"f(M, v2)","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"-0.12353580883894738","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"maximum(g(M, v2))","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"4.577229036010474e-12","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"These are the very similar to the previous values but the solver took much less time and less memory allocations.","category":"page"},{"location":"tutorials/ConstrainedOptimization/#Exact-penalty-method","page":"Do constrained optimization","title":"Exact penalty method","text":"","category":"section"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"As a second solver, we have the Exact Penalty Method, which currently is available with two smoothing variants, which make an inner solver for smooth optimization, that is by default again [quasi Newton] possible: LogarithmicSumOfExponentials and LinearQuadraticHuber. We compare both here as well. 
The first smoothing technique is the default, so we can just call","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"@time v3 = exact_penalty_method(\n M, f, grad_f!, p0; g=g2, grad_g=grad_g2!, evaluation=InplaceEvaluation(),\n debug=[:Iteration, :Cost, :Stop, \" | \", :Change, 50, \"\\n\"],\n);","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"Initial f(x): 0.005667 | \n# 50 f(x): -0.122792 | Last Change: 0.982159\n# 100 f(x): -0.123555 | Last Change: 0.013515\nThe value of the variable (ϵ) is smaller than or equal to its threshold (1.0e-6).\nAt iteration 102 the algorithm performed a step with a change (3.0244885037602495e-7) less than 1.0e-6.\n 2.873317 seconds (14.51 M allocations: 4.764 GiB, 9.05% gc time, 65.31% compilation time)","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"We obtain a similar cost value as for the Augmented Lagrangian Solver from before, but here the constraint is actually fulfilled and not just numerically “on the boundary”.","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"f(M, v3)","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"-0.12355544268449432","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"maximum(g(M, v3))","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained 
optimization","text":"-3.589798060999793e-6","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"The second smoothing technique is often beneficial, when we have a lot of constraints (in the previously mentioned vectorial manner), since we can avoid several gradient evaluations for the constraint functions here. This leads to a faster iteration time.","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"@time v4 = exact_penalty_method(\n M, f, grad_f!, p0; g=g2, grad_g=grad_g2!,\n evaluation=InplaceEvaluation(),\n smoothing=LinearQuadraticHuber(),\n debug=[:Iteration, :Cost, :Stop, \" | \", :Change, 50, \"\\n\"],\n);","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"Initial f(x): 0.005667 | \n# 50 f(x): -0.123559 | Last Change: 0.008024\n# 100 f(x): -0.123557 | Last Change: 0.000026\nThe value of the variable (ϵ) is smaller than or equal to its threshold (1.0e-6).\nAt iteration 101 the algorithm performed a step with a change (1.0069976577931588e-8) less than 1.0e-6.\n 2.168971 seconds (9.44 M allocations: 2.176 GiB, 6.07% gc time, 83.55% compilation time)","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"For the result we see the same behaviour as for the other smoothing.","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"f(M, v4)","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained 
optimization","text":"-0.12355667846565418","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"maximum(g(M, v4))","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"2.6974802196316014e-8","category":"page"},{"location":"tutorials/ConstrainedOptimization/#Comparing-to-the-unconstrained-solver","page":"Do constrained optimization","title":"Comparing to the unconstrained solver","text":"","category":"section"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"We can compare this to the global optimum on the sphere, which is the unconstrained optimisation problem, where we can just use Quasi Newton.","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"Note that this is much faster, since every iteration of the algorithm does a quasi-Newton call as well.","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"@time w1 = quasi_Newton(\n M, f, grad_f!, p0; evaluation=InplaceEvaluation()\n);","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":" 0.743936 seconds (1.92 M allocations: 115.373 MiB, 2.10% gc time, 99.02% compilation time)","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"f(M, w1)","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained 
optimization","text":"-0.13990874034056555","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"But for sure here the constraints here are not fulfilled and we have quite positive entries in g(w_1)","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"maximum(g(M, w1))","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"0.11803200739746737","category":"page"},{"location":"tutorials/ConstrainedOptimization/#Technical-details","page":"Do constrained optimization","title":"Technical details","text":"","category":"section"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"This tutorial is cached. It was last run on the following package versions.","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"using Pkg\nPkg.status()","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"Status `~/work/Manopt.jl/Manopt.jl/tutorials/Project.toml`\n [6e4b80f9] BenchmarkTools v1.5.0\n⌅ [5ae59095] Colors v0.12.11\n [31c24e10] Distributions v0.25.113\n [26cc04aa] FiniteDifferences v0.12.32\n [7073ff75] IJulia v1.26.0\n [8ac3fa9e] LRUCache v1.6.1\n [af67fdf4] ManifoldDiff v0.3.13\n [1cead3c2] Manifolds v0.10.7\n [3362f125] ManifoldsBase v0.15.22\n [0fc0a36d] Manopt v0.5.3 `~/work/Manopt.jl/Manopt.jl`\n [91a5bcdd] Plots v1.40.9\n [731186ca] RecursiveArrayTools v3.27.4\nInfo Packages marked with ⌅ have new versions available but compatibility constraints restrict them from upgrading. 
To see why use `status --outdated`","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"using Dates\nnow()","category":"page"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"2024-11-21T20:36:34.360","category":"page"},{"location":"tutorials/ConstrainedOptimization/#Literature","page":"Do constrained optimization","title":"Literature","text":"","category":"section"},{"location":"tutorials/ConstrainedOptimization/","page":"Do constrained optimization","title":"Do constrained optimization","text":"R. Bergmann and R. Herzog. Intrinsic formulation of KKT conditions and constraint qualifications on smooth manifolds. SIAM Journal on Optimization 29, 2423–2444 (2019), arXiv:1804.06214.\n\n\n\nC. Liu and N. Boumal. Simple algorithms for optimization on Riemannian manifolds with constraints. Applied Mathematics & Optimization (2019), arXiv:1901.10000.\n\n\n\n","category":"page"},{"location":"helpers/exports/#sec-exports","page":"Exports","title":"Exports","text":"","category":"section"},{"location":"helpers/exports/","page":"Exports","title":"Exports","text":"Exports aim to provide a consistent generation of images of your results. For example, if you record the trace your algorithm walks on the Sphere, you can easily export this trace to a rendered image using asymptote_export_S2_signals and render the result with Asymptote. 
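As a hypothetical sketch of that export-and-render workflow (it assumes Manopt and Colors are loaded, a working Asymptote installation, and an arbitrary filename; the point values are made up):

```julia
# Sketch: export a few recorded points on the 2-sphere to an Asymptote
# file and render it to a PNG (requires an Asymptote installation).
using Manopt, Colors

pts = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]  # some points on 𝕊²
asymptote_export_S2_signals(
    "result.asy";
    points=[pts],                                 # one set of points …
    colors=Dict(:points => [RGBA(0.0, 0.0, 1.0, 1.0)]),  # … with one color
)
render_asymptote("result.asy"; render=2, format="png")
```

The keyword names follow the docstrings below; for several point sets, pass one color per set.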
Besides these, you can always record values during your iterations, and export these, for example to csv.","category":"page"},{"location":"helpers/exports/#Asymptote","page":"Exports","title":"Asymptote","text":"","category":"section"},{"location":"helpers/exports/","page":"Exports","title":"Exports","text":"The following functions provide exports of both graphics and raw data using Asymptote.","category":"page"},{"location":"helpers/exports/","page":"Exports","title":"Exports","text":"Modules = [Manopt]\nPages = [\"Asymptote.jl\"]","category":"page"},{"location":"helpers/exports/#Manopt.asymptote_export_S2_data-Tuple{String}","page":"Exports","title":"Manopt.asymptote_export_S2_data","text":"asymptote_export_S2_data(filename)\n\nExport given data as an array of points on the 2-sphere, which might be one-, two- or three-dimensional data with points on the Sphere mathbb S^2.\n\nInput\n\nfilename a file to store the Asymptote code in.\n\nOptional arguments for the data\n\ndata a point representing the 1D, 2D, or 3D array of points\nelevation_color_scheme A ColorScheme for elevation\nscale_axes=(1/3,1/3,1/3): move spheres closer to each other by a factor per direction\n\nOptional arguments for asymptote\n\narrow_head_size=1.8: size of the arrowheads of the vectors (in mm)\ncamera_position position of the camera scene (default: atop the center of the data in the xy-plane)\ntarget position the camera points at (default: center of xy-plane within data).\n\n\n\n\n\n","category":"method"},{"location":"helpers/exports/#Manopt.asymptote_export_S2_signals-Tuple{String}","page":"Exports","title":"Manopt.asymptote_export_S2_signals","text":"asymptote_export_S2_signals(filename; points, curves, tangent_vectors, colors, kwargs...)\n\nExport given points, curves, and tangent_vectors on the sphere mathbb S^2 to Asymptote.\n\nInput\n\nfilename a file to store the Asymptote code in.\n\nKeyword arguments for the data\n\ncolors=Dict{Symbol,Array{RGBA{Float64},1}}(): dictionary of 
color arrays, indexed by symbols :points, :curves and :tvector, where each entry has to provide at least as many colors as the length of the corresponding sets.\ncurves=Array{Array{Float64,1},1}(undef, 0): an Array of Arrays of points on the sphere, where each inner array is interpreted as a curve and is accompanied by an entry within colors.\npoints=Array{Array{Float64,1},1}(undef, 0): an Array of Arrays of points on the sphere where each inner array is interpreted as a set of points and is accompanied by an entry within colors.\ntangent_vectors=Array{Array{Tuple{Float64,Float64},1},1}(undef, 0): an Array of Arrays of tuples, where the first is a point, the second a tangent vector, and each set of vectors is accompanied by an entry from within colors.\n\nKeyword arguments for asymptote\n\narrow_head_size=6.0: size of the arrowheads of the tangent vectors\narrow_head_sizes overrides the previous value to specify a value per tVector set.\ncamera_position=(1., 1., 0.): position of the camera in the Asymptote scene\nline_width=1.0: size of the lines used to draw the curves.\nline_widths overrides the previous value to specify a value per curve and tVector set.\ndot_size=1.0: size of the dots used to draw the points.\ndot_sizes overrides the previous value to specify a value per point set.\nsize=nothing: a tuple for the image size, otherwise a relative size 4cm is used.\nsphere_color=RGBA{Float64}(0.85, 0.85, 0.85, 0.6): color of the sphere the data is drawn on\nsphere_line_color=RGBA{Float64}(0.75, 0.75, 0.75, 0.6): color of the lines on the sphere\nsphere_line_width=0.5: line width of the lines on the sphere\ntarget=(0.,0.,0.): position the camera points at\n\n\n\n\n\n","category":"method"},{"location":"helpers/exports/#Manopt.asymptote_export_SPD-Tuple{String}","page":"Exports","title":"Manopt.asymptote_export_SPD","text":"asymptote_export_SPD(filename)\n\nExport given data as a point on a Power(SymmetricPositiveDefinite(3)) manifold of one-, two- or 
three-dimensional data with points on the manifold of symmetric positive definite matrices.\n\nInput\n\nfilename a file to store the Asymptote code in.\n\nOptional arguments for the data\n\ndata a point representing the 1D, 2D, or 3D array of SPD matrices\ncolor_scheme a ColorScheme for Geometric Anisotropy Index\nscale_axes=(1/3,1/3,1/3): move symmetric positive definite matrices closer to each other by a factor per direction compared to the distance estimated by the maximal eigenvalue of all involved SPD points\n\nOptional arguments for asymptote\n\ncamera_position position of the camera scene (default: atop the center of the data in the xy-plane)\ntarget position the camera points at (default: center of xy-plane within data).\n\nBoth values camera_position and target are scaled by scaledAxes*EW, where EW is the maximal eigenvalue in the data.\n\n\n\n\n\n","category":"method"},{"location":"helpers/exports/#Manopt.render_asymptote-Tuple{Any}","page":"Exports","title":"Manopt.render_asymptote","text":"render_asymptote(filename; render=4, format=\"png\", ...)\n\nrender an exported asymptote file specified in the filename, which can also be given as a relative or full path\n\nInput\n\nfilename filename of the exported asy and rendered image\n\nKeyword arguments\n\nthe default values are given in brackets\n\nrender=4: render level of asymptote passed to its -render option. 
This can be removed from the command by setting it to nothing.\nformat=\"png\": final rendered format passed to the -f option\nexport_file: (the filename with format as ending) specify the export filename\n\n\n\n\n\n","category":"method"},{"location":"plans/problem/#sec-problem","page":"Problem","title":"A Manopt problem","text":"","category":"section"},{"location":"plans/problem/","page":"Problem","title":"Problem","text":"CurrentModule = Manopt","category":"page"},{"location":"plans/problem/","page":"Problem","title":"Problem","text":"A problem describes all static data of an optimisation task and has as a super type","category":"page"},{"location":"plans/problem/","page":"Problem","title":"Problem","text":"AbstractManoptProblem\nget_objective\nget_manifold","category":"page"},{"location":"plans/problem/#Manopt.AbstractManoptProblem","page":"Problem","title":"Manopt.AbstractManoptProblem","text":"AbstractManoptProblem{M<:AbstractManifold}\n\nDescribe a Riemannian optimization problem with all static (not-changing) properties.\n\nThe most prominent features that should always be stated here are\n\nthe AbstractManifold mathcal M\nthe cost function f mathcal M ℝ\n\nUsually the cost should be within an AbstractManifoldObjective.\n\n\n\n\n\n","category":"type"},{"location":"plans/problem/#Manopt.get_objective","page":"Problem","title":"Manopt.get_objective","text":"get_objective(o::AbstractManifoldObjective, recursive=true)\n\nreturn the (one step) undecorated AbstractManifoldObjective of the (possibly) decorated o. As long as your decorated objective stores the objective within o.objective and the dispatch_objective_decorator is set to Val{true}, the internal state are extracted automatically.\n\nBy default the objective that is stored within a decorated objective is assumed to be at o.objective. 
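The decorator unwrapping described here can be illustrated generically; the types below are hypothetical stand-ins for the actual Manopt objectives and decorators:

```julia
# Generic sketch of one-step vs. recursive unwrapping of a decorated objective.
abstract type AbstractObj end
struct CoreObj <: AbstractObj end              # the undecorated objective
struct CountDecorator <: AbstractObj           # a decorator storing o.objective
    objective::AbstractObj
end

undecorate(o::AbstractObj, recursive::Bool=true) = o          # nothing to unwrap
function undecorate(o::CountDecorator, recursive::Bool=true)
    return recursive ? undecorate(o.objective) : o.objective  # all steps or one
end

o = CountDecorator(CountDecorator(CoreObj()))
undecorate(o, false)  # removes only the outermost decorator
undecorate(o)         # unwraps all the way down to CoreObj()
```

Dispatch on the concrete decorator type plays the role that dispatch_objective_decorator plays in Manopt.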
Overwrite _get_objective(o, ::Val{true}, recursive) to change this behaviour for your objective o for both the recursive and the direct case.\n\nIf recursive is set to false, only the outermost decorator is removed instead of all.\n\n\n\n\n\nget_objective(mp::AbstractManoptProblem, recursive=false)\n\nreturn the objective AbstractManifoldObjective stored within an AbstractManoptProblem. If recursive is set to true, it additionally unwraps all decorators of the objective\n\n\n\n\n\nget_objective(amso::AbstractManifoldSubObjective)\n\nReturn the (original) objective that the sub objective is built on.\n\n\n\n\n\n","category":"function"},{"location":"plans/problem/#Manopt.get_manifold","page":"Problem","title":"Manopt.get_manifold","text":"get_manifold(amp::AbstractManoptProblem)\n\nreturn the manifold stored within an AbstractManoptProblem\n\n\n\n\n\n","category":"function"},{"location":"plans/problem/","page":"Problem","title":"Problem","text":"Usually, such a problem is determined by the manifold or domain of the optimisation and the objective with all its properties used within an algorithm, see The Objective. For that one can just use","category":"page"},{"location":"plans/problem/","page":"Problem","title":"Problem","text":"DefaultManoptProblem","category":"page"},{"location":"plans/problem/#Manopt.DefaultManoptProblem","page":"Problem","title":"Manopt.DefaultManoptProblem","text":"DefaultManoptProblem{TM <: AbstractManifold, Objective <: AbstractManifoldObjective}\n\nModel a default manifold problem, that (just) consists of the domain of optimisation, that is an AbstractManifold and an AbstractManifoldObjective\n\n\n\n\n\n","category":"type"},{"location":"plans/problem/","page":"Problem","title":"Problem","text":"For constrained optimisation, there are different possibilities to represent the gradients of the constraints. 
This can be done with a","category":"page"},{"location":"plans/problem/","page":"Problem","title":"Problem","text":"ConstraintProblem","category":"page"},{"location":"plans/problem/","page":"Problem","title":"Problem","text":"The primal dual-based solvers (Chambolle-Pock and the PD Semi-smooth Newton), both need two manifolds as their domains, hence there also exists a","category":"page"},{"location":"plans/problem/","page":"Problem","title":"Problem","text":"TwoManifoldProblem","category":"page"},{"location":"plans/problem/#Manopt.TwoManifoldProblem","page":"Problem","title":"Manopt.TwoManifoldProblem","text":"TwoManifoldProblem{\n MT<:AbstractManifold,NT<:AbstractManifold,O<:AbstractManifoldObjective\n} <: AbstractManoptProblem{MT}\n\nAn abstract type for primal-dual-based problems.\n\n\n\n\n\n","category":"type"},{"location":"plans/problem/","page":"Problem","title":"Problem","text":"From the two ingredients here, you can find more information about","category":"page"},{"location":"plans/problem/","page":"Problem","title":"Problem","text":"the ManifoldsBase.AbstractManifold in ManifoldsBase.jl\nthe AbstractManifoldObjective on the page about the objective.","category":"page"},{"location":"solvers/quasi_Newton/#Riemannian-quasi-Newton-methods","page":"Quasi-Newton","title":"Riemannian quasi-Newton methods","text":"","category":"section"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":" CurrentModule = Manopt","category":"page"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":" quasi_Newton\n quasi_Newton!","category":"page"},{"location":"solvers/quasi_Newton/#Manopt.quasi_Newton","page":"Quasi-Newton","title":"Manopt.quasi_Newton","text":"quasi_Newton(M, f, grad_f, p; kwargs...)\nquasi_Newton!(M, f, grad_f, p; kwargs...)\n\nPerform a quasi Newton iteration to solve\n\noperatornameargmin_p mathcal M f(p)\n\nwith start point p. The iterations can be done in-place of p=p^(0). 
The kth iteration consists of\n\nCompute the search direction η^(k) = -mathcal B_k operatornamegradf (p^(k)) or solve mathcal H_k η^(k) = -operatornamegradf (p^(k)).\nDetermine a suitable stepsize α_k along the curve γ(α) = R_p^(k)(α η^(k)), usually by using WolfePowellLinesearch.\nCompute p^(k+1) = R_p^(k)(α_k η^(k)).\nDefine s_k = mathcal T_p^(k) α_k η^(k)(α_k η^(k)) and y_k = operatornamegradf(p^(k+1)) - mathcal T_p^(k) α_k η^(k)(operatornamegradf(p^(k))), where mathcal T denotes a vector transport.\nCompute the new approximate Hessian H_k+1 or its inverse B_k+1.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\ngrad_f: the (Riemannian) gradient operatornamegradf: \\mathcal M → T_{p}\\mathcal M of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place\np: a point on the manifold mathcal M\n\nKeyword arguments\n\nbasis=DefaultOrthonormalBasis(): basis to use within each of the tangent spaces to represent the Hessian (inverse) for the cases where it is stored in full (matrix) form.\ncautious_update=false: whether or not to use the QuasiNewtonCautiousDirectionUpdate which wraps the direction_update.\ncautious_function=(x) -> x * 1e-4: a monotone increasing function for the cautious update that is zero at x=0 and strictly increasing at 0\ndirection_update=InverseBFGS(): the AbstractQuasiNewtonUpdateRule to use.\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). 
Since usually the first argument is the manifold, the modified argument is the second. For example grad_f(M,p) allocates, but grad_f!(M, X, p) computes the result in-place of X.\ninitial_operator= initial_scale*Matrix{Float64}(I, n, n): initial matrix to use in case the Hessian (inverse) approximation is stored as a full matrix, that is n=manifold_dimension(M). This matrix is only allocated for the full matrix case. See also initial_scale.\ninitial_scale=1.0: scale of the initial s to use with fracss_ky_k_p_klVert y_krVert_p_k in the computation of the limited memory approach. See also initial_operator.\nmemory_size=20: limited memory, number of s_k y_k to store. Set to a negative value to use a full memory (matrix) representation\nnondescent_direction_behavior=:reinitialize_direction_update: specify how a non-descent direction is handled. This can be\n:step_towards_negative_gradient: the direction is replaced with the negative gradient, a message is stored.\n:ignore: the verification is not performed, so any computed direction is accepted. No message is stored.\n:reinitialize_direction_update: discards operator state stored in direction update rules.\nany other value performs the verification, keeps the direction, but stores a message.\nA stored message can be displayed using DebugMessages.\nproject!=copyto!: for numerical stability it is possible to project onto the tangent space after every iteration. 
The function has to work in-place of Y, that is (M, Y, p, X) -> Y, where X and Y can be the same memory.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstepsize=WolfePowellLinesearch(retraction_method, vector_transport_method): a functor inheriting from Stepsize to determine a step size\nstopping_criterion=StopAfterIteration(max(1000, memory_size))|StopWhenGradientNormLess(1e-6): a functor indicating that the stopping criterion is fulfilled\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/quasi_Newton/#Manopt.quasi_Newton!","page":"Quasi-Newton","title":"Manopt.quasi_Newton!","text":"quasi_Newton(M, f, grad_f, p; kwargs...)\nquasi_Newton!(M, f, grad_f, p; kwargs...)\n\nPerform a quasi-Newton iteration to solve\n\noperatornameargmin_p mathcal M f(p)\n\nwith start point p. The iterations can be done in-place of p=p^(0). 
The kth iteration consists of\n\nCompute the search direction η^(k) = -mathcal B_k operatornamegradf (p^(k)) or solve mathcal H_k η^(k) = -operatornamegradf (p^(k)).\nDetermine a suitable stepsize α_k along the curve γ(α) = R_p^(k)(α η^(k)), usually by using WolfePowellLinesearch.\nCompute p^(k+1) = R_p^(k)(α_k η^(k)).\nDefine s_k = mathcal T_p^(k) α_k η^(k)(α_k η^(k)) and y_k = operatornamegradf(p^(k+1)) - mathcal T_p^(k) α_k η^(k)(operatornamegradf(p^(k))), where mathcal T denotes a vector transport.\nCompute the new approximate Hessian H_k+1 or its inverse B_k+1.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\ngrad_f: the (Riemannian) gradient operatornamegradf: \\mathcal M → T_{p}\\mathcal M of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place\np: a point on the manifold mathcal M\n\nKeyword arguments\n\nbasis=DefaultOrthonormalBasis(): basis to use within each of the tangent spaces to represent the Hessian (inverse) for the cases where it is stored in full (matrix) form.\ncautious_update=false: whether or not to use the QuasiNewtonCautiousDirectionUpdate which wraps the direction_update.\ncautious_function=(x) -> x * 1e-4: a monotone increasing function for the cautious update that is zero at x=0 and strictly increasing at 0\ndirection_update=InverseBFGS(): the AbstractQuasiNewtonUpdateRule to use.\nevaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). 
Since usually the first argument is the manifold, the modified argument is the second. For example grad_f(M,p) allocates, but grad_f!(M, X, p) computes the result in-place of X.\ninitial_operator= initial_scale*Matrix{Float64}(I, n, n): initial matrix to use in case the Hessian (inverse) approximation is stored as a full matrix, that is n=manifold_dimension(M). This matrix is only allocated for the full matrix case. See also initial_scale.\ninitial_scale=1.0: scale of the initial s to use with fracss_ky_k_p_klVert y_krVert_p_k in the computation of the limited memory approach. See also initial_operator.\nmemory_size=20: limited memory, number of s_k y_k to store. Set to a negative value to use a full memory (matrix) representation\nnondescent_direction_behavior=:reinitialize_direction_update: specify how a non-descent direction is handled. This can be\n:step_towards_negative_gradient: the direction is replaced with the negative gradient, a message is stored.\n:ignore: the verification is not performed, so any computed direction is accepted. No message is stored.\n:reinitialize_direction_update: discards operator state stored in direction update rules.\nany other value performs the verification, keeps the direction, but stores a message.\nA stored message can be displayed using DebugMessages.\nproject!=copyto!: for numerical stability it is possible to project onto the tangent space after every iteration. 
The function has to work in-place of Y, that is (M, Y, p, X) -> Y, where X and Y can be the same memory.\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstepsize=WolfePowellLinesearch(retraction_method, vector_transport_method): a functor inheriting from Stepsize to determine a step size\nstopping_criterion=StopAfterIteration(max(1000, memory_size))|StopWhenGradientNormLess(1e-6): a functor indicating that the stopping criterion is fulfilled\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/quasi_Newton/#Background","page":"Quasi-Newton","title":"Background","text":"","category":"section"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"The aim is to minimize a real-valued function on a Riemannian manifold, that is","category":"page"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"min f(x) quad x mathcalM","category":"page"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"Riemannian quasi-Newton methods are, as generalizations of their Euclidean counterparts, Riemannian line search methods. These methods determine a search direction η_k T_x_k mathcalM at the current iterate x_k and a suitable stepsize α_k along γ(α) = R_x_k(α η_k), where R T mathcalM mathcalM is a retraction. 
The next iterate is obtained by","category":"page"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"x_k+1 = R_x_k(α_k η_k)","category":"page"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"In quasi-Newton methods, the search direction is given by","category":"page"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"η_k = -mathcalH_k^-1operatornamegradf (x_k) = -mathcalB_k operatornamegradf (x_k)","category":"page"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"where mathcalH_k T_x_k mathcalM T_x_k mathcalM is a positive definite self-adjoint operator, which approximates the action of the Hessian operatornameHess f (x_k) and mathcalB_k = mathcalH_k^-1. The idea of quasi-Newton methods is, instead of creating a completely new approximation of the Hessian operator operatornameHess f(x_k+1) or its inverse at every iteration, to update the previous operator mathcalH_k or mathcalB_k by a convenient formula using the obtained information about the curvature of the objective function during the iteration. The resulting operator mathcalH_k+1 or mathcalB_k+1 acts on the tangent space T_x_k+1 mathcalM of the freshly computed iterate x_k+1. In order to get a well-defined method, the following requirements are placed on the new operator mathcalH_k+1 or mathcalB_k+1 that is created by an update. Since the Hessian operatornameHess f(x_k+1) is a self-adjoint operator on the tangent space T_x_k+1 mathcalM, and mathcalH_k+1 approximates it, one requirement is that mathcalH_k+1 or mathcalB_k+1 is also self-adjoint on T_x_k+1 mathcalM. In order to achieve a steady descent, the next requirement is that η_k is a descent direction in each iteration. Hence a further requirement is that mathcalH_k+1 or mathcalB_k+1 is a positive definite operator on T_x_k+1 mathcalM. 
In order to get information about the curvature of the objective function into the new operator mathcalH_k+1 or mathcalB_k+1, the last requirement is a form of a Riemannian quasi-Newton equation:","category":"page"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"mathcalH_k+1 T_x_k rightarrow x_k+1(R_x_k^-1(x_k+1)) = operatornamegradf(x_k+1) - T_x_k rightarrow x_k+1(operatornamegradf(x_k))","category":"page"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"or","category":"page"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"mathcalB_k+1 operatornamegradf(x_k+1) - T_x_k rightarrow x_k+1(operatornamegradf(x_k)) = T_x_k rightarrow x_k+1(R_x_k^-1(x_k+1))","category":"page"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"where T_x_k rightarrow x_k+1 T_x_k mathcalM T_x_k+1 mathcalM and the chosen retraction R is the associated retraction of T. Note that, of course, not all updates in all situations meet these conditions in every iteration. For specific quasi-Newton updates, the fulfilment of the Riemannian curvature condition, which requires that","category":"page"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"g_x_k+1(s_k y_k) 0","category":"page"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"holds, is a requirement for the inheritance of the self-adjointness and positive definiteness of the mathcalH_k or mathcalB_k to the operator mathcalH_k+1 or mathcalB_k+1. Unfortunately, the fulfilment of the Riemannian curvature condition is not given by a step size α_k 0 that satisfies the generalized Wolfe conditions. 
However, to create a positive definite operator mathcalH_k+1 or mathcalB_k+1 in each iteration, the so-called locking condition was introduced in [HGA15], which requires that the isometric vector transport T^S, which is used in the update formula, and its associated retraction R fulfil","category":"page"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"T^Sx ξ_x(ξ_x) = β T^Rx ξ_x(ξ_x) quad β = fraclVert ξ_x rVert_xlVert T^Rx ξ_x(ξ_x) rVert_R_x(ξ_x)","category":"page"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"where T^R is the vector transport by differentiated retraction. With the requirement that the isometric vector transport T^S and its associated retraction R satisfy the locking condition and using the tangent vector","category":"page"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"y_k = β_k^-1 operatornamegradf(x_k+1) - T^Sx_k α_k η_k(operatornamegradf(x_k))","category":"page"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"where","category":"page"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"β_k = fraclVert α_k η_k rVert_x_klVert T^Rx_k α_k η_k(α_k η_k) rVert_x_k+1","category":"page"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"in the update, it can be shown that choosing a stepsize α_k 0 that satisfies the Riemannian Wolfe conditions leads to the fulfilment of the Riemannian curvature condition, which in turn implies that the operator generated by the updates is positive definite. 
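To make the role of the curvature condition concrete, here is a minimal Euclidean sketch in plain Python (illustration only; Manopt.jl works with tangent-space coordinates in Julia, and all names here are hypothetical): one inverse-BFGS step, which preserves symmetry and positive definiteness exactly because the curvature condition s·y > 0 holds.

```python
# Euclidean inverse-BFGS step (sketch): B <- (I - rho s y^T) B (I - rho y s^T) + rho s s^T
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def bfgs_inverse_update(B, s, y):
    # requires the curvature condition dot(s, y) > 0, so rho is well defined and positive
    n = len(s)
    rho = 1.0 / dot(s, y)
    V = [[(1.0 if i == j else 0.0) - rho * s[i] * y[j] for j in range(n)] for i in range(n)]
    VB = [[sum(V[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]
    # V B V^T + rho s s^T: a congruence plus a rank-1 term, hence again SPD
    return [[sum(VB[i][k] * V[j][k] for k in range(n)) + rho * s[i] * s[j]
             for j in range(n)] for i in range(n)]

B = [[1.0, 0.0], [0.0, 1.0]]
s, y = [1.0, 0.0], [0.5, 0.2]      # dot(s, y) = 0.5 > 0: curvature condition holds
B1 = bfgs_inverse_update(B, s, y)  # stays symmetric positive definite
```

The Riemannian variants above add vector transports and the β-scaling on top of this algebraic core.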
In the following, the specific operators are denoted in matrix notation and hence written as H_k and B_k, respectively.","category":"page"},{"location":"solvers/quasi_Newton/#Direction-updates","page":"Quasi-Newton","title":"Direction updates","text":"","category":"section"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"There are different ways to compute a fixed AbstractQuasiNewtonUpdateRule. In general these are represented by","category":"page"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"AbstractQuasiNewtonDirectionUpdate\nQuasiNewtonMatrixDirectionUpdate\nQuasiNewtonLimitedMemoryDirectionUpdate\nQuasiNewtonCautiousDirectionUpdate\nManopt.initialize_update!","category":"page"},{"location":"solvers/quasi_Newton/#Manopt.AbstractQuasiNewtonDirectionUpdate","page":"Quasi-Newton","title":"Manopt.AbstractQuasiNewtonDirectionUpdate","text":"AbstractQuasiNewtonDirectionUpdate\n\nAn abstract representation of a quasi-Newton update rule to determine the next direction given the current QuasiNewtonState.\n\nAll subtypes should be functors; they should be callable as H(M,x,d) to compute a new direction update.\n\n\n\n\n\n","category":"type"},{"location":"solvers/quasi_Newton/#Manopt.QuasiNewtonMatrixDirectionUpdate","page":"Quasi-Newton","title":"Manopt.QuasiNewtonMatrixDirectionUpdate","text":"QuasiNewtonMatrixDirectionUpdate <: AbstractQuasiNewtonDirectionUpdate\n\nThe QuasiNewtonMatrixDirectionUpdate represents a quasi-Newton update rule, where the operator is stored as a matrix. A distinction is made between the update of the approximation of the Hessian, H_k mapsto H_k+1, and the update of the approximation of the Hessian inverse, B_k mapsto B_k+1. 
For the first case, the coordinates of the search direction η_k with respect to a basis b_i_i=1^n are determined by solving a linear system of equations\n\ntextSolve quad H_k hatη_k = - widehatoperatornamegradf(x_k)\n\nwhere H_k is the matrix representing the operator with respect to the basis b_i_i=1^n and widehatoperatornamegradf(x_k) represents the coordinates of the gradient of the objective function f in x_k with respect to the basis b_i_i=1^n. If a method is chosen where the Hessian inverse is approximated, the coordinates of the search direction η_k with respect to a basis b_i_i=1^n are obtained simply by matrix-vector multiplication\n\nhatη_k = - B_k widehatoperatornamegradf(x_k)\n\nwhere B_k is the matrix representing the operator with respect to the basis b_i_i=1^n and widehatoperatornamegradf(x_k) the coordinates of the gradient. In the end, the search direction η_k is generated from the coordinates hatη_k and the vectors of the basis b_i_i=1^n in both variants. The AbstractQuasiNewtonUpdateRule indicates which quasi-Newton update rule is used. 
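The two variants can be sketched in coordinates with a few lines of plain Python (illustration only, hypothetical names): with an approximate Hessian inverse B_k the direction is a plain matrix-vector product, while with an approximate Hessian H_k one would solve the linear system H_k η = -g instead.

```python
# direction coordinates in the inverse case: eta_hat = -B g_hat
def direction_from_inverse(B, g_hat):
    # one matrix-vector product per iteration, no linear solve needed
    return [-sum(Bij * gj for Bij, gj in zip(row, g_hat)) for row in B]

eta_hat = direction_from_inverse([[2.0, 0.0], [0.0, 1.0]], [1.0, 3.0])
```

This is why inverse-type rules such as InverseBFGS are often preferred in practice: they avoid the linear solve the direct Hessian approximation requires.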
In all of them, the Euclidean update formula is used to generate the matrix H_k+1 or B_k+1, and the basis b_i_i=1^n is transported into the upcoming tangent space T_p_k+1 mathcal M, preferably with an isometric vector transport, or generated there.\n\nProvided functors\n\n(mp::AbstractManoptProblem, st::QuasiNewtonState) -> η to compute the update direction\n(η, mp::AbstractManoptProblem, st::QuasiNewtonState) -> η to compute the update direction in-place of η\n\nFields\n\nbasis: an AbstractBasis to use in the tangent spaces\nmatrix: the matrix which represents the approximating operator.\ninitial_scale: when initialising the update, a unit matrix is used as initial approximation, scaled by this factor\nupdate: an AbstractQuasiNewtonUpdateRule.\nvector_transport_method::AbstractVectorTransportMethodP: a vector transport mathcal T_ to use, see the section on vector transports\n\nConstructor\n\nQuasiNewtonMatrixDirectionUpdate(\n M::AbstractManifold,\n update,\n basis::B=DefaultOrthonormalBasis(),\n m=Matrix{Float64}(I, manifold_dimension(M), manifold_dimension(M));\n kwargs...\n)\n\nKeyword arguments\n\ninitial_scale=1.0\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\n\nGenerate the update rule with defaults from a manifold and the names corresponding to the fields.\n\nSee also\n\nQuasiNewtonLimitedMemoryDirectionUpdate, QuasiNewtonCautiousDirectionUpdate, AbstractQuasiNewtonDirectionUpdate\n\n\n\n\n\n","category":"type"},{"location":"solvers/quasi_Newton/#Manopt.QuasiNewtonLimitedMemoryDirectionUpdate","page":"Quasi-Newton","title":"Manopt.QuasiNewtonLimitedMemoryDirectionUpdate","text":"QuasiNewtonLimitedMemoryDirectionUpdate <: AbstractQuasiNewtonDirectionUpdate\n\nThis AbstractQuasiNewtonDirectionUpdate represents the limited-memory Riemannian BFGS update, where the approximating operator is represented by m stored pairs of tangent vectors 
widehats_i_i=k-m^k-1 and widehaty_i_i=k-m^k-1 in the k-th iteration. For the calculation of the search direction, the generalisation of the two-loop recursion is used (see [HGA15]), since it only requires inner products and linear combinations of tangent vectors in T_p_k mathcal M. For that, the stored pairs of tangent vectors widehats_i, widehaty_i, the gradient operatornamegradf(p_k) of the objective function f in p_k, and the positive definite self-adjoint operator\n\nmathcalB^(0)_k = fracg_p_k(s_k-1 y_k-1)g_p_k(y_k-1 y_k-1) mathrmid_T_p_k mathcalM\n\nare used. The two-loop recursion can be understood as executing the InverseBFGS update m times in a row on mathcal B^(0)_k using the tangent vectors widehats_i widehaty_i, while at the same time directly applying the resulting operator mathcal B^LRBFGS_k to operatornamegradf(x_k). When updating there are two cases: if there is still free memory, k m, the previously stored vector pairs widehats_i widehaty_i have to be transported into the upcoming tangent space T_p_k+1mathcal M. If there is no free memory, the oldest pair widehats_i widehaty_i has to be discarded and then all the remaining vector pairs widehats_i widehaty_i are transported into the tangent space T_p_k+1mathcal M. After that the new values s_k = widehats_k = T^S_x_k α_k η_k(α_k η_k) and y_k = widehaty_k are stored at the beginning. 
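The two-loop recursion itself can be sketched in a few lines of plain Python (Euclidean illustration only, hypothetical names): it produces -B_k g from the m stored pairs using nothing but inner products and linear combinations, which is exactly what makes it cheap on a manifold.

```python
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def lbfgs_direction(g, S, Y, gamma):
    # S, Y hold the stored pairs, ordered from oldest to newest
    q = list(g)
    rhos = [1.0 / dot(s, y) for s, y in zip(S, Y)]
    alphas = []
    for s, y, rho in reversed(list(zip(S, Y, rhos))):   # first loop: newest to oldest
        a = rho * dot(s, q)
        alphas.append(a)
        q = [qi - a * yi for qi, yi in zip(q, y)]
    r = [gamma * qi for qi in q]                        # apply B(0) = gamma * id
    for (s, y, rho), a in zip(zip(S, Y, rhos), reversed(alphas)):  # second loop: oldest to newest
        b = rho * dot(y, r)
        r = [ri + (a - b) * si for ri, si in zip(r, s)]
    return [-ri for ri in r]                            # search direction -B_k g

d = lbfgs_direction([1.0, 1.0], [[1.0, 0.0]], [[0.5, 0.2]], 1.0)
```

With a single stored pair this reproduces the result of one explicit InverseBFGS update applied to the gradient; the Riemannian version additionally transports the stored pairs between tangent spaces.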
This process ensures that new information about the objective function is always included and the old, probably no longer relevant, information is discarded.\n\nProvided functors\n\n(mp::AbstractManoptProblem, st::QuasiNewtonState) -> η to compute the update direction\n(η, mp::AbstractManoptProblem, st::QuasiNewtonState) -> η to compute the update direction in-place of η\n\nFields\n\nmemory_s: the set of the stored (and transported) search directions times step size widehats_i_i=k-m^k-1.\nmemory_y: set of the stored gradient differences widehaty_i_i=k-m^k-1.\nξ: a variable used in the two-loop recursion.\nρ: a variable used in the two-loop recursion.\ninitial_scale: initial scaling of the Hessian\nvector_transport_method::AbstractVectorTransportMethodP: a vector transport mathcal T_ to use, see the section on vector transports\nmessage: a string containing a potential warning that might have appeared\nproject!: a function to stabilize the update by projecting on the tangent space\n\nConstructor\n\nQuasiNewtonLimitedMemoryDirectionUpdate(\n M::AbstractManifold,\n x,\n update::AbstractQuasiNewtonUpdateRule,\n memory_size;\n initial_vector=zero_vector(M,x),\n initial_scale::Real=1.0,\n project!=copyto!\n)\n\nSee also\n\nInverseBFGS, QuasiNewtonCautiousDirectionUpdate, AbstractQuasiNewtonDirectionUpdate\n\n\n\n\n\n","category":"type"},{"location":"solvers/quasi_Newton/#Manopt.QuasiNewtonCautiousDirectionUpdate","page":"Quasi-Newton","title":"Manopt.QuasiNewtonCautiousDirectionUpdate","text":"QuasiNewtonCautiousDirectionUpdate <: AbstractQuasiNewtonDirectionUpdate\n\nThese AbstractQuasiNewtonDirectionUpdates represent any quasi-Newton update rule based on the idea of a so-called cautious update. 
The search direction is calculated as given in QuasiNewtonMatrixDirectionUpdate or QuasiNewtonLimitedMemoryDirectionUpdate, but the update is then only executed if\n\nfracg_x_k+1(y_ks_k)lVert s_k rVert^2_x_k+1 θ(lVert operatornamegradf(x_k) rVert_x_k)\n\nis satisfied, where θ is a monotone increasing function satisfying θ(0) = 0 and θ is strictly increasing at 0. If this is not the case, the corresponding update is skipped, which means that for QuasiNewtonMatrixDirectionUpdate the matrix H_k or B_k is not updated. The basis b_i^n_i=1 is nevertheless transported into the upcoming tangent space T_x_k+1 mathcalM, and for QuasiNewtonLimitedMemoryDirectionUpdate neither the oldest vector pair widetildes_km widetildey_km is discarded nor the newest vector pair widetildes_k widetildey_k is added into storage, but all stored vector pairs widetildes_i widetildey_i_i=k-m^k-1 are transported into the tangent space T_x_k+1 mathcalM. If BFGS or InverseBFGS is chosen as update, then the resulting method follows the method of [HAG18], taking into account that the corresponding step size is chosen.\n\nProvided functors\n\n(mp::AbstractManoptProblem, st::QuasiNewtonState) -> η to compute the update direction\n(η, mp::AbstractManoptProblem, st::QuasiNewtonState) -> η to compute the update direction in-place of η\n\nFields\n\nupdate: an AbstractQuasiNewtonDirectionUpdate\nθ: a monotone increasing function satisfying θ(0) = 0 and θ is strictly increasing at 0.\n\nConstructor\n\nQuasiNewtonCautiousDirectionUpdate(U::QuasiNewtonMatrixDirectionUpdate; θ = identity)\nQuasiNewtonCautiousDirectionUpdate(U::QuasiNewtonLimitedMemoryDirectionUpdate; θ = identity)\n\nGenerate a cautious update for either a matrix based or a limited memory based update rule.\n\nSee also\n\nQuasiNewtonMatrixDirectionUpdate 
QuasiNewtonLimitedMemoryDirectionUpdate\n\n\n\n\n\n","category":"type"},{"location":"solvers/quasi_Newton/#Manopt.initialize_update!","page":"Quasi-Newton","title":"Manopt.initialize_update!","text":"initialize_update!(s::AbstractQuasiNewtonDirectionUpdate)\n\nInitialize direction update. By default no change is made.\n\n\n\n\n\ninitialize_update!(d::QuasiNewtonLimitedMemoryDirectionUpdate)\n\nInitialize the limited memory direction update by emptying the memory buffers.\n\n\n\n\n\n","category":"function"},{"location":"solvers/quasi_Newton/#Hessian-update-rules","page":"Quasi-Newton","title":"Hessian update rules","text":"","category":"section"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"Using","category":"page"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"update_hessian!","category":"page"},{"location":"solvers/quasi_Newton/#Manopt.update_hessian!","page":"Quasi-Newton","title":"Manopt.update_hessian!","text":"update_hessian!(d::AbstractQuasiNewtonDirectionUpdate, amp, st, p_old, k)\n\nUpdate the Hessian within the QuasiNewtonState st, given an AbstractManoptProblem amp, an AbstractQuasiNewtonDirectionUpdate d, and the last iterate p_old. 
Note that the current (kth) iterate is already stored in get_iterate(st).\n\nSee also AbstractQuasiNewtonUpdateRule and its subtypes for the different rules that are available within d.\n\n\n\n\n\n","category":"function"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"the following update formulae for either H_k+1 or B_k+1 are available.","category":"page"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"AbstractQuasiNewtonUpdateRule\nBFGS\nDFP\nBroyden\nSR1\nInverseBFGS\nInverseDFP\nInverseBroyden\nInverseSR1","category":"page"},{"location":"solvers/quasi_Newton/#Manopt.AbstractQuasiNewtonUpdateRule","page":"Quasi-Newton","title":"Manopt.AbstractQuasiNewtonUpdateRule","text":"AbstractQuasiNewtonUpdateRule\n\nSpecify a type for the different AbstractQuasiNewtonDirectionUpdates, that is, for a QuasiNewtonMatrixDirectionUpdate there are several different updates to the matrix, while for the QuasiNewtonLimitedMemoryDirectionUpdate the default and most prominent is InverseBFGS.\n\n\n\n\n\n","category":"type"},{"location":"solvers/quasi_Newton/#Manopt.BFGS","page":"Quasi-Newton","title":"Manopt.BFGS","text":"BFGS <: AbstractQuasiNewtonUpdateRule\n\nindicates in AbstractQuasiNewtonDirectionUpdate that the Riemannian BFGS update is used in the Riemannian quasi-Newton method.\n\nDenote by widetildeH_k^mathrmBFGS the operator concatenated with a vector transport and its inverse before and after to act on x_k+1 = R_x_k(α_k η_k). 
Then the update formula reads\n\nH^mathrmBFGS_k+1 = widetildeH^mathrmBFGS_k + fracy_k y^mathrmT_k s^mathrmT_k y_k - fracwidetildeH^mathrmBFGS_k s_k s^mathrmT_k widetildeH^mathrmBFGS_k s^mathrmT_k widetildeH^mathrmBFGS_k s_k\n\nwhere s_k and y_k are the coordinate vectors with respect to the current basis (from QuasiNewtonState) of\n\nT^S_x_k α_k η_k(α_k η_k) quadtextandquad\noperatornamegradf(x_k+1) - T^S_x_k α_k η_k(operatornamegradf(x_k)) T_x_k+1 mathcalM\n\nrespectively.\n\n\n\n\n\n","category":"type"},{"location":"solvers/quasi_Newton/#Manopt.DFP","page":"Quasi-Newton","title":"Manopt.DFP","text":"DFP <: AbstractQuasiNewtonUpdateRule\n\nindicates in an AbstractQuasiNewtonDirectionUpdate that the Riemannian DFP update is used in the Riemannian quasi-Newton method.\n\nDenote by widetildeH_k^mathrmDFP the operator concatenated with a vector transport and its inverse before and after to act on x_k+1 = R_x_k(α_k η_k). Then the update formula reads\n\nH^mathrmDFP_k+1 = Bigl(\n mathrmid_T_x_k+1 mathcalM - fracy_k s^mathrmT_ks^mathrmT_k y_k\nBigr)\nwidetildeH^mathrmDFP_k\nBigl(\n mathrmid_T_x_k+1 mathcalM - fracs_k y^mathrmT_ks^mathrmT_k y_k\nBigr) + fracy_k y^mathrmT_ks^mathrmT_k y_k\n\nwhere s_k and y_k are the coordinate vectors with respect to the current basis (from QuasiNewtonState) of\n\nT^S_x_k α_k η_k(α_k η_k) quadtextandquad\noperatornamegradf(x_k+1) - T^S_x_k α_k η_k(operatornamegradf(x_k)) T_x_k+1 mathcalM\n\nrespectively.\n\n\n\n\n\n","category":"type"},{"location":"solvers/quasi_Newton/#Manopt.Broyden","page":"Quasi-Newton","title":"Manopt.Broyden","text":"Broyden <: AbstractQuasiNewtonUpdateRule\n\nindicates in AbstractQuasiNewtonDirectionUpdate that the Riemannian Broyden update is used in the Riemannian quasi-Newton method, which is a convex combination of BFGS and DFP.\n\nDenote by widetildeH_k^mathrmBr the operator concatenated with a vector transport and its inverse before and after to act on x_k+1 = R_x_k(α_k η_k). 
Then the update formula reads\n\nH^mathrmBr_k+1 = widetildeH^mathrmBr_k\n - fracwidetildeH^mathrmBr_k s_k s^mathrmT_k widetildeH^mathrmBr_ks^mathrmT_k widetildeH^mathrmBr_k s_k + fracy_k y^mathrmT_ks^mathrmT_k y_k\n + φ_k s^mathrmT_k widetildeH^mathrmBr_k s_k\n Bigl(\n fracy_ks^mathrmT_k y_k - fracwidetildeH^mathrmBr_k s_ks^mathrmT_k widetildeH^mathrmBr_k s_k\n Bigr)\n Bigl(\n fracy_ks^mathrmT_k y_k - fracwidetildeH^mathrmBr_k s_ks^mathrmT_k widetildeH^mathrmBr_k s_k\n Bigr)^mathrmT\n\nwhere s_k and y_k are the coordinate vectors with respect to the current basis (from QuasiNewtonState) of\n\nT^S_x_k α_k η_k(α_k η_k) quadtextandquad\noperatornamegradf(x_k+1) - T^S_x_k α_k η_k(operatornamegradf(x_k)) T_x_k+1 mathcalM\n\nrespectively, and φ_k is the Broyden factor which is :constant by default but can also be set to :Davidon.\n\nConstructor\n\nBroyden(φ, update_rule::Symbol = :constant)\n\n\n\n\n\n","category":"type"},{"location":"solvers/quasi_Newton/#Manopt.SR1","page":"Quasi-Newton","title":"Manopt.SR1","text":"SR1 <: AbstractQuasiNewtonUpdateRule\n\nindicates in AbstractQuasiNewtonDirectionUpdate that the Riemannian SR1 update is used in the Riemannian quasi-Newton method.\n\nDenote by widetildeH_k^mathrmSR1 the operator concatenated with a vector transport and its inverse before and after to act on x_k+1 = R_x_k(α_k η_k). Then the update formula reads\n\nH^mathrmSR1_k+1 = widetildeH^mathrmSR1_k\n+ frac\n (y_k - widetildeH^mathrmSR1_k s_k) (y_k - widetildeH^mathrmSR1_k s_k)^mathrmT\n\n(y_k - widetildeH^mathrmSR1_k s_k)^mathrmT s_k\n\n\nwhere s_k and y_k are the coordinate vectors with respect to the current basis (from QuasiNewtonState) of\n\nT^S_x_k α_k η_k(α_k η_k) quadtextandquad\noperatornamegradf(x_k+1) - T^S_x_k α_k η_k(operatornamegradf(x_k)) T_x_k+1 mathcalM\n\nrespectively.\n\nThis method can be stabilized by only performing the update if the denominator is larger than rlVert s_krVert_x_k+1lVert y_k - widetildeH^mathrmSR1_k s_k rVert_x_k+1 for some r0. 
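The SR1 step and its stabilisation can be sketched in plain Python (Euclidean illustration only, hypothetical names): the rank-1 update is applied only when the denominator (y - Hs)ᵀs is large enough relative to the product of the norms, and skipped otherwise.

```python
def sr1_update(H, s, y, r=1e-8):
    # one Euclidean SR1 step: H <- H + v v^T / (v^T s) with v = y - H s,
    # skipped when |v^T s| <= r * ||s|| * ||v|| (stabilisation)
    n = len(s)
    Hs = [sum(H[i][j] * s[j] for j in range(n)) for i in range(n)]
    v = [yi - hsi for yi, hsi in zip(y, Hs)]
    denom = sum(vi * si for vi, si in zip(v, s))
    nrm = lambda u: sum(ui * ui for ui in u) ** 0.5
    if abs(denom) <= r * nrm(s) * nrm(v):
        return H  # update skipped for numerical stability
    return [[H[i][j] + v[i] * v[j] / denom for j in range(n)] for i in range(n)]

H1 = sr1_update([[1.0, 0.0], [0.0, 1.0]], [1.0, 0.0], [2.0, 0.0])  # update applied
```

Unlike BFGS, SR1 does not guarantee positive definiteness, which is why this denominator check matters in practice.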
For more details, see Section 6.2 in [NW06].\n\nConstructor\n\nSR1(r::Float64=-1.0)\n\nGenerate the SR1 update.\n\n\n\n\n\n","category":"type"},{"location":"solvers/quasi_Newton/#Manopt.InverseBFGS","page":"Quasi-Newton","title":"Manopt.InverseBFGS","text":"InverseBFGS <: AbstractQuasiNewtonUpdateRule\n\nindicates in AbstractQuasiNewtonDirectionUpdate that the inverse Riemannian BFGS update is used in the Riemannian quasi-Newton method.\n\nDenote by widetildeB_k^mathrmBFGS the operator concatenated with a vector transport and its inverse before and after to act on x_k+1 = R_x_k(α_k η_k). Then the update formula reads\n\nB^mathrmBFGS_k+1 = Bigl(\n mathrmid_T_x_k+1 mathcalM - fracs_k y^mathrmT_k s^mathrmT_k y_k\nBigr)\nwidetildeB^mathrmBFGS_k\nBigl(\n mathrmid_T_x_k+1 mathcalM - fracy_k s^mathrmT_k s^mathrmT_k y_k\nBigr) + fracs_k s^mathrmT_ks^mathrmT_k y_k\n\nwhere s_k and y_k are the coordinate vectors with respect to the current basis (from QuasiNewtonState) of\n\nT^S_x_k α_k η_k(α_k η_k) quadtextandquad\noperatornamegradf(x_k+1) - T^S_x_k α_k η_k(operatornamegradf(x_k)) T_x_k+1 mathcalM\n\nrespectively.\n\n\n\n\n\n","category":"type"},{"location":"solvers/quasi_Newton/#Manopt.InverseDFP","page":"Quasi-Newton","title":"Manopt.InverseDFP","text":"InverseDFP <: AbstractQuasiNewtonUpdateRule\n\nindicates in AbstractQuasiNewtonDirectionUpdate that the inverse Riemannian DFP update is used in the Riemannian quasi-Newton method.\n\nDenote by widetildeB_k^mathrmDFP the operator concatenated with a vector transport and its inverse before and after to act on x_k+1 = R_x_k(α_k η_k). 
Then the update formula reads\n\nB^mathrmDFP_k+1 = widetildeB^mathrmDFP_k + fracs_k s^mathrmT_ks^mathrmT_k y_k\n - fracwidetildeB^mathrmDFP_k y_k y^mathrmT_k widetildeB^mathrmDFP_ky^mathrmT_k widetildeB^mathrmDFP_k y_k\n\nwhere s_k and y_k are the coordinate vectors with respect to the current basis (from QuasiNewtonState) of\n\nT^S_x_k α_k η_k(α_k η_k) quadtextandquad\noperatornamegradf(x_k+1) - T^S_x_k α_k η_k(operatornamegradf(x_k)) T_x_k+1 mathcalM\n\nrespectively.\n\n\n\n\n\n","category":"type"},{"location":"solvers/quasi_Newton/#Manopt.InverseBroyden","page":"Quasi-Newton","title":"Manopt.InverseBroyden","text":"InverseBroyden <: AbstractQuasiNewtonUpdateRule\n\nIndicates in AbstractQuasiNewtonDirectionUpdate that the Riemannian Broyden update is used in the Riemannian quasi-Newton method, which is as a convex combination of InverseBFGS and InverseDFP.\n\nDenote by widetildeH_k^mathrmBr the operator concatenated with a vector transport and its inverse before and after to act on x_k+1 = R_x_k(α_k η_k). 
Then the update formula reads\n\nB^mathrmBr_k+1 = widetildeB^mathrmBr_k\n - fracwidetildeB^mathrmBr_k y_k y^mathrmT_k widetildeB^mathrmBr_ky^mathrmT_k widetildeB^mathrmBr_k y_k\n + fracs_k s^mathrmT_ks^mathrmT_k y_k\n + φ_k y^mathrmT_k widetildeB^mathrmBr_k y_k\n Bigl(\n fracs_ks^mathrmT_k y_k - fracwidetildeB^mathrmBr_k y_ky^mathrmT_k widetildeB^mathrmBr_k y_k\n Bigr) Bigl(\n fracs_ks^mathrmT_k y_k - fracwidetildeB^mathrmBr_k y_ky^mathrmT_k widetildeB^mathrmBr_k y_k\n Bigr)^mathrmT\n\nwhere s_k and y_k are the coordinate vectors with respect to the current basis (from QuasiNewtonState) of\n\nT^S_x_k α_k η_k(α_k η_k) quadtextandquad\noperatornamegradf(x_k+1) - T^S_x_k α_k η_k(operatornamegradf(x_k)) T_x_k+1 mathcalM\n\nrespectively, and φ_k is the Broyden factor which is :constant by default but can also be set to :Davidon.\n\nConstructor\n\nInverseBroyden(φ, update_rule::Symbol = :constant)\n\n\n\n\n\n","category":"type"},{"location":"solvers/quasi_Newton/#Manopt.InverseSR1","page":"Quasi-Newton","title":"Manopt.InverseSR1","text":"InverseSR1 <: AbstractQuasiNewtonUpdateRule\n\nindicates in AbstractQuasiNewtonDirectionUpdate that the inverse Riemannian SR1 update is used in the Riemannian quasi-Newton method.\n\nDenote by widetildeB_k^mathrmSR1 the operator concatenated with a vector transport and its inverse before and after to act on x_k+1 = R_x_k(α_k η_k). 
Then the update formula reads\n\nB^mathrmSR1_k+1 = widetildeB^mathrmSR1_k\n+ frac\n (s_k - widetildeB^mathrmSR1_k y_k) (s_k - widetildeB^mathrmSR1_k y_k)^mathrmT\n\n (s_k - widetildeB^mathrmSR1_k y_k)^mathrmT y_k\n\n\nwhere s_k and y_k are the coordinate vectors with respect to the current basis (from QuasiNewtonState) of\n\nT^S_x_k α_k η_k(α_k η_k) quadtextandquad\noperatornamegradf(x_k+1) - T^S_x_k α_k η_k(operatornamegradf(x_k)) T_x_k+1 mathcalM\n\nrespectively.\n\nThis method can be stabilized by only performing the update if denominator is larger than rlVert y_krVert_x_k+1lVert s_k - widetildeH^mathrmSR1_k y_k rVert_x_k+1 for some r0. For more details, see Section 6.2 in [NW06].\n\nConstructor\n\nInverseSR1(r::Float64=-1.0)\n\nGenerate the InverseSR1.\n\n\n\n\n\n","category":"type"},{"location":"solvers/quasi_Newton/#State","page":"Quasi-Newton","title":"State","text":"","category":"section"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"The quasi Newton algorithm is based on a DefaultManoptProblem.","category":"page"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"QuasiNewtonState","category":"page"},{"location":"solvers/quasi_Newton/#Manopt.QuasiNewtonState","page":"Quasi-Newton","title":"Manopt.QuasiNewtonState","text":"QuasiNewtonState <: AbstractManoptSolverState\n\nThe AbstractManoptSolverState represent any quasi-Newton based method and stores all necessary fields.\n\nFields\n\ndirection_update: an AbstractQuasiNewtonDirectionUpdate rule.\nη: the current update direction\nnondescent_direction_behavior: a Symbol to specify how to handle direction that are not descent ones.\nnondescent_direction_value: the value from the last inner product from checking for descent directions\np::P: a point on the manifold mathcal Mstoring the current iterate\np_old: the last iterate\nsk: the current step\nyk: the current gradient difference\nretraction_method::AbstractRetractionMethod: a 
retraction operatornameretr to use, see the section on retractions\nstepsize::Stepsize: a functor inheriting from Stepsize to determine a step size\nstop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled\nX::T: a tangent vector at the point p on the manifold mathcal Mstoring the gradient at the current iterate\nX_old: the last gradient\n\nConstructor\n\nQuasiNewtonState(M::AbstractManifold, p; kwargs...)\n\nGenerate the Quasi Newton state on the manifold M with start point p.\n\nKeyword arguments\n\ndirection_update=QuasiNewtonLimitedMemoryDirectionUpdate(M, p, InverseBFGS(), 20; vector_transport_method=vector_transport_method)\nstopping_criterion=StopAfterIteration(1000)|StopWhenGradientNormLess(1e-6): a functor indicating that the stopping criterion is fulfilled\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\nstepsize=default_stepsize(M, QuasiNewtonState): a functor inheriting from Stepsize to determine a step size\nvector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport mathcal T_ to use, see the section on vector transports\nX=zero_vector(M, p): a tangent vector at the point p on the manifold mathcal Mto specify the representation of a tangent vector\n\nSee also\n\nquasi_Newton\n\n\n\n\n\n","category":"type"},{"location":"solvers/quasi_Newton/#sec-qn-technical-details","page":"Quasi-Newton","title":"Technical details","text":"","category":"section"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"The quasi_Newton solver requires the following functions of a manifold to be available","category":"page"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"A retract!(M, q, p, X); it is recommended to set the default_retraction_method to a favourite retraction. 
If this default is set, a retraction_method= does not have to be specified.\nA vector_transport_to!(M, Y, p, X, q); it is recommended to set the default_vector_transport_method to a favourite vector transport. If this default is set, a vector_transport_method= or vector_transport_method_dual= (for mathcal N) does not have to be specified.\nBy default quasi Newton uses ArmijoLinesearch which requires max_stepsize(M) to be set and an implementation of inner(M, p, X, Y).\nThe norm as well, to stop when the norm of the gradient is small, but if you implemented inner, the norm is provided already.\nA copyto!(M, q, p) and copy(M, p) for points and similarly copy(M, p, X) for tangent vectors.\nBy default the tangent vector storing the gradient is initialized calling zero_vector(M, p).","category":"page"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"Most Hessian approximations further require get_coordinates(M, p, X, b) with respect to the AbstractBasis b provided, which is DefaultOrthonormalBasis by default from the basis= keyword.","category":"page"},{"location":"solvers/quasi_Newton/#Literature","page":"Quasi-Newton","title":"Literature","text":"","category":"section"},{"location":"solvers/quasi_Newton/","page":"Quasi-Newton","title":"Quasi-Newton","text":"W. Huang, P.-A. Absil and K. A. Gallivan. A Riemannian BFGS method without differentiated retraction for nonconvex optimization problems. SIAM Journal on Optimization 28, 470–495 (2018).\n\n\n\nW. Huang, K. A. Gallivan and P.-A. Absil. A Broyden class of quasi-Newton methods for Riemannian optimization. SIAM Journal on Optimization 25, 1660–1685 (2015).\n\n\n\nJ. Nocedal and S. J. Wright. Numerical Optimization. 
2nd Edition (Springer, New York, 2006).\n\n\n\n","category":"page"},{"location":"solvers/NelderMead/#sec-nelder-meadSolver","page":"Nelder–Mead","title":"Nelder Mead method","text":"","category":"section"},{"location":"solvers/NelderMead/","page":"Nelder–Mead","title":"Nelder–Mead","text":"CurrentModule = Manopt","category":"page"},{"location":"solvers/NelderMead/","page":"Nelder–Mead","title":"Nelder–Mead","text":" NelderMead\n NelderMead!","category":"page"},{"location":"solvers/NelderMead/#Manopt.NelderMead","page":"Nelder–Mead","title":"Manopt.NelderMead","text":"NelderMead(M::AbstractManifold, f, population=NelderMeadSimplex(M))\nNelderMead(M::AbstractManifold, mco::AbstractManifoldCostObjective, population=NelderMeadSimplex(M))\nNelderMead!(M::AbstractManifold, f, population)\nNelderMead!(M::AbstractManifold, mco::AbstractManifoldCostObjective, population)\n\nSolve a Nelder-Mead minimization problem for the cost function f mathcal M ℝ on the manifold M. If the initial NelderMeadSimplex is not provided, a random set of points is chosen. The computation can be performed in-place of the population.\n\nThe algorithm consists of the following steps. Let d denote the dimension of the manifold mathcal M.\n\nOrder the simplex vertices p_i i=1d+1 by increasing cost, such that we have f(p_1) f(p_2) f(p_d+1).\nCompute the Riemannian center of mass [Kar77], cf. mean, p_textm of the simplex vertices p_1p_d+1.\nReflect the worst point at the mean p_textr = operatornameretr_p_textmbigl( - αoperatornameretr^-1_p_textm (p_d+1) bigr) If f(p_1) f(p_textr) f(p_d) then set p_d+1 = p_textr and go to step 1.\nExpand the simplex if f(p_textr) f(p_1) by computing the expansion point p_texte = operatornameretr_p_textmbigl( - γαoperatornameretr^-1_p_textm (p_d+1) bigr), which in this formulation allows reusing the tangent vector from the inverse retraction from before. If f(p_texte) f(p_textr) then set p_d+1 = p_texte otherwise set p_d+1 = p_textr. 
Then go to Step 1.\nContract the simplex if f(p_textr) f(p_d).\nIf f(p_textr) f(p_d+1) set the step s = -ρ\notherwise set s=ρ.\nCompute the contraction point p_textc = operatornameretr_p_textmbigl(soperatornameretr^-1_p_textm p_d+1 bigr).\nin this case if f(p_textc) f(p_textr) set p_d+1 = p_textc and go to step 1\nin this case if f(p_textc) f(p_d+1) set p_d+1 = p_textc and go to step 1\nShrink all points (closer to p_1). For all i=2d+1 set p_i = operatornameretr_p_1bigl( σoperatornameretr^-1_p_1 p_i bigr)\n\nFor more details, see The Euclidean variant in the Wikipedia https://en.wikipedia.org/wiki/Nelder-Mead_method or Algorithm 4.1 in http://www.optimization-online.org/DB_FILE/2007/08/1742.pdf.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\npopulation::NelderMeadSimplex=NelderMeadSimplex(M): an initial simplex of d+1 points, where d is the manifold_dimension of M.\n\nKeyword arguments\n\nstopping_criterion=StopAfterIteration(2000)|StopWhenPopulationConcentrated()): a functor indicating that the stopping criterion is fulfilled a StoppingCriterion\nα=1.0: reflection parameter α 0:\nγ=2.0 expansion parameter γ:\nρ=1/2: contraction parameter, 0 ρ frac12,\nσ=1/2: shrink coefficient, 0 σ 1\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. 
To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/NelderMead/#Manopt.NelderMead!","page":"Nelder–Mead","title":"Manopt.NelderMead!","text":"NelderMead(M::AbstractManifold, f, population=NelderMeadSimplex(M))\nNelderMead(M::AbstractManifold, mco::AbstractManifoldCostObjective, population=NelderMeadSimplex(M))\nNelderMead!(M::AbstractManifold, f, population)\nNelderMead!(M::AbstractManifold, mco::AbstractManifoldCostObjective, population)\n\nSolve a Nelder-Mead minimization problem for the cost function f mathcal M ℝ on the manifold M. If the initial NelderMeadSimplex is not provided, a random set of points is chosen. The computation can be performed in-place of the population.\n\nThe algorithm consists of the following steps. Let d denote the dimension of the manifold mathcal M.\n\nOrder the simplex vertices p_i i=1d+1 by increasing cost, such that we have f(p_1) f(p_2) f(p_d+1).\nCompute the Riemannian center of mass [Kar77], cf. mean, p_textm of the simplex vertices p_1p_d+1.\nReflect the worst point at the mean p_textr = operatornameretr_p_textmbigl( - αoperatornameretr^-1_p_textm (p_d+1) bigr) If f(p_1) f(p_textr) f(p_d) then set p_d+1 = p_textr and go to step 1.\nExpand the simplex if f(p_textr) f(p_1) by computing the expansion point p_texte = operatornameretr_p_textmbigl( - γαoperatornameretr^-1_p_textm (p_d+1) bigr), which in this formulation allows reusing the tangent vector from the inverse retraction from before. If f(p_texte) f(p_textr) then set p_d+1 = p_texte otherwise set p_d+1 = p_textr. 
Then go to Step 1.\nContract the simplex if f(p_textr) f(p_d).\nIf f(p_textr) f(p_d+1) set the step s = -ρ\notherwise set s=ρ.\nCompute the contraction point p_textc = operatornameretr_p_textmbigl(soperatornameretr^-1_p_textm p_d+1 bigr).\nin this case if f(p_textc) f(p_textr) set p_d+1 = p_textc and go to step 1\nin this case if f(p_textc) f(p_d+1) set p_d+1 = p_textc and go to step 1\nShrink all points (closer to p_1). For all i=2d+1 set p_i = operatornameretr_p_1bigl( σoperatornameretr^-1_p_1 p_i bigr)\n\nFor more details, see The Euclidean variant in the Wikipedia https://en.wikipedia.org/wiki/Nelder-Mead_method or Algorithm 4.1 in http://www.optimization-online.org/DB_FILE/2007/08/1742.pdf.\n\nInput\n\nM::AbstractManifold: a Riemannian manifold mathcal M\nf: a cost function f mathcal M ℝ implemented as (M, p) -> v\npopulation::NelderMeadSimplex=NelderMeadSimplex(M): an initial simplex of d+1 points, where d is the manifold_dimension of M.\n\nKeyword arguments\n\nstopping_criterion=StopAfterIteration(2000)|StopWhenPopulationConcentrated()): a functor indicating that the stopping criterion is fulfilled a StoppingCriterion\nα=1.0: reflection parameter α 0:\nγ=2.0 expansion parameter γ:\nρ=1/2: contraction parameter, 0 ρ frac12,\nσ=1/2: shrink coefficient, 0 σ 1\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\n\nAll other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.\n\nOutput\n\nThe obtained approximate minimizer p^*. 
To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.\n\n\n\n\n\n","category":"function"},{"location":"solvers/NelderMead/#State","page":"Nelder–Mead","title":"State","text":"","category":"section"},{"location":"solvers/NelderMead/","page":"Nelder–Mead","title":"Nelder–Mead","text":" NelderMeadState","category":"page"},{"location":"solvers/NelderMead/#Manopt.NelderMeadState","page":"Nelder–Mead","title":"Manopt.NelderMeadState","text":"NelderMeadState <: AbstractManoptSolverState\n\nDescribes all parameters and the state of a Nelder-Mead heuristic based optimization algorithm.\n\nFields\n\nThe naming of these parameters follows the Wikipedia article of the Euclidean case. The default is given in brackets, the required value range after the description\n\npopulation::NelderMeadSimplex: a population (set) of d+1 points x_i, i=1n+1, where d is the manifold_dimension of M.\nstepsize::Stepsize: a functor inheriting from Stepsize to determine a step size\nα: the reflection parameter α 0:\nγ the expansion parameter γ 0:\nρ: the contraction parameter, 0 ρ frac12,\nσ: the shrinkage coefficient, 0 σ 1\np::P: a point on the manifold mathcal M storing the current best point\ninverse_retraction_method::AbstractInverseRetractionMethod: an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nretraction_method::AbstractRetractionMethod: a retraction operatornameretr to use, see the section on retractions\n\nConstructors\n\nNelderMeadState(M::AbstractManifold; kwargs...)\n\nConstruct a Nelder-Mead Option with a default population (if not provided) of set of dimension(M)+1 random points stored in NelderMeadSimplex.\n\nKeyword arguments\n\npopulation=NelderMeadSimplex(M)\nstopping_criterion=StopAfterIteration(2000)|StopWhenPopulationConcentrated()): a functor indicating that the stopping criterion is fulfilled a StoppingCriterion\nα=1.0: reflection parameter α 0:\nγ=2.0 
expansion parameter γ:\nρ=1/2: contraction parameter, 0 ρ frac12,\nσ=1/2: shrink coefficient, 0 σ 1\ninverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction operatornameretr^-1 to use, see the section on retractions and their inverses\nretraction_method=default_retraction_method(M, typeof(p)): a retraction operatornameretr to use, see the section on retractions\np=copy(M, population.pts[1]): initialise the storage for the best point (iterate)\n\n\n\n\n\n","category":"type"},{"location":"solvers/NelderMead/#Simplex","page":"Nelder–Mead","title":"Simplex","text":"","category":"section"},{"location":"solvers/NelderMead/","page":"Nelder–Mead","title":"Nelder–Mead","text":"NelderMeadSimplex","category":"page"},{"location":"solvers/NelderMead/#Manopt.NelderMeadSimplex","page":"Nelder–Mead","title":"Manopt.NelderMeadSimplex","text":"NelderMeadSimplex\n\nA simplex for the Nelder-Mead algorithm.\n\nConstructors\n\nNelderMeadSimplex(M::AbstractManifold)\n\nConstruct a simplex using d+1 random points from manifold M, where d is the manifold_dimension of M.\n\nNelderMeadSimplex(\n M::AbstractManifold,\n p,\n B::AbstractBasis=DefaultOrthonormalBasis();\n a::Real=0.025,\n retraction_method::AbstractRetractionMethod=default_retraction_method(M, typeof(p)),\n)\n\nConstruct a simplex from a basis B with one point being p and other points constructed by moving by a in each principal direction defined by basis B of the tangent space at point p using retraction retraction_method. 
This works similarly to how the initial simplex is constructed in the Euclidean Nelder-Mead algorithm, just in the tangent space at point p.\n\n\n\n\n\n","category":"type"},{"location":"solvers/NelderMead/#Additional-stopping-criteria","page":"Nelder–Mead","title":"Additional stopping criteria","text":"","category":"section"},{"location":"solvers/NelderMead/","page":"Nelder–Mead","title":"Nelder–Mead","text":"StopWhenPopulationConcentrated","category":"page"},{"location":"solvers/NelderMead/#Manopt.StopWhenPopulationConcentrated","page":"Nelder–Mead","title":"Manopt.StopWhenPopulationConcentrated","text":"StopWhenPopulationConcentrated <: StoppingCriterion\n\nA stopping criterion for NelderMead to indicate to stop when both\n\nthe maximal distance of the first to the remaining the cost values and\nthe maximal distance of the first to the remaining the population points\n\ndrops below a certain tolerance tol_f and tol_p, respectively.\n\nConstructor\n\nStopWhenPopulationConcentrated(tol_f::Real=1e-8, tol_x::Real=1e-8)\n\n\n\n\n\n","category":"type"},{"location":"solvers/NelderMead/#Technical-details","page":"Nelder–Mead","title":"Technical details","text":"","category":"section"},{"location":"solvers/NelderMead/","page":"Nelder–Mead","title":"Nelder–Mead","text":"The NelderMead solver requires the following functions of a manifold to be available","category":"page"},{"location":"solvers/NelderMead/","page":"Nelder–Mead","title":"Nelder–Mead","text":"A retract!(M, q, p, X); it is recommended to set the default_retraction_method to a favourite retraction. If this default is set, a retraction_method= does not have to be specified.\nAn inverse_retract!(M, X, p, q); it is recommended to set the default_inverse_retraction_method to a favourite retraction. 
If this default is set, a inverse_retraction_method= does not have to be specified.\nThe distance(M, p, q) when using the default stopping criterion, which includes StopWhenPopulationConcentrated.\nWithin the default initialization rand(M) is used to generate the initial population\nA mean(M, population) has to be available, for example by loading Manifolds.jl and its statistics tools","category":"page"}] } diff --git a/dev/solvers/ChambollePock/index.html b/dev/solvers/ChambollePock/index.html index 1306457134..09f85e6873 100644 --- a/dev/solvers/ChambollePock/index.html +++ b/dev/solvers/ChambollePock/index.html @@ -1,12 +1,12 @@ Chambolle-Pock · Manopt.jl

The Riemannian Chambolle-Pock algorithm

The Riemannian Chambolle–Pock is a generalization of the Chambolle–Pock algorithm by Chambolle and Pock [CP11]. It is also known as the primal-dual hybrid gradient (PDHG) or primal-dual proximal splitting (PDPS) algorithm.

In order to minimize a cost function consisting of

\[F(p) + G(Λ(p)),\]

over $p∈\mathcal M$

where $F:\mathcal M → \overline{ℝ}$, $G:\mathcal N → \overline{ℝ}$, and $Λ:\mathcal M →\mathcal N$. If the manifolds $\mathcal M$ or $\mathcal N$ are not Hadamard, it has to be considered locally only, that is on geodesically convex sets $\mathcal C \subset \mathcal M$ and $\mathcal D \subset\mathcal N$ such that $Λ(\mathcal C) \subset \mathcal D$.

The algorithm is available in four variants: exact versus linearized (see variant) as well as with primal versus dual relaxation (see relax). For more details, see Bergmann, Herzog, Silva Louzeiro, Tenbrinck and Vidal-Núñez [BHS+21]. The following describes the case of the exact, primal relaxed Riemannian Chambolle–Pock algorithm.

Given base points $m∈\mathcal C$, $n=Λ(m)∈\mathcal D$, initial primal and dual values $p^{(0)} ∈\mathcal C$, $ξ_n^{(0)} ∈T_n^*\mathcal N$, and primal and dual step sizes $\sigma_0$, $\tau_0$, relaxation $\theta_0$, as well as acceleration $\gamma$.

As an initialization, perform $\bar p^{(0)} \gets p^{(0)}$.

The algorithm performs the steps $k=1,…$ (until a StoppingCriterion is fulfilled)

  1. \[ξ^{(k+1)}_n = \operatorname{prox}_{\tau_k G_n^*}\Bigl(ξ_n^{(k)} + \tau_k \bigl(\log_n Λ (\bar p^{(k)})\bigr)^\flat\Bigr)\]

  2. \[p^{(k+1)} = \operatorname{prox}_{\sigma_k F}\biggl(\exp_{p^{(k)}}\Bigl( \operatorname{PT}_{p^{(k)}\gets m}\bigl(-\sigma_k DΛ(m)^*[ξ_n^{(k+1)}]\bigr)^\sharp\Bigr)\biggr)\]

  3. Update
    • $\theta_k = (1+2\gamma\sigma_k)^{-\frac{1}{2}}$
    • $\sigma_{k+1} = \sigma_k\theta_k$
    • $\tau_{k+1} = \frac{\tau_k}{\theta_k}$
  4. \[\bar p^{(k+1)} = \exp_{p^{(k+1)}}\bigl(-\theta_k \log_{p^{(k+1)}} p^{(k)}\bigr)\]
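As a plain Euclidean sanity check (illustrative Python, not Manopt.jl code), steps 1, 2, and 4 can be sketched for the scalar toy problem $F(p)=\frac{1}{2}(p-b)^2$, $G(q)=\frac{1}{2}q^2$, $Λ(p)=Ap$, where the exponential map, logarithm, and parallel transport reduce to addition, subtraction, and the identity, and both proximal maps are available in closed form:

```python
# Scalar Euclidean toy problem: F(p) = 0.5*(p - b)**2, G(q) = 0.5*q**2,
# Lambda(p) = A*p, with closed-form proximal maps
#   prox_{sigma F}(x) = (x + sigma*b)/(1 + sigma),
#   prox_{tau G*}(xi) = xi/(1 + tau).
A, b = 2.0, 1.0
sigma, tau, theta = 0.25, 0.25, 1.0  # step sizes chosen with sigma*tau*A**2 <= 1
p, p_bar, xi = 0.0, 0.0, 0.0
for _ in range(200):
    xi = (xi + tau * A * p_bar) / (1.0 + tau)  # step 1: dual proximal step
    p_old = p
    p = (p - sigma * A * xi + sigma * b) / (1.0 + sigma)  # step 2: primal proximal step
    p_bar = p + theta * (p - p_old)  # step 4: primal relaxation
# the minimizer of 0.5*(p - b)**2 + 0.5*(A*p)**2 is p* = b/(1 + A**2) = 0.2
```

Here `theta` is kept constant at 1, that is, the acceleration of step 3 is switched off for simplicity.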

Furthermore, you can replace the exponential map, the logarithmic map, and the parallel transport by a retraction, an inverse retraction, and a vector transport, respectively.

Finally you can also update the base points $m$ and $n$ during the iterations. This introduces a few additional vector transports. The same holds for the case $Λ(m^{(k)})\neq n^{(k)}$ at some point. All these cases are covered in the algorithm.
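For intuition, the acceleration of step 3 can be written as a small helper (illustrative Python, not part of Manopt.jl). Note that the product $\sigma_k\tau_k$ is invariant under this update; only the balance between the primal and dual step sizes changes:

```python
def accelerate(sigma_k, tau_k, gamma):
    # step 3: theta_k = (1 + 2*gamma*sigma_k)**(-1/2), then shrink sigma, grow tau
    theta_k = (1.0 + 2.0 * gamma * sigma_k) ** -0.5
    return theta_k, sigma_k * theta_k, tau_k / theta_k

theta, sigma, tau = accelerate(1.0, 1.0, 0.5)  # theta = 1/sqrt(2)
```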

Manopt.ChambollePockFunction
ChambollePock(M, N, f, p, X, m, n, prox_G, prox_G_dual, adjoint_linear_operator; kwargs...)
ChambollePock!(M, N, f, p, X, m, n, prox_G, prox_G_dual, adjoint_linear_operator; kwargs...)

Perform the Riemannian Chambolle—Pock algorithm.

Given a cost function $\mathcal E:\mathcal M → ℝ$ of the form

\[\mathcal E(p) = F(p) + G( Λ(p) ),\]

where $F:\mathcal M → ℝ$, $G:\mathcal N → ℝ$, and $Λ:\mathcal M → \mathcal N$.

This can be done inplace of $p$.

Input parameters

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • N::AbstractManifold: a Riemannian manifold $\mathcal N$
  • p: a point on the manifold $\mathcal M$
  • X: a tangent vector at the point $p$ on the manifold $\mathcal M$
  • m: a point on the manifold $\mathcal M$
  • n: a point on the manifold $\mathcal N$
  • adjoint_linearized_operator: the adjoint $DΛ^*$ of the linearized operator $DΛ: T_{m}\mathcal M → T_{Λ(m)}\mathcal N$
  • prox_F, prox_G_Dual: the proximal maps of $F$ and $G^\ast_n$

Note that depending on the AbstractEvaluationType evaluation the last three parameters as well as the forward operator Λ and the linearized_forward_operator can be given as allocating functions (Manifold, parameters) -> result or as mutating functions (Manifold, result, parameters) -> result to spare allocations.

By default, this performs the exact Riemannian Chambolle–Pock algorithm; see the optional parameter variant for its linearized variant.

For more details on the algorithm, see [BHS+21].

Keyword Arguments

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source
Manopt.ChambollePock!Function
ChambollePock(M, N, f, p, X, m, n, prox_G, prox_G_dual, adjoint_linear_operator; kwargs...)
ChambollePock!(M, N, f, p, X, m, n, prox_G, prox_G_dual, adjoint_linear_operator; kwargs...)

Perform the Riemannian Chambolle—Pock algorithm.

Given a cost function $\mathcal E:\mathcal M → ℝ$ of the form

\[\mathcal E(p) = F(p) + G( Λ(p) ),\]

where $F:\mathcal M → ℝ$, $G:\mathcal N → ℝ$, and $Λ:\mathcal M → \mathcal N$.

This can be done inplace of $p$.

Input parameters

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • N::AbstractManifold: a Riemannian manifold $\mathcal N$
  • p: a point on the manifold $\mathcal M$
  • X: a tangent vector at the point $p$ on the manifold $\mathcal M$
  • m: a point on the manifold $\mathcal M$
  • n: a point on the manifold $\mathcal N$
  • adjoint_linearized_operator: the adjoint $DΛ^*$ of the linearized operator $DΛ: T_{m}\mathcal M → T_{Λ(m)}\mathcal N$
  • prox_F, prox_G_Dual: the proximal maps of $F$ and $G^\ast_n$

Note that depending on the AbstractEvaluationType evaluation the last three parameters as well as the forward operator Λ and the linearized_forward_operator can be given as allocating functions (Manifold, parameters) -> result or as mutating functions (Manifold, result, parameters) -> result to spare allocations.

By default, this performs the exact Riemannian Chambolle–Pock algorithm; see the optional parameter variant for its linearized variant.

For more details on the algorithm, see [BHS+21].

Keyword Arguments

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source

State

Manopt.ChambollePockStateType
ChambollePockState <: AbstractPrimalDualSolverState

stores all options and variables within a linearized or exact Chambolle Pock.

Fields

  • acceleration::R: acceleration factor
  • dual_stepsize::R: proximal parameter of the dual prox
  • inverse_retraction_method::AbstractInverseRetractionMethod: an inverse retraction $\operatorname{retr}^{-1}$ to use, see the section on retractions and their inverses
  • inverse_retraction_method_dual::AbstractInverseRetractionMethod: an inverse retraction $\operatorname{retr}^{-1}$ to use, see the section on retractions and their inverses
  • m::P: base point on $\mathcal M$
  • n::Q: base point on $\mathcal N$
  • p::P: an initial point on $p^{(0)} ∈ \mathcal M$
  • pbar::P: the relaxed iterate used in the next dual update step (when using :primal relaxation)
  • primal_stepsize::R: proximal parameter of the primal prox
  • X::T: an initial tangent vector $X^{(0)} ∈ T_{p^{(0)}}\mathcal M$
  • Xbar::T: the relaxed iterate used in the next primal update step (when using :dual relaxation)
  • relaxation::R: relaxation in the primal relaxation step (to compute pbar)
  • relax::Symbol: which variable to relax (:primal or :dual)
  • retraction_method::AbstractRetractionMethod: a retraction $\operatorname{retr}$ to use, see the section on retractions
  • stop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled
  • variant: whether to perform an :exact or :linearized Chambolle-Pock
  • update_primal_base: function (pr, st, k) -> m to update the primal base
  • update_dual_base: function (pr, st, k) -> n to update the dual base
  • vector_transport_method::AbstractVectorTransportMethod: a vector transport $\mathcal T_{⋅←⋅}$ to use, see the section on vector transports
  • vector_transport_method_dual::AbstractVectorTransportMethod: a vector transport $\mathcal T_{⋅←⋅}$ to use, see the section on vector transports

Here, P is a point type on $\mathcal M$, T its tangent vector type, Q a point type on $\mathcal N$, and R<:Real is a real number type

where for the last two functions an AbstractManoptProblem p, an AbstractManoptSolverState o, and the current iterate i are the arguments. If you activate these to be different from the default identity, you have to provide p.Λ for the algorithm to work (which might be missing in the linearized case).

Constructor

ChambollePockState(M::AbstractManifold, N::AbstractManifold;
ChambollePock!(M, N, f, p, X, m, n, prox_G, prox_G_dual, adjoint_linear_operator; kwargs...)

Perform the Riemannian Chambolle—Pock algorithm.

Given a cost function $\mathcal E:\mathcal M → ℝ$ of the form

\[\mathcal E(p) = F(p) + G( Λ(p) ),\]

where $F:\mathcal M → ℝ$, $G:\mathcal N → ℝ$, and $Λ:\mathcal M → \mathcal N$.

This can be done inplace of $p$.

Input parameters

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • N::AbstractManifold: a Riemannian manifold $\mathcal N$
  • p: a point on the manifold $\mathcal M$
  • X: a tangent vector at the point $p$ on the manifold $\mathcal M$
  • m: a point on the manifold $\mathcal M$
  • n: a point on the manifold $\mathcal N$
  • adjoint_linearized_operator: the adjoint $DΛ^*$ of the linearized operator $DΛ: T_{m}\mathcal M → T_{Λ(m)}\mathcal N$
  • prox_F, prox_G_Dual: the proximal maps of $F$ and $G^\ast_n$

Note that depending on the AbstractEvaluationType evaluation the last three parameters as well as the forward operator Λ and the linearized_forward_operator can be given as allocating functions (Manifold, parameters) -> result or as mutating functions (Manifold, result, parameters) -> result to spare allocations.

By default, this performs the exact Riemannian Chambolle–Pock algorithm; see the optional parameter variant for its linearized variant.

For more details on the algorithm, see [BHS+21].

Keyword Arguments

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source
Manopt.ChambollePock!Function
ChambollePock(M, N, f, p, X, m, n, prox_G, prox_G_dual, adjoint_linear_operator; kwargs...)
ChambollePock!(M, N, f, p, X, m, n, prox_G, prox_G_dual, adjoint_linear_operator; kwargs...)

Perform the Riemannian Chambolle—Pock algorithm.

Given a cost function $\mathcal E:\mathcal M → ℝ$ of the form

\[\mathcal E(p) = F(p) + G( Λ(p) ),\]

where $F:\mathcal M → ℝ$, $G:\mathcal N → ℝ$, and $Λ:\mathcal M → \mathcal N$.

This can be done inplace of $p$.

Input parameters

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • N::AbstractManifold: a Riemannian manifold $\mathcal N$
  • p: a point on the manifold $\mathcal M$
  • X: a tangent vector at the point $p$ on the manifold $\mathcal M$
  • m: a point on the manifold $\mathcal M$
  • n: a point on the manifold $\mathcal N$
  • adjoint_linearized_operator: the adjoint $DΛ^*$ of the linearized operator $DΛ: T_{m}\mathcal M → T_{Λ(m)}\mathcal N$
  • prox_F, prox_G_Dual: the proximal maps of $F$ and $G^\ast_n$

Note that, depending on the AbstractEvaluationType evaluation, the last three parameters as well as the forward operator Λ and the linearized_forward_operator can be given either as allocating functions (Manifolds, parameters) -> result or as mutating functions (Manifolds, result, parameters) -> result to spare allocations.

By default, this performs the exact Riemannian Chambolle-Pock algorithm; see the optional parameters for its linearized variant.

For more details on the algorithm, see [BHS+21].

Keyword Arguments

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source
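The difference between the allocating and the mutating calling conventions mentioned above can be sketched in plain Julia. The quadratic proximal map below is a made-up illustration (it is not part of Manopt.jl); only the argument order of the two conventions matters:

```julia
# A made-up proximal map in the two conventions Manopt.jl distinguishes.
# prox_{λf} for f(p) = ½‖p - a‖² on ℝⁿ has the closed form (p + λa) / (1 + λ).
a = [1.0, 2.0]                  # illustrative target point

# AllocatingEvaluation: (manifold, parameters...) -> result allocates its output
prox_alloc(M, λ, p) = (p .+ λ .* a) ./ (1 + λ)

# InplaceEvaluation: (manifold, result, parameters...) -> result writes into q
function prox_inplace!(M, q, λ, p)
    q .= (p .+ λ .* a) ./ (1 + λ)
    return q
end

M = nothing                     # stands in for the manifold in this flat sketch
p = [3.0, 4.0]
q = similar(p)
prox_inplace!(M, q, 0.5, p)
q == prox_alloc(M, 0.5, p)      # both conventions produce the same point
```

The in-place variant avoids allocating a new point in every iteration, which pays off in the inner loop of the solver.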

State

Manopt.ChambollePockStateType
ChambollePockState <: AbstractPrimalDualSolverState

Stores all options and variables within a linearized or exact Chambolle-Pock algorithm.

Fields

  • acceleration::R: acceleration factor
  • dual_stepsize::R: proximal parameter of the dual prox
  • inverse_retraction_method::AbstractInverseRetractionMethod: an inverse retraction $\operatorname{retr}^{-1}$ to use, see the section on retractions and their inverses
  • inverse_retraction_method_dual::AbstractInverseRetractionMethod: an inverse retraction $\operatorname{retr}^{-1}$ to use, see the section on retractions and their inverses
  • m::P: base point on $\mathcal M$
  • n::Q: base point on $\mathcal N$
  • p::P: an initial point $p^{(0)} ∈ \mathcal M$
  • pbar::P: the relaxed iterate used in the next dual update step (when using :primal relaxation)
  • primal_stepsize::R: proximal parameter of the primal prox
  • X::T: an initial tangent vector $X^{(0)} ∈ T_{p^{(0)}}\mathcal M$
  • Xbar::T: the relaxed iterate used in the next primal update step (when using :dual relaxation)
  • relaxation::R: the relaxation in the primal relaxation step (used to compute pbar)
  • relax::Symbol: which variable to relax (:primal or :dual)
  • retraction_method::AbstractRetractionMethod: a retraction $\operatorname{retr}$ to use, see the section on retractions
  • stop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled
  • variant: whether to perform an :exact or :linearized Chambolle-Pock
  • update_primal_base: function (pr, st, k) -> m to update the primal base
  • update_dual_base: function (pr, st, k) -> n to update the dual base
  • vector_transport_method::AbstractVectorTransportMethodP: a vector transport $\mathcal T_{⋅←⋅}$ to use, see the section on vector transports
  • vector_transport_method_dual::AbstractVectorTransportMethodP: a vector transport $\mathcal T_{⋅←⋅}$ to use, see the section on vector transports

Here, P is a point type on $\mathcal M$, T its tangent vector type, Q a point type on $\mathcal N$, and R<:Real is a real number type.

For the last two, the update functions, an AbstractManoptProblem pr, an AbstractManoptSolverState st, and the current iterate k are the arguments. If you activate these to be different from the default identity, you have to provide p.Λ for the algorithm to work (which might be missing in the linearized case).

Constructor

ChambollePockState(M::AbstractManifold, N::AbstractManifold;
     kwargs...
) where {P, Q, T, R <: Real}

Keyword arguments

If Manifolds.jl is loaded, N is also a keyword argument and is set to TangentBundle(M) by default.

source

Useful terms

Manopt.primal_residualFunction
primal_residual(p, o, x_old, X_old, n_old)

Compute the primal residual at current iterate $k$ given the necessary values $x_{k-1}, X_{k-1}$, and $n_{k-1}$ from the previous iterate.

\[\Bigl\lVert \frac{1}{σ}\operatorname{retr}^{-1}_{x_{k}}x_{k-1} - V_{x_k\gets m_k}\bigl(DΛ^*(m_k)\bigl[V_{n_k\gets n_{k-1}}X_{k-1} - X_k \bigr]\bigr) \Bigr\rVert\]

where $V_{⋅\gets⋅}$ is the vector transport used in the ChambollePockState

source
Manopt.dual_residualFunction
dual_residual(p, o, x_old, X_old, n_old)

Compute the dual residual at current iterate $k$ given the necessary values $x_{k-1}, X_{k-1}$, and $n_{k-1}$ from the previous iterate. The formula is slightly different depending on the o.variant used:

For the :linearized it reads

\[\Bigl\lVert \frac{1}{τ}\bigl( V_{n_{k}\gets n_{k-1}}(X_{k-1}) - X_k \bigr) - DΛ(m_k)\bigl[ V_{m_k\gets x_k}\operatorname{retr}^{-1}_{x_{k}}x_{k-1} \bigr] \Bigr\rVert\]

and for the :exact variant it reads

\[\Bigl\lVert \frac{1}{τ} V_{n_{k}\gets n_{k-1}}X_{k-1} - \operatorname{retr}^{-1}_{n_{k}}\bigl( Λ(\operatorname{retr}_{m_{k}}(V_{m_k\gets x_k}\operatorname{retr}^{-1}_{x_{k}}x_{k-1})) \bigr) \Bigr\rVert\]

where in both cases $V_{⋅\gets⋅}$ is the vector transport used in the ChambollePockState.

source

Debug

Manopt.DebugDualChangeType
DebugDualChange(opts...)

Print the change of the dual variable, similar to DebugChange, see their constructors for detail, but with a different calculation of the change, since the dual variable lives in (possibly different) tangent spaces.

source
Manopt.DebugDualResidualType
DebugDualResidual <: DebugAction

A Debug action to print the dual residual. The constructor accepts a printing function and some (shared) storage, which should at least record :Iterate, :X and :n.

Constructor

DebugDualResidual(; kwargs...)

Keyword arguments

  • io=stdout: the stream to write the debug output to
  • format="$prefix%s": format to print the dual residual, using the prefix by default
  • prefix="Dual Residual: ": short form to just set the prefix
  • storage: (a new StoreStateAction) to store the values needed for the debug
source
Manopt.DebugPrimalResidualType
DebugPrimalResidual <: DebugAction

A Debug action to print the primal residual. The constructor accepts a printing function and some (shared) storage, which should at least record :Iterate, :X and :n.

Constructor

DebugPrimalResidual(; kwargs...)

Keyword arguments

  • io=stdout: the stream to write the debug output to
  • format="$prefix%s": format to print the primal residual, using the prefix by default
  • prefix="Primal Residual: ": short form to just set the prefix
  • storage: (a new StoreStateAction) to store the values needed for the debug
source
Manopt.DebugPrimalDualResidualType
DebugPrimalDualResidual <: DebugAction

A Debug action to print the primal dual residual. The constructor accepts a printing function and some (shared) storage, which should at least record :Iterate, :X and :n.

Constructor

DebugPrimalDualResidual()

Keyword arguments

  • io=stdout: the stream to write the debug output to
  • format="$prefix%s": format to print the primal dual residual, using the prefix by default
  • prefix="PD Residual: ": short form to just set the prefix
  • storage: (a new StoreStateAction) to store the values needed for the debug
source

Record

Manopt.RecordDualChangeFunction
RecordDualChange()

Create the action, either with a given (shared) storage, or with the tuple of values to store, if that is provided.

source

Internals

Manopt.update_prox_parameters!Function
update_prox_parameters!(o)

Update the prox parameters as described in Algorithm 2 of [CP11]:

  1. $θ_{n} = \frac{1}{\sqrt{1+2γτ_n}}$
  2. $τ_{n+1} = θ_nτ_n$
  3. $σ_{n+1} = \frac{σ_n}{θ_n}$
source
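These three steps can be sketched as a small Julia function; the name and the tuple return are illustrative only (the Manopt.jl implementation mutates the solver state instead):

```julia
# Acceleration step from Algorithm 2 of [CP11]: given γ > 0, compute θₙ,
# then shrink the primal step size τ and grow the dual step size σ so that
# the product τ·σ stays constant.
function update_prox_parameters(τ, σ, γ)
    θ = 1 / sqrt(1 + 2 * γ * τ)   # step 1: acceleration factor θₙ
    return θ * τ, σ / θ, θ        # steps 2 and 3: (τₙ₊₁, σₙ₊₁, θₙ)
end

τ, σ, θ = update_prox_parameters(1.0, 1.0, 0.5)
# θ = 1/√2, τ = 1/√2, σ = √2, and τ·σ == 1 as before the update
```

Keeping $τσ$ constant while shrinking $τ$ is what drives the acceleration of the primal iterate.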

Technical details

The ChambollePock solver requires the following functions of a manifold to be available for both the manifold $\mathcal M$ and $\mathcal N$:

  • A retract!(M, q, p, X); it is recommended to set the default_retraction_method to a favourite retraction. If this default is set, a retraction_method= or retraction_method_dual= (for $\mathcal N$) does not have to be specified.
  • An inverse_retract!(M, X, p, q); it is recommended to set the default_inverse_retraction_method to a favourite inverse retraction. If this default is set, an inverse_retraction_method= or inverse_retraction_method_dual= (for $\mathcal N$) does not have to be specified.
  • A vector_transport_to!(M, Y, p, X, q); it is recommended to set the default_vector_transport_method to a favourite vector transport. If this default is set, a vector_transport_method= or vector_transport_method_dual= (for $\mathcal N$) does not have to be specified.
  • A copyto!(M, q, p) and copy(M, p) for points.

Literature

[BHS+21]
R. Bergmann, R. Herzog, M. Silva Louzeiro, D. Tenbrinck and J. Vidal-Núñez. Fenchel duality theory and a primal-dual algorithm on Riemannian manifolds. Foundations of Computational Mathematics 21, 1465–1504 (2021), arXiv:1908.02022.
[CP11]
A. Chambolle and T. Pock. A first-order primal-dual algorithm for convex problems with applications to imaging. Journal of Mathematical Imaging and Vision 40, 120–145 (2011).
diff --git a/dev/solvers/DouglasRachford/index.html b/dev/solvers/DouglasRachford/index.html index d29e1ddb8a..7235b0578f 100644 --- a/dev/solvers/DouglasRachford/index.html +++ b/dev/solvers/DouglasRachford/index.html @@ -2,11 +2,11 @@ Douglas—Rachford · Manopt.jl

Douglas—Rachford algorithm

The (Parallel) Douglas—Rachford ((P)DR) algorithm was generalized to Hadamard manifolds in [BPS16].

The aim is to minimize the sum

\[f(p) = g(p) + h(p)\]

on a manifold, where the two summands have proximal maps $\operatorname{prox}_{λ g}, \operatorname{prox}_{λ h}$ that are easy to evaluate (maybe in closed form, or not too costly to approximate). Further, define the reflection operator at the proximal map as

\[\operatorname{refl}_{λ g}(p) = \operatorname{retr}_{\operatorname{prox}_{λ g}(p)} \bigl( -\operatorname{retr}^{-1}_{\operatorname{prox}_{λ g}(p)} p \bigr).\]

Let $\alpha_k ∈ [0,1]$ with $\sum_{k ∈ ℕ} \alpha_k(1-\alpha_k) = \infty$ and $λ > 0$ (which might depend on iteration $k$ as well) be given.

Then the (P)DRA algorithm for initial data $p^{(0)} ∈ \mathcal M$ reads as follows.

Initialization

Initialize $q^{(0)} = p^{(0)}$ and $k=0$

Iteration

Repeat until a convergence criterion is reached

  1. Compute $r^{(k)} = \operatorname{refl}_{λ g}\operatorname{refl}_{λ h}(q^{(k)})$
  2. Within that operation, store $p^{(k+1)} = \operatorname{prox}_{λ h}(q^{(k)})$ which is the prox the inner reflection reflects at.
  3. Compute $q^{(k+1)} = g(\alpha_k; q^{(k)}, r^{(k)})$, where $g$ is a curve approximating the shortest geodesic, provided by a retraction and its inverse
  4. Set $k = k+1$

Result

The result is given by the last computed $p^{(K)}$ at the last iterate $K$.

For the parallel version, the first proximal map is a vectorial version where in each component one prox is applied to the corresponding copy of $t_k$ and the second proximal map corresponds to the indicator function of the set, where all copies are equal (in $\mathcal M^n$, where $n$ is the number of copies), leading to the second prox being the Riemannian mean.
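In the Euclidean special case $\mathcal M = ℝ^n$ the retraction is addition and its inverse is subtraction, so the scheme above collapses to the classical Douglas-Rachford iteration. A minimal Julia sketch under this assumption follows; the two quadratic costs and their closed-form proximal maps are illustrative choices, not part of Manopt.jl:

```julia
# Minimize f(p) = g(p) + h(p) on ℝ with g(p) = ½(p - a)², h(p) = ½(p - b)².
# On a flat space refl_{λg}(p) = 2 prox_{λg}(p) - p, and the geodesic step is a
# convex combination, so the three iteration steps read directly as below.
a, b = 0.0, 4.0
prox_g(λ, p) = (p + λ * a) / (1 + λ)     # closed-form prox of ½(p - a)²
prox_h(λ, p) = (p + λ * b) / (1 + λ)
refl(prox, λ, p) = 2 * prox(λ, p) - p    # reflection at the proximal point

function douglas_rachford(q; λ=1.0, α=0.9, iterations=50)
    p = q
    for _ in 1:iterations
        p = prox_h(λ, q)                            # step 2: keep the inner prox
        r = refl(prox_g, λ, refl(prox_h, λ, q))     # step 1: double reflection
        q = (1 - α) * q + α * r                     # step 3: relaxed update
    end
    return p
end

douglas_rachford(5.0)   # converges to the minimizer (a + b) / 2 = 2.0
```

Note that the returned iterate is the last inner proximal point $p^{(K)}$, not the auxiliary variable $q^{(K)}$, matching the result description above.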

Interface

Manopt.DouglasRachfordFunction
DouglasRachford(M, f, proxes_f, p)
 DouglasRachford(M, mpo, p)
 DouglasRachford!(M, f, proxes_f, p)
DouglasRachford!(M, mpo, p)

Compute the Douglas-Rachford algorithm on the manifold $\mathcal M$, starting from p, given the (two) proximal maps proxes_f, see [BPS16].

For $k>2$ proximal maps, the problem is reformulated using the parallel Douglas-Rachford: a vectorial proximal map on the power manifold $\mathcal M^k$ is introduced as the first proximal map and the second proximal map is set to the mean (Riemannian center of mass). This hence also boils down to two proximal maps, though each evaluates proximal maps in parallel, that is, component wise in a vector.

Note

The parallel Douglas-Rachford does not work in-place for now, since while creating the new starting point p' on the power manifold, a copy of p is created.

If you provide a ManifoldProximalMapObjective mpo instead, the proximal maps are kept unchanged.

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • f: a cost function $f: \mathcal M→ ℝ$ implemented as (M, p) -> v
  • proxes_f: functions of the form (M, λ, p) -> q performing a proximal map, where λ denotes the proximal parameter, for each of the summands of f. These can also be given in the InplaceEvaluation variant (M, q, λ, p) -> q computing in place of q.
  • p: a point on the manifold $\mathcal M$

Keyword arguments

  • α= k -> 0.9: relaxation of the step from old to new iterate, to be precise $p^{(k+1)} = g(α_k; p^{(k)}, q^{(k)})$, where $q^{(k)}$ is the result of the double reflection involved in the DR algorithm and $g$ is a curve induced by the retraction and its inverse.
  • evaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.
  • inverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction $\operatorname{retr}^{-1}$ to use, see the section on retractions and their inverses. This is used both in the relaxation step as well as in the reflection, unless you set R yourself.
  • λ= k -> 1.0: function to provide the value for the proximal parameter $λ_k$
  • R=reflect(!): method employed in the iteration to perform the reflection of p at the prox of p. This uses by default reflect or reflect! depending on reflection_evaluation and the retraction and inverse retraction specified by retraction_method and inverse_retraction_method, respectively.
  • reflection_evaluation=AllocatingEvaluation(): specify whether R works in-place or allocating
  • retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions. This is used both in the relaxation step as well as in the reflection, unless you set R yourself.
  • stopping_criterion=StopAfterIteration(200)|StopWhenChangeLess(1e-5): a functor indicating that the stopping criterion is fulfilled
  • parallel=false: indicate whether to use a parallel Douglas-Rachford or not.

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source
DouglasRachford(M, f, proxes_f, p; kwargs...)

a doc string with some math $t_{k+1} = g(α_k; t_k, s_k)$

source
Manopt.DouglasRachford!Function
DouglasRachford(M, f, proxes_f, p)
 DouglasRachford(M, mpo, p)
 DouglasRachford!(M, f, proxes_f, p)
 DouglasRachford!(M, mpo, p)

Compute the Douglas-Rachford algorithm on the manifold $\mathcal M$, starting from p, given the (two) proximal maps proxes_f, see [BPS16].

For $k>2$ proximal maps, the problem is reformulated using the parallel Douglas-Rachford: a vectorial proximal map on the power manifold $\mathcal M^k$ is introduced as the first proximal map and the second proximal map is set to the mean (Riemannian center of mass). This hence also boils down to two proximal maps, though each evaluates proximal maps in parallel, that is, component wise in a vector.

Note

The parallel Douglas-Rachford does not work in-place for now, since while creating the new starting point p' on the power manifold, a copy of p is created.

If you provide a ManifoldProximalMapObjective mpo instead, the proximal maps are kept unchanged.

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • f: a cost function $f: \mathcal M→ ℝ$ implemented as (M, p) -> v
  • proxes_f: functions of the form (M, λ, p) -> q performing a proximal map, where λ denotes the proximal parameter, for each of the summands of f. These can also be given in the InplaceEvaluation variant (M, q, λ, p) -> q computing in place of q.
  • p: a point on the manifold $\mathcal M$

Keyword arguments

  • α= k -> 0.9: relaxation of the step from old to new iterate, to be precise $p^{(k+1)} = g(α_k; p^{(k)}, q^{(k)})$, where $q^{(k)}$ is the result of the double reflection involved in the DR algorithm and $g$ is a curve induced by the retraction and its inverse.
  • evaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.
  • inverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction $\operatorname{retr}^{-1}$ to use, see the section on retractions and their inverses. This is used both in the relaxation step as well as in the reflection, unless you set R yourself.
  • λ= k -> 1.0: function to provide the value for the proximal parameter $λ_k$
  • R=reflect(!): method employed in the iteration to perform the reflection of p at the prox of p. This uses by default reflect or reflect! depending on reflection_evaluation and the retraction and inverse retraction specified by retraction_method and inverse_retraction_method, respectively.
  • reflection_evaluation=AllocatingEvaluation(): specify whether R works in-place or allocating
  • retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions. This is used both in the relaxation step as well as in the reflection, unless you set R yourself.
  • stopping_criterion=StopAfterIteration(200)|StopWhenChangeLess(1e-5): a functor indicating that the stopping criterion is fulfilled
  • parallel=false: indicate whether to use a parallel Douglas-Rachford or not.

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source
DouglasRachford(M, f, proxes_f, p; kwargs...)

a doc string with some math $t_{k+1} = g(α_k; t_k, s_k)$

source

State

Manopt.DouglasRachfordStateType
DouglasRachfordState <: AbstractManoptSolverState

Store all options required for the DouglasRachford algorithm,

Fields

  • α: relaxation of the step from old to new iterate, to be precise $x^{(k+1)} = g(α(k); x^{(k)}, t^{(k)})$, where $t^{(k)}$ is the result of the double reflection involved in the DR algorithm
  • inverse_retraction_method::AbstractInverseRetractionMethod: an inverse retraction $\operatorname{retr}^{-1}$ to use, see the section on retractions and their inverses
  • λ: function to provide the value for the proximal parameter during the calls
  • parallel: indicate whether to use a parallel Douglas-Rachford or not.
  • R: method employed in the iteration to perform the reflection of x at the prox p.
  • p::P: a point on the manifold $\mathcal M$ storing the current iterate. For the parallel Douglas-Rachford, this is not a value from the PowerManifold manifold but the mean.
  • reflection_evaluation: whether R works in-place or allocating
  • retraction_method::AbstractRetractionMethod: a retraction $\operatorname{retr}$ to use, see the section on retractions
  • s: the last result of the double reflection at the proximal maps relaxed by α.
  • stop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled

Constructor

DouglasRachfordState(M::AbstractManifold; kwargs...)

Input

Keyword arguments

  • α= k -> 0.9: relaxation of the step from old to new iterate, to be precise $x^{(k+1)} = g(α(k); x^{(k)}, t^{(k)})$, where $t^{(k)}$ is the result of the double reflection involved in the DR algorithm
  • inverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction $\operatorname{retr}^{-1}$ to use, see the section on retractions and their inverses
  • λ= k -> 1.0: function to provide the value for the proximal parameter during the calls
  • p=rand(M): a point on the manifold $\mathcal M$ to specify the initial value
  • R=reflect(!): method employed in the iteration to perform the reflection of p at the prox of p, which function is used depends on reflection_evaluation.
  • reflection_evaluation=AllocatingEvaluation(): specify whether the reflection works in-place or allocating (default)
  • retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions
  • stopping_criterion=StopAfterIteration(300): a functor indicating that the stopping criterion is fulfilled
  • parallel=false: indicate whether to use a parallel Douglas-Rachford or not.
source

For specific DebugActions and RecordActions see also Cyclic Proximal Point.

Furthermore, this solver has a short hand notation for the involved reflection.

Manopt.reflectFunction
reflect(M, f, x; kwargs...)
reflect!(M, q, f, x; kwargs...)

Reflect the point x from the manifold M at the point f(x) of the function $f: \mathcal M → \mathcal M$, given by

\[ \operatorname{refl}_f(x) = \operatorname{refl}_{f(x)}(x),\]

Compute the result in q.

See also reflect(M, p, x), to which the keywords are also passed.

source
reflect(M, p, x, kwargs...)
reflect!(M, q, p, x, kwargs...)

Reflect the point x from the manifold M at point p, given by

\[\operatorname{refl}_p(x) = \operatorname{retr}_p(-\operatorname{retr}^{-1}_p x),\]

where $\operatorname{retr}$ and $\operatorname{retr}^{-1}$ denote a retraction and an inverse retraction, respectively. This can also be done in place of q.

Keyword arguments

and for the reflect! additionally

  • X=zero_vector(M, p): a tangent vector at the point $p$ on the manifold $\mathcal M$ used as temporary memory to compute the inverse retraction in place; otherwise this is the memory that would be allocated anyway.
source
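On a flat space, where $\operatorname{retr}_p(X) = p + X$ and $\operatorname{retr}^{-1}_p(x) = x - p$, this formula reduces to the classical point reflection $2p - x$. A minimal sketch under that assumption (not the Manopt.jl implementation, which works with arbitrary retractions):

```julia
# Euclidean special case of reflect(M, p, x):
# refl_p(x) = retr_p(-retr⁻¹_p(x)) = p - (x - p) = 2p - x.
reflect_euclidean(p, x) = 2 .* p .- x

p = [1.0, 1.0]
x = [3.0, 0.0]
r = reflect_euclidean(p, x)      # reflection of x at p
reflect_euclidean(p, r) == x     # a reflection is an involution
```

The involution property is exactly what the Douglas-Rachford double-reflection step relies on.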

Technical details

The DouglasRachford solver requires the following functions of a manifold to be available

By default, one of the stopping criteria is StopWhenChangeLess, which requires

  • An inverse_retract!(M, X, p, q); it is recommended to set the default_inverse_retraction_method to a favourite inverse retraction. If this default is set, an inverse_retraction_method= does not have to be specified. Alternatively, the distance(M, p, q) for said default inverse retraction is required.

Literature

[BPS16]
R. Bergmann, J. Persch and G. Steidl. A parallel Douglas–Rachford algorithm for minimizing ROF-like functionals on images with values in symmetric Hadamard manifolds. SIAM Journal on Imaging Sciences 9, 901–937 (2016).

+DouglasRachford!(M, mpo, p)

Compute the Douglas-Rachford algorithm on the manifold $\mathcal M$, starting from pgiven the (two) proximal mapsproxes_f`, see [BPS16].

For $k>2$ proximal maps, the problem is reformulated using the parallel Douglas Rachford: a vectorial proximal map on the power manifold $\mathcal M^k$ is introduced as the first proximal map and the second proximal map of the is set to the mean (Riemannian center of mass). This hence also boils down to two proximal maps, though each evaluates proximal maps in parallel, that is, component wise in a vector.

Note

The parallel Douglas Rachford does not work in-place for now, since while creating the new staring point p' on the power manifold, a copy of p Is created

If you provide a ManifoldProximalMapObjective mpo instead, the proximal maps are kept unchanged.

Input

Keyword arguments

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source

State

Manopt.DouglasRachfordStateType
DouglasRachfordState <: AbstractManoptSolverState

Store all options required for the DouglasRachford algorithm,

Fields

  • α: relaxation of the step from old to new iterate, to be precise $x^{(k+1)} = g(α(k); x^{(k)}, t^{(k)})$, where $t^{(k)}$ is the result of the double reflection involved in the DR algorithm
  • inverse_retraction_method::AbstractInverseRetractionMethod: an inverse retraction $\operatorname{retr}^{-1}$ to use, see the section on retractions and their inverses
  • λ: function to provide the value for the proximal parameter during the calls
  • parallel: indicate whether to use a parallel Douglas-Rachford or not.
  • R: method employed in the iteration to perform the reflection of x at the prox p.
  • p::P: a point on the manifold $\mathcal M$ storing the current iterate. For the parallel Douglas-Rachford, this is not a value from the PowerManifold manifold but the mean.
  • reflection_evaluation: whether R works in-place or allocating
  • retraction_method::AbstractRetractionMethod: a retraction $\operatorname{retr}$ to use, see the section on retractions
  • s: the last result of the double reflection at the proximal maps relaxed by α.
  • stop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled

Constructor

DouglasRachfordState(M::AbstractManifold; kwargs...)

Input

Keyword arguments

  • α= k -> 0.9: relaxation of the step from old to new iterate, to be precise $x^{(k+1)} = g(α(k); x^{(k)}, t^{(k)})$, where $t^{(k)}$ is the result of the double reflection involved in the DR algorithm
  • inverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction $\operatorname{retr}^{-1}$ to use, see the section on retractions and their inverses
  • λ= k -> 1.0: function to provide the value for the proximal parameter during the calls
  • p=rand(M): a point on the manifold $\mathcal M$ to specify the initial value
  • R=reflect(!): method employed in the iteration to perform the reflection of p at the prox of p; whether reflect or reflect! is used depends on reflection_evaluation.
  • reflection_evaluation=AllocatingEvaluation(): specify whether the reflection works in-place or allocating (default)
  • retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions
  • stopping_criterion=StopAfterIteration(300): a functor indicating that the stopping criterion is fulfilled
  • parallel=false: indicate whether to use a parallel Douglas-Rachford or not.
source

For specific DebugActions and RecordActions see also Cyclic Proximal Point.

Furthermore, this solver has a shorthand notation for the involved reflection.

Manopt.reflectFunction
reflect(M, f, x; kwargs...)
+reflect!(M, q, f, x; kwargs...)

Reflect the point x from the manifold M at the point f(x) of the function $f: \mathcal M → \mathcal M$, given by

\[ \operatorname{refl}_f(x) = \operatorname{refl}_{f(x)}(x),\]

Compute the result in q.

See also reflect(M, p, x), to which the keywords are also passed.

source
reflect(M, p, x, kwargs...)
+reflect!(M, q, p, x, kwargs...)

Reflect the point x from the manifold M at point p, given by

\[\operatorname{refl}_p(x) = \operatorname{retr}_p(-\operatorname{retr}^{-1}_p x),\]

where $\operatorname{retr}$ and $\operatorname{retr}^{-1}$ denote a retraction and an inverse retraction, respectively. This can also be done in place of q.

Keyword arguments

and for the reflect! additionally

  • X=zero_vector(M, p): a tangent vector at the point $p$ on the manifold $\mathcal M$, used as temporary memory to compute the inverse retraction in-place; otherwise this is the memory that would be allocated anyway.
source
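As a self-contained illustration (plain Julia, no Manopt dependency): on the unit sphere with $\operatorname{retr} = \exp$ and $\operatorname{retr}^{-1} = \log$, the reflection $\operatorname{refl}_p(x) = \exp_p(-\log_p x)$ can be sketched as follows; the helper names are illustrative, and reflecting twice returns the original point.

```julia
using LinearAlgebra

# exponential and logarithmic map on the unit sphere (x not antipodal to p)
function sphere_exp(p, X)
    θ = norm(X)
    return θ < 1e-15 ? copy(p) : cos(θ) .* p .+ (sin(θ) / θ) .* X
end
function sphere_log(p, x)
    θ = acos(clamp(dot(p, x), -1.0, 1.0))
    return θ < 1e-15 ? zero(p) : (θ / sin(θ)) .* (x .- cos(θ) .* p)
end

# refl_p(x) = exp_p(−log_p x): walk the geodesic through x past p, the same distance
reflect_at(p, x) = sphere_exp(p, -sphere_log(p, x))

p = [0.0, 0.0, 1.0]
x = normalize([1.0, 0.0, 1.0])
q = reflect_at(p, x)  # on the sphere this equals 2⟨p, x⟩p − x
```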

Technical details

The DouglasRachford solver requires the following functions of a manifold to be available

By default, one of the stopping criteria is StopWhenChangeLess, which requires

Literature

diff --git a/dev/solvers/FrankWolfe/index.html b/dev/solvers/FrankWolfe/index.html index 5bc9a8016f..9a2eacf76b 100644 --- a/dev/solvers/FrankWolfe/index.html +++ b/dev/solvers/FrankWolfe/index.html @@ -2,7 +2,7 @@ Frank-Wolfe · Manopt.jl

Frank—Wolfe method

Manopt.Frank_Wolfe_methodFunction
Frank_Wolfe_method(M, f, grad_f, p=rand(M))
 Frank_Wolfe_method(M, gradient_objective, p=rand(M); kwargs...)
 Frank_Wolfe_method!(M, f, grad_f, p; kwargs...)
-Frank_Wolfe_method!(M, gradient_objective, p; kwargs...)

Perform the Frank-Wolfe algorithm to compute for $\mathcal C ⊂ \mathcal M$ the constrained problem

\[ \operatorname*{arg\,min}_{p∈\mathcal C} f(p),\]

where the main step is a constrained optimisation within the algorithm, that is, the sub problem (Oracle)

\[ \operatorname*{arg\,min}_{q ∈ C} ⟨\operatorname{grad} f(p_k), \log_{p_k}q⟩.\]

for every iterate $p_k$ together with a stepsize $s_k≤1$. The algorithm can be performed in-place of p.

This algorithm is inspired by but slightly more general than [WS22].

The next iterate is then given by $p_{k+1} = γ_{p_k,q_k}(s_k)$, where by default $γ$ is the shortest geodesic between the two points but can also be changed to use a retraction and its inverse.

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • f: a cost function $f: \mathcal M→ ℝ$ implemented as (M, p) -> v
  • grad_f: the (Riemannian) gradient $\operatorname{grad} f: \mathcal M → T_{p}\mathcal M$ of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place
  • p: a point on the manifold $\mathcal M$

Alternatively to f and grad_f you can provide the corresponding AbstractManifoldGradientObjective gradient_objective directly.

Keyword arguments

  • evaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.

  • retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions

  • stepsize=DecreasingStepsize(; length=2.0, shift=2): a functor inheriting from Stepsize to determine a step size

  • stopping_criterion=StopAfterIteration(500)|StopWhenGradientNormLess(1.0e-6)): a functor indicating that the stopping criterion is fulfilled

  • sub_cost=FrankWolfeCost(p, X): the cost of the Frank-Wolfe sub problem. This is used to define the sub_objective= keyword and has hence no effect, if you set sub_objective directly.

  • sub_grad=FrankWolfeGradient(p, X): the gradient of the Frank-Wolfe sub problem. This is used to define the sub_objective= keyword and has hence no effect, if you set sub_objective directly.

  • sub_kwargs=(;): a named tuple of keyword arguments that are passed to decorate_objective! of the sub solver's objective, the decorate_state! of the sub solver's state, and the sub state constructor itself.

  • sub_objective=ManifoldGradientObjective(sub_cost, sub_gradient): the objective for the Frank-Wolfe sub problem. This is used to define the sub_problem= keyword and has hence no effect, if you set sub_problem directly.

  • sub_problem=DefaultManoptProblem(M, sub_objective): specify a problem for a solver or a closed form solution function, which can be allocating or in-place.

  • sub_state=GradientDescentState(M, copy(M,p)): a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.

  • sub_stopping_criterion=StopAfterIteration(300)|StopWhenStepsizeLess(1e-8): a functor indicating that the stopping criterion is fulfilled. This is used to define the sub_state= keyword and has hence no effect, if you set sub_state directly.

  • X=zero_vector(M, p): a tangent vector at the point $p$ on the manifold $\mathcal M$ storing the gradient at the current iterate

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.

If you provide the ManifoldGradientObjective directly, the evaluation= keyword is ignored. The decorations are still applied to the objective.

Output

the obtained (approximate) minimizer $p^*$, see get_solver_return for details

source
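The oracle structure can be illustrated with a self-contained Euclidean sketch (plain Julia, no Manopt dependency), minimizing a quadratic over the probability simplex, where the linear sub problem is solved in closed form by picking a vertex; the step size $s_k = 2/(k+2)$ is the classical decreasing choice, and the function names are illustrative.

```julia
using LinearAlgebra

# minimize f(x) = ½‖x − b‖² over the simplex Δ = {x ≥ 0, ∑ xᵢ = 1};
# since b ∈ Δ, the minimizer is b itself.
b = [0.2, 0.3, 0.5]

# oracle: argmin over Δ of the linearised cost ⟨grad f(x), q⟩ is a vertex e_i
function lmo(g)
    q = zeros(length(g))
    q[argmin(g)] = 1.0
    return q
end

function frank_wolfe(x; iterations=10_000)
    for k in 0:iterations-1
        q = lmo(x .- b)                  # grad f(x) = x − b
        sk = 2 / (k + 2)                 # step size s_k ≤ 1
        x = (1 - sk) .* x .+ sk .* q     # straight line replaces the geodesic γ
    end
    return x
end

frank_wolfe([1.0, 0.0, 0.0])  # approaches b at the usual O(1/k) rate
```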
Manopt.Frank_Wolfe_method!Function
Frank_Wolfe_method(M, f, grad_f, p=rand(M))
+Frank_Wolfe_method!(M, gradient_objective, p; kwargs...)

Perform the Frank-Wolfe algorithm to compute for $\mathcal C ⊂ \mathcal M$ the constrained problem

\[ \operatorname*{arg\,min}_{p∈\mathcal C} f(p),\]

where the main step is a constrained optimisation within the algorithm, that is, the sub problem (Oracle)

\[ \operatorname*{arg\,min}_{q ∈ C} ⟨\operatorname{grad} f(p_k), \log_{p_k}q⟩.\]

for every iterate $p_k$ together with a stepsize $s_k≤1$. The algorithm can be performed in-place of p.

This algorithm is inspired by but slightly more general than [WS22].

The next iterate is then given by $p_{k+1} = γ_{p_k,q_k}(s_k)$, where by default $γ$ is the shortest geodesic between the two points but can also be changed to use a retraction and its inverse.

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • f: a cost function $f: \mathcal M→ ℝ$ implemented as (M, p) -> v
  • grad_f: the (Riemannian) gradient $\operatorname{grad} f: \mathcal M → T_{p}\mathcal M$ of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place
  • p: a point on the manifold $\mathcal M$

Alternatively to f and grad_f you can provide the corresponding AbstractManifoldGradientObjective gradient_objective directly.

Keyword arguments

  • evaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.

  • retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions

  • stepsize=DecreasingStepsize(; length=2.0, shift=2): a functor inheriting from Stepsize to determine a step size

  • stopping_criterion=StopAfterIteration(500)|StopWhenGradientNormLess(1.0e-6)): a functor indicating that the stopping criterion is fulfilled

  • sub_cost=FrankWolfeCost(p, X): the cost of the Frank-Wolfe sub problem. This is used to define the sub_objective= keyword and has hence no effect, if you set sub_objective directly.

  • sub_grad=FrankWolfeGradient(p, X): the gradient of the Frank-Wolfe sub problem. This is used to define the sub_objective= keyword and has hence no effect, if you set sub_objective directly.

  • sub_kwargs=(;): a named tuple of keyword arguments that are passed to decorate_objective! of the sub solver's objective, the decorate_state! of the sub solver's state, and the sub state constructor itself.

  • sub_objective=ManifoldGradientObjective(sub_cost, sub_gradient): the objective for the Frank-Wolfe sub problem. This is used to define the sub_problem= keyword and has hence no effect, if you set sub_problem directly.

  • sub_problem=DefaultManoptProblem(M, sub_objective): specify a problem for a solver or a closed form solution function, which can be allocating or in-place.

  • sub_state=GradientDescentState(M, copy(M,p)): a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.

  • sub_stopping_criterion=StopAfterIteration(300)|StopWhenStepsizeLess(1e-8): a functor indicating that the stopping criterion is fulfilled. This is used to define the sub_state= keyword and has hence no effect, if you set sub_state directly.

  • X=zero_vector(M, p): a tangent vector at the point $p$ on the manifold $\mathcal M$ storing the gradient at the current iterate

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.

If you provide the ManifoldGradientObjective directly, the evaluation= keyword is ignored. The decorations are still applied to the objective.

Output

the obtained (approximate) minimizer $p^*$, see get_solver_return for details

source

State

Manopt.FrankWolfeStateType
FrankWolfeState <: AbstractManoptSolverState

A struct to store the current state of the Frank_Wolfe_method

It comes in two forms, depending on the realisation of the subproblem.

Fields

  • p::P: a point on the manifold $\mathcal M$ storing the current iterate
  • X::T: a tangent vector at the point $p$ on the manifold $\mathcal M$ storing the gradient at the current iterate
  • inverse_retraction_method::AbstractInverseRetractionMethod: an inverse retraction $\operatorname{retr}^{-1}$ to use, see the section on retractions and their inverses
  • vector_transport_method::AbstractVectorTransportMethod: a vector transport $\mathcal T_{⋅←⋅}$ to use, see the section on vector transports
  • sub_problem::Union{AbstractManoptProblem, F}: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.
  • sub_state::Union{AbstractManoptSolverState, F}: a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.
  • stop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled
  • stepsize::Stepsize: a functor inheriting from Stepsize to determine a step size
  • retraction_method::AbstractRetractionMethod: a retraction $\operatorname{retr}$ to use, see the section on retractions

The sub task requires a method to solve

\[ \operatorname*{arg\,min}_{q ∈ C} ⟨\operatorname{grad} f(p_k), \log_{p_k}q⟩.\]

Constructor

FrankWolfeState(M, sub_problem, sub_state; kwargs...)

Initialise the Frank Wolfe method state.

FrankWolfeState(M, sub_problem; evaluation=AllocatingEvaluation(), kwargs...)

Initialise the Frank Wolfe method state, where sub_problem is a closed form solution with evaluation as type of evaluation.

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • sub_problem: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.
  • sub_state: a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.

Keyword arguments

where the remaining fields from before are keyword arguments.

source

Helpers

For the inner sub-problem you can easily create the corresponding cost and gradient using

Manopt.FrankWolfeCostType
FrankWolfeCost{P,T}

A structure to represent the oracle sub problem in the Frank_Wolfe_method. The cost function reads

\[F(q) = ⟨X, \log_p q⟩\]

The values p and X are stored within this functor and should be references to the iterate and gradient from within FrankWolfeState.

source
Manopt.FrankWolfeGradientType
FrankWolfeGradient{P,T}

A structure to represent the gradient of the oracle sub problem in the Frank_Wolfe_method, that is for a given point p and a tangent vector X the function reads

\[F(q) = ⟨X, \log_p q⟩\]

Its gradient can be computed easily using adjoint_differential_log_argument.

The values p and X are stored within this functor and should be references to the iterate and gradient from within FrankWolfeState.

source
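On $\mathcal M = ℝ^n$, where $\log_p q = q - p$, this pair of functors reduces to the following sketch (the struct names are illustrative, not Manopt's):

```julia
using LinearAlgebra

# Euclidean specialisation of the oracle cost F(q) = ⟨X, log_p q⟩ = ⟨X, q − p⟩,
# storing references to the iterate p and gradient X, as FrankWolfeState does.
struct EuclideanFWCost{P,T}
    p::P
    X::T
end
(f::EuclideanFWCost)(q) = dot(f.X, q .- f.p)

# its gradient with respect to q is constant, namely X itself
struct EuclideanFWGradient{T}
    X::T
end
(g::EuclideanFWGradient)(q) = g.X

F = EuclideanFWCost([0.0, 0.0], [1.0, -2.0])
F([1.0, 1.0])  # ⟨(1, −2), (1, 1)⟩ = −1
```

Since the functors only hold references, updating the iterate and gradient in the state automatically updates the sub problem, which is the design point of FrankWolfeCost and FrankWolfeGradient.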
[WS22]
M. Weber and S. Sra. Riemannian Optimization via Frank-Wolfe Methods. Mathematical Programming 199, 525–556 (2022).
diff --git a/dev/solvers/LevenbergMarquardt/index.html b/dev/solvers/LevenbergMarquardt/index.html index 0b5388e5e4..762b10b6e3 100644 --- a/dev/solvers/LevenbergMarquardt/index.html +++ b/dev/solvers/LevenbergMarquardt/index.html @@ -1,4 +1,4 @@ Levenberg–Marquardt · Manopt.jl

Levenberg-Marquardt

Manopt.LevenbergMarquardtFunction
LevenbergMarquardt(M, f, jacobian_f, p, num_components=-1)
-LevenbergMarquardt!(M, f, jacobian_f, p, num_components=-1; kwargs...)

Solve an optimization problem of the form

\[\operatorname*{arg\,min}_{p ∈ \mathcal M} \frac{1}{2} \lVert f(p) \rVert^2,\]

where $f: \mathcal M → ℝ^d$ is a continuously differentiable function, using the Riemannian Levenberg-Marquardt algorithm [Pee93]. The implementation follows Algorithm 1 [AOT22]. The second signature performs the optimization in-place of p.
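The core update can be sketched self-containedly in ℝⁿ (plain Julia, no Manopt dependency; a flat stand-in for the Riemannian algorithm with illustrative names): solve the damped normal equations $(J^\mathrm{T}J + λI)δ = -J^\mathrm{T}r$ and step to $p + δ$.

```julia
using LinearAlgebra

# one damped Gauss-Newton (Levenberg-Marquardt) step
lm_step(J, r, λ) = (J' * J + λ * I) \ (-(J' * r))

# toy residual f(p) = (p₁ − 1, 2p₂ + 1) with constant Jacobian; minimizer (1, −½)
f(p) = [p[1] - 1, 2 * p[2] + 1]
J = [1.0 0.0; 0.0 2.0]

function levenberg_marquardt(p; λ=0.1, iterations=100)
    for _ in 1:iterations
        p = p + lm_step(J, f(p), λ)   # fixed damping here; the solver adapts λ instead
    end
    return p
end

levenberg_marquardt([0.0, 0.0])  # converges to [1.0, -0.5]
```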

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • f: a cost function $f: \mathcal M→ℝ^d$
  • jacobian_f: the Jacobian of $f$. The Jacobian is supposed to accept a keyword argument basis_domain which specifies the basis of the tangent space at a given point in which the Jacobian is to be calculated. By default it should be the DefaultOrthonormalBasis.
  • p: a point on the manifold $\mathcal M$
  • num_components: length of the vector returned by the cost function (d). By default its value is -1 which means that it is determined automatically by calling f one additional time. This is only possible when evaluation is AllocatingEvaluation, for mutating evaluation this value must be explicitly specified.

These can also be passed as a NonlinearLeastSquaresObjective; then the keyword jacobian_tangent_basis below is ignored.

Keyword arguments

  • evaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.
  • η=0.2: scaling factor for the sufficient cost decrease threshold required to accept new proposal points. Allowed range: 0 < η < 1.
  • expect_zero_residual=false: whether or not the algorithm might expect that the value of residual (objective) at minimum is equal to 0.
  • damping_term_min=0.1: initial (and also minimal) value of the damping term
  • β=5.0: parameter by which the damping term is multiplied when the current new point is rejected
  • initial_jacobian_f: the initial Jacobian of the cost function f. By default this is a matrix of size num_components times the manifold dimension of similar type as p.
  • initial_residual_values: the initial residual vector of the cost function f. By default this is a vector of length num_components of similar type as p.
  • jacobian_tangent_basis: an AbstractBasis specifying the basis of the tangent space for jacobian_f.
  • retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions
  • stopping_criterion=StopAfterIteration(200)|StopWhenGradientNormLess(1e-12): a functor indicating that the stopping criterion is fulfilled

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source
Manopt.LevenbergMarquardt!Function
LevenbergMarquardt(M, f, jacobian_f, p, num_components=-1)
-LevenbergMarquardt!(M, f, jacobian_f, p, num_components=-1; kwargs...)

Solve an optimization problem of the form

\[\operatorname*{arg\,min}_{p ∈ \mathcal M} \frac{1}{2} \lVert f(p) \rVert^2,\]

where $f: \mathcal M → ℝ^d$ is a continuously differentiable function, using the Riemannian Levenberg-Marquardt algorithm [Pee93]. The implementation follows Algorithm 1 [AOT22]. The second signature performs the optimization in-place of p.

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • f: a cost function $f: \mathcal M→ℝ^d$
  • jacobian_f: the Jacobian of $f$. The Jacobian is supposed to accept a keyword argument basis_domain which specifies the basis of the tangent space at a given point in which the Jacobian is to be calculated. By default it should be the DefaultOrthonormalBasis.
  • p: a point on the manifold $\mathcal M$
  • num_components: length of the vector returned by the cost function (d). By default its value is -1 which means that it is determined automatically by calling f one additional time. This is only possible when evaluation is AllocatingEvaluation, for mutating evaluation this value must be explicitly specified.

These can also be passed as a NonlinearLeastSquaresObjective; then the keyword jacobian_tangent_basis below is ignored.

Keyword arguments

  • evaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.
  • η=0.2: scaling factor for the sufficient cost decrease threshold required to accept new proposal points. Allowed range: 0 < η < 1.
  • expect_zero_residual=false: whether or not the algorithm might expect that the value of residual (objective) at minimum is equal to 0.
  • damping_term_min=0.1: initial (and also minimal) value of the damping term
  • β=5.0: parameter by which the damping term is multiplied when the current new point is rejected
  • initial_jacobian_f: the initial Jacobian of the cost function f. By default this is a matrix of size num_components times the manifold dimension of similar type as p.
  • initial_residual_values: the initial residual vector of the cost function f. By default this is a vector of length num_components of similar type as p.
  • jacobian_tangent_basis: an AbstractBasis specifying the basis of the tangent space for jacobian_f.
  • retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions
  • stopping_criterion=StopAfterIteration(200)|StopWhenGradientNormLess(1e-12): a functor indicating that the stopping criterion is fulfilled

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source

Options

Manopt.LevenbergMarquardtStateType
LevenbergMarquardtState{P,T} <: AbstractGradientSolverState

Describes a gradient-based descent algorithm, with

Fields

A default value is given in brackets if a parameter can be left out in initialization.

  • p::P: a point on the manifold $\mathcal M$ storing the current iterate
  • retraction_method::AbstractRetractionMethod: a retraction $\operatorname{retr}$ to use, see the section on retractions
  • residual_values: value of $F$ calculated in the solver setup or the previous iteration
  • residual_values_temp: value of $F$ for the current proposal point
  • stop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled
  • jacF: the current Jacobian of $F$
  • gradient: the current gradient of $F$
  • step_vector: the tangent vector at p that is used to move to the next point
  • last_stepsize: length of step_vector
  • η: Scaling factor for the sufficient cost decrease threshold required to accept new proposal points. Allowed range: 0 < η < 1.
  • damping_term: current value of the damping term
  • damping_term_min: initial (and also minimal) value of the damping term
  • β: parameter by which the damping term is multiplied when the current new point is rejected
  • expect_zero_residual: if true, the algorithm expects that the value of the residual (objective) at minimum is equal to 0.

Constructor

LevenbergMarquardtState(M, initial_residual_values, initial_jacF; kwargs...)

Generate the Levenberg-Marquardt solver state.

Keyword arguments

The following fields are keyword arguments

See also

gradient_descent, LevenbergMarquardt

source
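To make the roles of η, β, and damping_term_min concrete, here is a simplified scalar sketch of the accept/reject rule they parametrize. This is an illustration only, not Manopt's actual implementation; in particular, how the damping term is decreased after a successful step is an assumption here.

```julia
# Simplified sketch of one Levenberg-Marquardt damping decision:
# a proposal is accepted when the realized cost decrease is at least
# η times the decrease predicted by the local model; on rejection the
# damping term is multiplied by β, and it never falls below damping_term_min.
function lm_damping_step(cost_old, cost_new, predicted_decrease;
        damping=1.0, η=0.2, β=5.0, damping_term_min=0.1)
    if cost_old - cost_new ≥ η * predicted_decrease
        # successful proposal: relax the damping (assumed here: divide by β)
        return true, max(damping / β, damping_term_min)
    else
        # rejected proposal: inflate the damping by the factor β
        return false, damping * β
    end
end

lm_damping_step(10.0, 8.0, 1.0; damping=1.0)   # accepted, damping relaxed
lm_damping_step(10.0, 9.9, 1.0; damping=1.0)   # rejected, damping inflated
```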

Technical details

The LevenbergMarquardt solver requires the following functions of a manifold to be available

  • A retract!(M, q, p, X); it is recommended to set the default_retraction_method to a favourite retraction. If this default is set, a retraction_method= does not have to be specified.
  • the norm, to stop when the norm of the gradient is small; if you implemented inner, the norm is provided already.
  • copyto!(M, q, p) and copy(M, p) for points.

Literature

[AOT22]
S. Adachi, T. Okuno and A. Takeda. Riemannian Levenberg-Marquardt Method with Global and Local Convergence Properties. ArXiv Preprint (2022).
[Pee93]
R. Peeters. On a Riemannian version of the Levenberg-Marquardt algorithm. Serie Research Memoranda 0011 (VU University Amsterdam, Faculty of Economics, Business Administration and Econometrics, 1993).
Manopt.LevenbergMarquardt!Function
LevenbergMarquardt(M, f, jacobian_f, p, num_components=-1)
 LevenbergMarquardt!(M, f, jacobian_f, p, num_components=-1; kwargs...)

Solve an optimization problem of the form

\[\operatorname*{arg\,min}_{p ∈ \mathcal M} \frac{1}{2} \lVert f(p) \rVert^2,\]

where $f: \mathcal M → ℝ^d$ is a continuously differentiable function, using the Riemannian Levenberg-Marquardt algorithm [Pee93]. The implementation follows Algorithm 1 [AOT22]. The second signature performs the optimization in-place of p.

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • f: a cost function $f: \mathcal M→ℝ^d$
  • jacobian_f: the Jacobian of $f$. The Jacobian is supposed to accept a keyword argument basis_domain which specifies basis of the tangent space at a given point in which the Jacobian is to be calculated. By default it should be the DefaultOrthonormalBasis.
  • p: a point on the manifold $\mathcal M$
  • num_components: length of the vector returned by the cost function (d). By default its value is -1 which means that it is determined automatically by calling f one additional time. This is only possible when evaluation is AllocatingEvaluation, for mutating evaluation this value must be explicitly specified.

These can also be passed as a NonlinearLeastSquaresObjective, in which case the keyword jacobian_tangent_basis below is ignored.

Keyword arguments

  • evaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.
  • η=0.2: scaling factor for the sufficient cost decrease threshold required to accept new proposal points. Allowed range: 0 < η < 1.
  • expect_zero_residual=false: whether the algorithm expects the value of the residual (objective) at the minimum to be equal to 0.
  • damping_term_min=0.1: initial (and also minimal) value of the damping term
  • β=5.0: parameter by which the damping term is multiplied when the current new point is rejected
  • initial_jacobian_f: the initial Jacobian of the cost function f. By default this is a num_components × manifold_dimension(M) matrix of a type similar to p.
  • initial_residual_values: the initial residual vector of the cost function f. By default this is a vector of length num_components of similar type as p.
  • jacobian_tangent_basis: an AbstractBasis specifying the basis of the tangent space for jacobian_f.
  • retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions
  • stopping_criterion=StopAfterIteration(200)|StopWhenGradientNormLess(1e-12): a functor indicating that the stopping criterion is fulfilled

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source

diff --git a/dev/solvers/NelderMead/index.html b/dev/solvers/NelderMead/index.html
index 8fdfdecec8..58a9e91d9f 100644
--- a/dev/solvers/NelderMead/index.html
+++ b/dev/solvers/NelderMead/index.html
@@ -2,13 +2,13 @@ Nelder–Mead · Manopt.jl

Nelder Mead method

Manopt.NelderMeadFunction
NelderMead(M::AbstractManifold, f, population=NelderMeadSimplex(M))
 NelderMead(M::AbstractManifold, mco::AbstractManifoldCostObjective, population=NelderMeadSimplex(M))
 NelderMead!(M::AbstractManifold, f, population)
 NelderMead!(M::AbstractManifold, mco::AbstractManifoldCostObjective, population)

Solve a Nelder-Mead minimization problem for the cost function $f: \mathcal M → ℝ$ on the manifold M. If the initial NelderMeadSimplex is not provided, a random set of points is chosen. The computation can be performed in-place of the population.

The algorithm consists of the following steps. Let $d$ denote the dimension of the manifold $\mathcal M$.

  1. Order the simplex vertices $p_i, i=1,…,d+1$ by increasing cost, such that we have $f(p_1) ≤ f(p_2) ≤ … ≤ f(p_{d+1})$.
  2. Compute the Riemannian center of mass [Kar77], cf. mean, $p_{\text{m}}$ of the simplex vertices $p_1,…,p_{d+1}$.
  3. Reflect the worst point at the mean: $p_{\text{r}} = \operatorname{retr}_{p_{\text{m}}}\bigl( - α\operatorname{retr}^{-1}_{p_{\text{m}}} (p_{d+1}) \bigr)$. If $f(p_1) ≤ f(p_{\text{r}}) ≤ f(p_{d})$, set $p_{d+1} = p_{\text{r}}$ and go to step 1.
  4. Expand the simplex if $f(p_{\text{r}}) < f(p_1)$ by computing the expansion point $p_{\text{e}} = \operatorname{retr}_{p_{\text{m}}}\bigl( - γα\operatorname{retr}^{-1}_{p_{\text{m}}} (p_{d+1}) \bigr)$, which in this formulation allows reusing the tangent vector from the preceding inverse retraction. If $f(p_{\text{e}}) < f(p_{\text{r}})$, set $p_{d+1} = p_{\text{e}}$; otherwise set $p_{d+1} = p_{\text{r}}$. Then go to step 1.
  5. Contract the simplex if $f(p_{\text{r}}) ≥ f(p_d)$.
    1. If $f(p_{\text{r}}) < f(p_{d+1})$ set the step $s = -ρ$
    2. otherwise set $s=ρ$.
    Compute the contraction point $p_{\text{c}} = \operatorname{retr}_{p_{\text{m}}}\bigl(s\operatorname{retr}^{-1}_{p_{\text{m}}} p_{d+1} \bigr)$.
    1. in the first case, if $f(p_{\text{c}}) < f(p_{\text{r}})$, set $p_{d+1} = p_{\text{c}}$ and go to step 1
    2. in the second case, if $f(p_{\text{c}}) < f(p_{d+1})$, set $p_{d+1} = p_{\text{c}}$ and go to step 1
  6. Shrink all points (closer to $p_1$). For all $i=2,...,d+1$ set $p_{i} = \operatorname{retr}_{p_{1}}\bigl( σ\operatorname{retr}^{-1}_{p_{1}} p_{i} \bigr).$

For more details, see the Euclidean variant on Wikipedia (https://en.wikipedia.org/wiki/Nelder-Mead_method) or Algorithm 4.1 in http://www.optimization-online.org/DB_FILE/2007/08/1742.pdf.

Input

Keyword arguments

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source
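A minimal sketch of calling the solver, assuming Manifolds.jl provides the manifold and using a hypothetical cost function:

```julia
using Manopt, Manifolds

M = Sphere(2)
# hypothetical cost: squared deviation of the first coordinate from 1,
# minimized at p = [1, 0, 0]
f(M, p) = (p[1] - 1)^2
# with no population given, a random simplex of d + 1 = 3 points is used
p_star = NelderMead(M, f)
# or provide an explicit simplex as the population
population = NelderMeadSimplex(M, [0.0, 0.0, 1.0])
q = NelderMead(M, f, population)
```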
Manopt.NelderMead!Function
NelderMead(M::AbstractManifold, f, population=NelderMeadSimplex(M))
 NelderMead(M::AbstractManifold, mco::AbstractManifoldCostObjective, population=NelderMeadSimplex(M))
 NelderMead!(M::AbstractManifold, f, population)
 NelderMead!(M::AbstractManifold, mco::AbstractManifoldCostObjective, population)

Solve a Nelder-Mead minimization problem for the cost function $f: \mathcal M → ℝ$ on the manifold M. If the initial NelderMeadSimplex is not provided, a random set of points is chosen. The computation can be performed in-place of the population.

The algorithm consists of the following steps. Let $d$ denote the dimension of the manifold $\mathcal M$.

  1. Order the simplex vertices $p_i, i=1,…,d+1$ by increasing cost, such that we have $f(p_1) ≤ f(p_2) ≤ … ≤ f(p_{d+1})$.
  2. Compute the Riemannian center of mass [Kar77], cf. mean, $p_{\text{m}}$ of the simplex vertices $p_1,…,p_{d+1}$.
  3. Reflect the worst point at the mean: $p_{\text{r}} = \operatorname{retr}_{p_{\text{m}}}\bigl( - α\operatorname{retr}^{-1}_{p_{\text{m}}} (p_{d+1}) \bigr)$. If $f(p_1) ≤ f(p_{\text{r}}) ≤ f(p_{d})$, set $p_{d+1} = p_{\text{r}}$ and go to step 1.
  4. Expand the simplex if $f(p_{\text{r}}) < f(p_1)$ by computing the expansion point $p_{\text{e}} = \operatorname{retr}_{p_{\text{m}}}\bigl( - γα\operatorname{retr}^{-1}_{p_{\text{m}}} (p_{d+1}) \bigr)$, which in this formulation allows reusing the tangent vector from the preceding inverse retraction. If $f(p_{\text{e}}) < f(p_{\text{r}})$, set $p_{d+1} = p_{\text{e}}$; otherwise set $p_{d+1} = p_{\text{r}}$. Then go to step 1.
  5. Contract the simplex if $f(p_{\text{r}}) ≥ f(p_d)$.
    1. If $f(p_{\text{r}}) < f(p_{d+1})$ set the step $s = -ρ$
    2. otherwise set $s=ρ$.
    Compute the contraction point $p_{\text{c}} = \operatorname{retr}_{p_{\text{m}}}\bigl(s\operatorname{retr}^{-1}_{p_{\text{m}}} p_{d+1} \bigr)$.
    1. in the first case, if $f(p_{\text{c}}) < f(p_{\text{r}})$, set $p_{d+1} = p_{\text{c}}$ and go to step 1
    2. in the second case, if $f(p_{\text{c}}) < f(p_{d+1})$, set $p_{d+1} = p_{\text{c}}$ and go to step 1
  6. Shrink all points (closer to $p_1$). For all $i=2,...,d+1$ set $p_{i} = \operatorname{retr}_{p_{1}}\bigl( σ\operatorname{retr}^{-1}_{p_{1}} p_{i} \bigr).$

For more details, see the Euclidean variant on Wikipedia (https://en.wikipedia.org/wiki/Nelder-Mead_method) or Algorithm 4.1 in http://www.optimization-online.org/DB_FILE/2007/08/1742.pdf.

Input

Keyword arguments

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source

State

Manopt.NelderMeadStateType
NelderMeadState <: AbstractManoptSolverState

Describes all parameters and the state of a Nelder-Mead heuristic based optimization algorithm.

Fields

The naming of these parameters follows the Wikipedia article of the Euclidean case. The default is given in brackets, the required value range after the description

  • population::NelderMeadSimplex: a population (set) of $d+1$ points $x_i$, $i=1,…,d+1$, where $d$ is the manifold_dimension of M.
  • stepsize::Stepsize: a functor inheriting from Stepsize to determine a step size
  • α: the reflection parameter $α > 0$
  • γ: the expansion parameter $γ > 0$
  • ρ: the contraction parameter, $0 < ρ ≤ \frac{1}{2}$,
  • σ: the shrinkage coefficient, $0 < σ ≤ 1$
  • p::P: a point on the manifold $\mathcal M$ storing the current best point
  • inverse_retraction_method::AbstractInverseRetractionMethod: an inverse retraction $\operatorname{retr}^{-1}$ to use, see the section on retractions and their inverses
  • retraction_method::AbstractRetractionMethod: a retraction $\operatorname{retr}$ to use, see the section on retractions

Constructors

NelderMeadState(M::AbstractManifold; kwargs...)

Construct a Nelder-Mead state with a default population (if not provided) consisting of a set of manifold_dimension(M)+1 random points stored in a NelderMeadSimplex.

Keyword arguments

source

Simplex

Manopt.NelderMeadSimplexType
NelderMeadSimplex

A simplex for the Nelder-Mead algorithm.

Constructors

NelderMeadSimplex(M::AbstractManifold)

Construct a simplex using $d+1$ random points from manifold M, where $d$ is the manifold_dimension of M.

NelderMeadSimplex(
     M::AbstractManifold,
     p,
     B::AbstractBasis=DefaultOrthonormalBasis();
     a::Real=0.025,
     retraction_method::AbstractRetractionMethod=default_retraction_method(M, typeof(p)),
)

Construct a simplex from a basis B with one point being p and other points constructed by moving by a in each principal direction defined by basis B of the tangent space at point p using retraction retraction_method. This works similarly to how the initial simplex is constructed in the Euclidean Nelder-Mead algorithm, just in the tangent space at point p.

source
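For illustration, the second constructor can be used to seed the solver deterministically around a known point (the manifold, point, and spread value here are arbitrary choices, not defaults taken from this page beyond those shown in the signature):

```julia
using Manopt, Manifolds

M = Sphere(2)
p = [1.0, 0.0, 0.0]
# spread the d + 1 simplex points by a = 0.05 along the directions of the
# default orthonormal basis of the tangent space at p
simplex = NelderMeadSimplex(M, p, DefaultOrthonormalBasis(); a=0.05)
```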

Additional stopping criteria

Manopt.StopWhenPopulationConcentratedType
StopWhenPopulationConcentrated <: StoppingCriterion

A stopping criterion for NelderMead to indicate to stop when both

  • the maximal distance of the first to the remaining cost values and
  • the maximal distance of the first to the remaining population points

drop below a certain tolerance tol_f and tol_p, respectively.

Constructor

StopWhenPopulationConcentrated(tol_f::Real=1e-8, tol_x::Real=1e-8)
source
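A sketch of combining this criterion with an iteration cap, using the | combination notation that appears in the stopping_criterion defaults elsewhere on this page (the manifold and cost are hypothetical):

```julia
using Manopt, Manifolds

M = Sphere(2)
f(M, p) = (p[1] - 1)^2   # hypothetical cost
sc = StopAfterIteration(500) | StopWhenPopulationConcentrated(1e-6, 1e-6)
p_star = NelderMead(M, f; stopping_criterion=sc)
```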

Technical details

The NelderMead solver requires the following functions of a manifold to be available

diff --git a/dev/solvers/adaptive-regularization-with-cubics/index.html b/dev/solvers/adaptive-regularization-with-cubics/index.html
index d6d04e12eb..10d034d5f2 100644
--- a/dev/solvers/adaptive-regularization-with-cubics/index.html
+++ b/dev/solvers/adaptive-regularization-with-cubics/index.html
@@ -9,7 +9,7 @@

\[σ_{k+1} = \begin{cases}
    \max\{σ_{\min}, γ_1σ_k\} & \text{ if } ρ \geq η_2 &\text{   (the model was very successful)},\\
    σ_k & \text{ if } ρ ∈ [η_1, η_2)&\text{   (the model was successful)},\\
    γ_2σ_k & \text{ if } ρ < η_1&\text{   (the model was unsuccessful)}.
\end{cases}\]

For more details see [ABBC20].

Input

the cost f and its gradient and Hessian might also be provided as a ManifoldHessianObjective

Keyword arguments

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.

If you provide the ManifoldGradientObjective directly, the evaluation= keyword is ignored. The decorations are still applied to the objective.

If you activate tutorial mode (cf. is_tutorial_mode), this solver provides additional debug warnings.

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source
Manopt.adaptive_regularization_with_cubics!Function
adaptive_regularization_with_cubics(M, f, grad_f, Hess_f, p=rand(M); kwargs...)
 adaptive_regularization_with_cubics(M, f, grad_f, p=rand(M); kwargs...)
 adaptive_regularization_with_cubics(M, mho, p=rand(M); kwargs...)
 adaptive_regularization_with_cubics!(M, f, grad_f, Hess_f, p; kwargs...)

\[σ_{k+1} = \begin{cases}
    \max\{σ_{\min}, γ_1σ_k\} & \text{ if } ρ \geq η_2 &\text{   (the model was very successful)},\\
    σ_k & \text{ if } ρ ∈ [η_1, η_2)&\text{   (the model was successful)},\\
    γ_2σ_k & \text{ if } ρ < η_1&\text{   (the model was unsuccessful)}.
\end{cases}\]

For more details see [ABBC20].

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • f: a cost function $f: \mathcal M→ ℝ$ implemented as (M, p) -> v
  • grad_f: the (Riemannian) gradient $\operatorname{grad}f: \mathcal M → T_{p}\mathcal M$ of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place
  • Hess_f: the (Riemannian) Hessian $\operatorname{Hess}f: T_{p}\mathcal M → T_{p}\mathcal M$ of f as a function (M, p, X) -> Y or a function (M, Y, p, X) -> Y computing Y in-place
  • p: a point on the manifold $\mathcal M$

the cost f and its gradient and Hessian might also be provided as a ManifoldHessianObjective

Keyword arguments

  • σ=100.0 / sqrt(manifold_dimension(M)): initial regularization parameter
  • σmin=1e-10: minimal regularization value $σ_{\min}$
  • η1=0.1: lower model success threshold
  • η2=0.9: upper model success threshold
  • γ1=0.1: regularization reduction factor (for the success case)
  • γ2=2.0: regularization increment factor (for the non-success case)
  • evaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.
  • initial_tangent_vector=zero_vector(M, p): initialize any tangent vector data,
  • maxIterLanczos=200: a shortcut to set the stopping criterion in the sub solver,
  • ρ_regularization=1e3: a regularization to avoid dividing by zero for small values of cost and model
  • retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions:
  • stopping_criterion=StopAfterIteration(40)|StopWhenGradientNormLess(1e-9)|StopWhenAllLanczosVectorsUsed(maxIterLanczos): a functor indicating that the stopping criterion is fulfilled
  • sub_kwargs=(;): a named tuple of keyword arguments that are passed to decorate_objective! of the sub solver's objective, to decorate_state! of the sub solver's state, and to the sub state constructor itself.
  • sub_objective=nothing: a shortcut to modify the objective of the subproblem used within the sub_problem= keyword. By default, this is initialized as a AdaptiveRagularizationWithCubicsModelObjective, which can further be decorated by using the sub_kwargs= keyword.
  • sub_state=LanczosState(M, copy(M,p)): a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.
  • sub_problem=DefaultManoptProblem(M, sub_objective): specify a problem for a solver or a closed form solution function, which can be allocating or in-place.

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.

If you provide the ManifoldGradientObjective directly, the evaluation= keyword is ignored. The decorations are still applied to the objective.

If you activate tutorial mode (cf. is_tutorial_mode), this solver provides additional debug warnings.

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source
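As an illustration of the signature above, here is the standard Rayleigh-quotient example (a textbook problem, not taken from this page): minimizing $p^{\mathrm{T}}Ap$ over the sphere approximates an eigenvector for the smallest eigenvalue of $A$. The gradient and Hessian below follow the usual projection formulas for this cost.

```julia
using LinearAlgebra, Manopt, Manifolds

n = 8
A = Symmetric(randn(n, n))
M = Sphere(n - 1)
f(M, p) = p' * A * p
# Riemannian gradient: project the Euclidean gradient 2Ap onto T_p M
grad_f(M, p) = 2 * (A * p - (p' * A * p) * p)
# Riemannian Hessian along a tangent vector X ∈ T_p M
Hess_f(M, p, X) = 2 * (A * X - (p' * A * X) * p - (p' * A * p) * X)
p0 = (1 / sqrt(n)) * ones(n)
q = adaptive_regularization_with_cubics(M, f, grad_f, Hess_f, p0)
```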
Manopt.adaptive_regularization_with_cubics!Function
adaptive_regularization_with_cubics(M, f, grad_f, Hess_f, p=rand(M); kwargs...)
 adaptive_regularization_with_cubics(M, f, grad_f, p=rand(M); kwargs...)
 adaptive_regularization_with_cubics(M, mho, p=rand(M); kwargs...)
 adaptive_regularization_with_cubics!(M, f, grad_f, Hess_f, p; kwargs...)
@@ -19,8 +19,8 @@
     \max\{σ_{\min}, γ_1σ_k\} & \text{ if } ρ \geq η_2 &\text{   (the model was very successful)},\\
     σ_k & \text{ if } ρ ∈ [η_1, η_2)&\text{   (the model was successful)},\\
     γ_2σ_k & \text{ if } ρ < η_1&\text{   (the model was unsuccessful)}.
-\end{cases}\]

For more details see [ABBC20].

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • f: a cost function $f: \mathcal M→ ℝ$ implemented as (M, p) -> v
  • grad_f: the (Riemannian) gradient $\operatorname{grad}f$: \mathcal M → T_{p}\mathcal M of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place
  • Hess_f: the (Riemannian) Hessian $\operatorname{Hess}f$: T{p}\mathcal M → T{p}\mathcal M of f as a function (M, p, X) -> Y or a function (M, Y, p, X) -> Y computing Y in-place
  • p: a point on the manifold $\mathcal M$

the cost f and its gradient and Hessian might also be provided as a ManifoldHessianObjective

Keyword arguments

  • σ=100.0 / sqrt(manifold_dimension(M): initial regularization parameter
  • σmin=1e-10: minimal regularization value $σ_{\min}$
  • η1=0.1: lower model success threshold
  • η2=0.9: upper model success threshold
  • γ1=0.1: regularization reduction factor (for the success case)
  • γ2=2.0: regularization increment factor (for the non-success case)
  • evaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.
  • initial_tangent_vector=zero_vector(M, p): initialize any tangent vector data,
  • maxIterLanczos=200: a shortcut to set the stopping criterion in the sub solver,
  • ρ_regularization=1e3: a regularization to avoid dividing by zero for small values of cost and model
  • retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions:
  • stopping_criterion=StopAfterIteration(40)|StopWhenGradientNormLess(1e-9)|StopWhenAllLanczosVectorsUsed(maxIterLanczos): a functor indicating that the stopping criterion is fulfilled
  • sub_kwargs=(;): a named tuple of keyword arguments that are passed to decorate_objective! of the sub solver's objective, the decorate_state! of the sub solver's state, and the sub state constructor itself.
  • sub_objective=nothing: a shortcut to modify the objective of the subproblem used within the sub_problem= keyword. By default, this is initialized as an AdaptiveRagularizationWithCubicsModelObjective, which can further be decorated by using the sub_kwargs= keyword.
  • sub_state=LanczosState(M, copy(M,p)): a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.
  • sub_problem=DefaultManoptProblem(M, sub_objective): specify a problem for a solver or a closed form solution function, which can be allocating or in-place.

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.

If you provide the ManifoldGradientObjective directly, the evaluation= keyword is ignored. The decorations are still applied to the objective.

If you activate tutorial mode (cf. is_tutorial_mode), this solver provides additional debug warnings.

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source
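For illustration, a minimal call might look as follows. This is a hedged sketch, assuming Manopt.jl and Manifolds.jl are available; the Rayleigh-quotient cost, its derivatives, and the matrix A are made up for the example:

```julia
using Manopt, Manifolds, LinearAlgebra, Random

Random.seed!(42)
n = 32
A = Symmetric(randn(n, n))
M = Sphere(n - 1)

# Rayleigh quotient: minimizing f over the sphere yields an eigenvector
# to the smallest eigenvalue of A
f(M, p) = p' * A * p
# Riemannian gradient: Euclidean gradient 2Ap projected onto T_p M
grad_f(M, p) = 2 * (A * p - (p' * A * p) * p)
# Riemannian Hessian along X ∈ T_p M
Hess_f(M, p, X) = 2 * (A * X - (p' * A * X) * p - (p' * A * p) * X)

p0 = rand(M)
q = adaptive_regularization_with_cubics(M, f, grad_f, Hess_f, p0)
```

Here grad_f and Hess_f are obtained by projecting the Euclidean derivatives onto the tangent spaces of the sphere.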

State

Manopt.AdaptiveRegularizationStateType
AdaptiveRegularizationState{P,T} <: AbstractHessianSolverState

A state for the adaptive_regularization_with_cubics solver.

Fields

  • η1, η2: bounds for evaluating the regularization parameter
  • γ1, γ2: shrinking and expansion factors for regularization parameter σ
  • H: the current Hessian evaluation
  • s: the current solution from the subsolver
  • p::P: a point on the manifold $\mathcal M$ storing the current iterate
  • q: a point for the candidates to evaluate model and ρ
  • X::T: a tangent vector at the point $p$ on the manifold $\mathcal M$ storing the gradient at the current iterate
  • s: the tangent vector step resulting from minimizing the model problem in the tangent space $T_{p}\mathcal M$
  • σ: the current cubic regularization parameter
  • σmin: lower bound for the cubic regularization parameter
  • ρ_regularization: regularization parameter for computing ρ. When approaching convergence ρ may be difficult to compute with numerator and denominator approaching zero. Regularizing the ratio lets ρ go to 1 near convergence.
  • ρ: the current regularized ratio of actual improvement and model improvement.
  • ρ_denominator: a value to store the denominator from the computation of ρ to allow for a warning or error when this value is non-positive.
  • retraction_method::AbstractRetractionMethod: a retraction $\operatorname{retr}$ to use, see the section on retractions
  • stop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled
  • sub_problem::Union{AbstractManoptProblem, F}: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.
  • sub_state::Union{AbstractManoptProblem, F}: a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.

Furthermore the following internal fields are defined

Constructor

AdaptiveRegularizationState(M, sub_problem, sub_state; kwargs...)

Construct the solver state with all fields stated as keyword arguments and the following defaults

Keyword arguments

  • η1=0.1
  • η2=0.9
  • γ1=0.1
  • γ2=2.0
  • σ=100/manifold_dimension(M)
  • σmin=1e-7
  • ρ_regularization=1e3
  • evaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.
  • p=rand(M): a point on the manifold $\mathcal M$
  • retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions
  • stopping_criterion=StopAfterIteration(100): a functor indicating that the stopping criterion is fulfilled
  • X=zero_vector(M, p): a tangent vector at the point $p$ on the manifold $\mathcal M$
source

Sub solvers

There are several ways to approach the subsolver. The default is the first one.

Lanczos iteration

Manopt.LanczosStateType
LanczosState{P,T,SC,B,I,R,TM,V,Y} <: AbstractManoptSolverState

Solve the adaptive regularized subproblem with a Lanczos iteration

Fields

  • stop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled
  • stop_newton::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled, used for the inner Newton iteration
  • σ: the current regularization parameter
  • X: the Iterate
  • Lanczos_vectors: the obtained Lanczos vectors
  • tridig_matrix: the tridiagonal coefficient matrix T
  • coefficients: the coefficients $y_1,...y_k$ that determine the solution
  • Hp: a temporary tangent vector containing the evaluation of the Hessian
  • Hp_residual: a temporary tangent vector containing the residual to the Hessian
  • S: the current obtained / approximated solution

Constructor

LanczosState(TpM::TangentSpace; kwargs...)

Keyword arguments

  • X=zero_vector(M, p): a tangent vector at the point $p$ on the manifold $\mathcal M$ as the iterate
  • maxIterLanczos=200: shortcut to set the maximal number of iterations in the stopping_criterion=
  • θ=0.5: set the parameter in the StopWhenFirstOrderProgress within the default stopping_criterion=.
  • stopping_criterion=StopAfterIteration(maxIterLanczos)|StopWhenFirstOrderProgress(θ): a functor indicating that the stopping criterion is fulfilled
  • stopping_criterion_newton=StopAfterIteration(200): a functor indicating that the stopping criterion is fulfilled used for the inner Newton iteration
  • σ=10.0: specify the regularization parameter
source

(Conjugate) gradient descent

There is a generic objective that implements the sub problem

Manopt.AdaptiveRagularizationWithCubicsModelObjectiveType
AdaptiveRagularizationWithCubicsModelObjective

A model for the adaptive regularization with Cubics

\[m(X) = f(p) + ⟨\operatorname{grad} f(p), X ⟩_p + \frac{1}{2} ⟨\operatorname{Hess} f(p)[X], X⟩_p + \frac{σ}{3} \lVert X \rVert^3,\]

cf. Eq. (33) in [ABBC20]

Fields

Constructors

AdaptiveRagularizationWithCubicsModelObjective(mho, σ=1.0)

with either an AbstractManifoldHessianObjective objective or a decorator containing such an objective.

source

Since the sub problem is given on the tangent space, you have to provide

arc_obj = AdaptiveRagularizationWithCubicsModelObjective(mho, σ)
sub_problem = DefaultProblem(TangentSpaceAt(M,p), arc_obj)

where mho is the Hessian objective of f to solve. Then use this for the sub_problem keyword and use your favourite gradient-based solver for the sub_state keyword, for example a ConjugateGradientDescentState.
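Putting these pieces together, a sketch might look like the following; M, p, the Hessian objective mho, and the parameter σ are assumed to be defined already, and the exact constructor names should be checked against the current API:

```julia
# a sketch: a manifold M, a point p, a Hessian objective mho for the
# cost f, and a regularization parameter σ are assumed to be given
arc_obj = AdaptiveRagularizationWithCubicsModelObjective(mho, σ)
sub_problem = DefaultManoptProblem(TangentSpaceAt(M, p), arc_obj)
# then pass sub_problem=sub_problem to the solver call, together with a
# gradient-based state, for example a ConjugateGradientDescentState,
# via the sub_state= keyword
```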

Additional stopping criteria

Manopt.StopWhenAllLanczosVectorsUsedType
StopWhenAllLanczosVectorsUsed <: StoppingCriterion

When an inner iteration has used up all Lanczos vectors, this stopping criterion serves as a fallback / security stop to avoid accessing a non-existing field in the array allocated for the vectors.

Note that this stopping criterion is (for now) only implemented for an AdaptiveRegularizationState using a LanczosState subsolver.

Fields

  • maxLanczosVectors: maximal number of Lanczos vectors
  • at_iteration indicates at which iteration (including i=0) the stopping criterion was fulfilled and is -1 while it is not fulfilled.

Constructor

StopWhenAllLanczosVectorsUsed(maxLanczosVectors::Int)
source
Manopt.StopWhenFirstOrderProgressType
StopWhenFirstOrderProgress <: StoppingCriterion

A stopping criterion related to the Riemannian adaptive regularization with cubics (ARC) solver indicating that the model function at the current (outer) iterate,

\[m_k(X) = f(p_k) + ⟨X, \operatorname{grad} f(p^{(k)})⟩ + \frac{1}{2}⟨X, \operatorname{Hess} f(p^{(k)})[X]⟩ + \frac{σ_k}{3}\lVert X \rVert^3\]

defined on the tangent space $T_{p}\mathcal M$ fulfills at the current iterate $X_k$ that

\[m(X_k) \leq m(0) \quad\text{ and }\quad \lVert \operatorname{grad} m(X_k) \rVert ≤ θ \lVert X_k \rVert^2\]

Fields

  • θ: the factor $θ$ in the second condition
  • at_iteration::Int: an integer indicating at which iteration the stopping criterion last indicated to stop, which might also be before the solver started (0). Any negative value indicates that this was not yet the case.

Constructor

StopWhenFirstOrderProgress(θ)
source
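As with other stopping criteria in Manopt.jl, these can be combined with |. A sketch mirroring the solver's default stopping_criterion= from above:

```julia
using Manopt

# combine an iteration cap, a gradient-norm tolerance, and the
# Lanczos-specific fallback criterion
maxIterLanczos = 200
stopping_criterion = StopAfterIteration(40) |
    StopWhenGradientNormLess(1e-9) |
    StopWhenAllLanczosVectorsUsed(maxIterLanczos)
```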


Technical details

The adaptive_regularization_with_cubics solver requires the following functions of a manifold to be available:

  • A retract!(M, q, p, X); it is recommended to set the default_retraction_method to a favourite retraction. If this default is set, a retraction_method= does not have to be specified.
  • if you do not provide an initial regularization parameter σ, a manifold_dimension is required.
  • By default the tangent vector storing the gradient is initialized calling zero_vector(M,p).
  • inner(M, p, X, Y) is used within the algorithm step

Furthermore, within the Lanczos subsolver, generating a random tangent vector (at p) using rand!(M, X; vector_at=p) in place of X is required.

Literature

[ABBC20]
N. Agarwal, N. Boumal, B. Bullins and C. Cartis. Adaptive regularization with cubics on manifolds. Mathematical Programming (2020).

Alternating gradient descent

Manopt.alternating_gradient_descentFunction
alternating_gradient_descent(M::ProductManifold, f, grad_f, p=rand(M))
 alternating_gradient_descent(M::ProductManifold, ago::ManifoldAlternatingGradientObjective, p)
 alternating_gradient_descent!(M::ProductManifold, f, grad_f, p)
 alternating_gradient_descent!(M::ProductManifold, ago::ManifoldAlternatingGradientObjective, p)

perform an alternating gradient descent. This can be done in-place of the start point p

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • f: a cost function $f: \mathcal M→ ℝ$ implemented as (M, p) -> v
  • grad_f: a gradient, which can be given in one of two forms
    • a single function returning an ArrayPartition from RecursiveArrayTools.jl, or
    • a vector of functions, each returning one component of the whole gradient
  • p: a point on the manifold $\mathcal M$

Keyword arguments

  • evaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.
  • evaluation_order=:Linear: whether to use a randomly permuted sequence (:FixedRandom), a per cycle permuted sequence (:Random) or the default :Linear one.
  • inner_iterations=5: how many gradient steps to take in a component before alternating to the next
  • stopping_criterion=StopAfterIteration(1000)): a functor indicating that the stopping criterion is fulfilled
  • stepsize=ArmijoLinesearch(): a functor inheriting from Stepsize to determine a step size
  • order=[1:n]: the initial permutation, where n is the number of gradients in grad_f.
  • retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions

Output

usually the obtained (approximate) minimizer, see get_solver_return for details

Note

The input of each of the (component) gradients is still the whole vector X; all but the ith input component are assumed to be fixed, and just the ith component's gradient is computed / returned.

source
Manopt.alternating_gradient_descent!Function
alternating_gradient_descent(M::ProductManifold, f, grad_f, p=rand(M))
 alternating_gradient_descent(M::ProductManifold, ago::ManifoldAlternatingGradientObjective, p)
 alternating_gradient_descent!(M::ProductManifold, f, grad_f, p)
 alternating_gradient_descent!(M::ProductManifold, ago::ManifoldAlternatingGradientObjective, p)

perform an alternating gradient descent. This can be done in-place of the start point p

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • f: a cost function $f: \mathcal M→ ℝ$ implemented as (M, p) -> v
  • grad_f: a gradient, which can be given in one of two forms
    • a single function returning an ArrayPartition from RecursiveArrayTools.jl, or
    • a vector of functions, each returning one component of the whole gradient
  • p: a point on the manifold $\mathcal M$

Keyword arguments

  • evaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.
  • evaluation_order=:Linear: whether to use a randomly permuted sequence (:FixedRandom), a per cycle permuted sequence (:Random) or the default :Linear one.
  • inner_iterations=5: how many gradient steps to take in a component before alternating to the next
  • stopping_criterion=StopAfterIteration(1000)): a functor indicating that the stopping criterion is fulfilled
  • stepsize=ArmijoLinesearch(): a functor inheriting from Stepsize to determine a step size
  • order=[1:n]: the initial permutation, where n is the number of gradients in grad_f.
  • retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions

Output

usually the obtained (approximate) minimizer, see get_solver_return for details

Note

The input of each of the (component) gradients is still the whole vector X; all but the ith input component are assumed to be fixed, and just the ith component's gradient is computed / returned.

source
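A sketch of a call might look as follows; the alignment cost and its component gradients are made up for the example, and the component access p[M, i] assumes the ProductManifold point indexing of Manifolds.jl:

```julia
using Manopt, Manifolds, LinearAlgebra, RecursiveArrayTools

# a product of two spheres; points have two components p[M, 1], p[M, 2]
M = Sphere(2) × Sphere(2)

# made-up cost: align the two unit vectors
f(M, p) = -dot(p[M, 1], p[M, 2])

# one gradient function per component; each receives the whole point p
# and returns the projected Euclidean gradient of its component
grad_f = [
    (M, p) -> -(p[M, 2] - dot(p[M, 1], p[M, 2]) * p[M, 1]),
    (M, p) -> -(p[M, 1] - dot(p[M, 1], p[M, 2]) * p[M, 2]),
]

q = alternating_gradient_descent(M, f, grad_f, rand(M))
```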

State

Manopt.AlternatingGradientDescentStateType
AlternatingGradientDescentState <: AbstractGradientDescentSolverState

Store the fields for an alternating gradient descent algorithm, see also alternating_gradient_descent.

Fields

  • direction::DirectionUpdateRule
  • evaluation_order::Symbol: whether to use a randomly permuted sequence (:FixedRandom), a per cycle newly permuted sequence (:Random) or the default :Linear evaluation order.
  • inner_iterations: how many gradient steps to take in a component before alternating to the next
  • order: the current permutation
  • retraction_method::AbstractRetractionMethod: a retraction $\operatorname{retr}$ to use, see the section on retractions
  • stepsize::Stepsize: a functor inheriting from Stepsize to determine a step size
  • stop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled
  • p::P: a point on the manifold $\mathcal M$ storing the current iterate
  • X::T: a tangent vector at the point $p$ on the manifold $\mathcal M$ storing the gradient at the current iterate
  • k, i: internal counters for the outer and inner iterations, respectively.

Constructors

AlternatingGradientDescentState(M::AbstractManifold; kwargs...)

Keyword arguments

  • inner_iterations=5
  • p=rand(M): a point on the manifold $\mathcal M$
  • order_type::Symbol=:Linear
  • order::Vector{<:Int}=Int[]
  • stopping_criterion=StopAfterIteration(1000): a functor indicating that the stopping criterion is fulfilled
  • stepsize=default_stepsize(M, AlternatingGradientDescentState): a functor inheriting from Stepsize to determine a step size
  • X=zero_vector(M, p): a tangent vector at the point $p$ on the manifold $\mathcal M$

Generate the options for point p, where inner_iterations, order_type, order, retraction_method, stopping_criterion, and stepsize are keyword arguments.

source

Additionally, the options share a DirectionUpdateRule, which chooses the current component, so they can be decorated further; the innermost one should always be the following one, though.

Manopt.AlternatingGradientFunction
AlternatingGradient(; kwargs...)
AlternatingGradient(M::AbstractManifold; kwargs...)

Specify that a gradient-based method should only update parts of the gradient in order to perform an alternating gradient descent.

Keyword arguments

  • initial_gradient=zero_vector(M, p): a tangent vector at the point $p$ on the manifold $\mathcal M$
  • p=rand(M): a point on the manifold $\mathcal M$ to specify the initial value
Info

This function generates a ManifoldDefaultsFactory for AlternatingGradientRule. For default values that depend on the manifold, this factory postpones the construction until the manifold from, for example, a corresponding AbstractManoptSolverState is available.

source
Manopt.AlternatingGradientRuleType
AlternatingGradientRule <: AbstractGradientGroupDirectionRule

Create a functor (problem, state, k) -> (s, X) to evaluate the alternating gradient, that is, alternating between the components of the gradient, with a field for partial in-place evaluation of the gradient.

Fields

  • X::T: a tangent vector at the point $p$ on the manifold $\mathcal M$

Constructor

AlternatingGradientRule(M::AbstractManifold; p=rand(M), X=zero_vector(M, p))

Initialize the alternating gradient processor with tangent vector type of X, where both M and p are just help variables.

See also

alternating_gradient_descent, AlternatingGradient

source


Technical details

The alternating_gradient_descent solver requires the following functions of a manifold to be available

  • The problem has to be phrased on a ProductManifold, to be able to alternate between parts of the input.

  • A retract!(M, q, p, X); it is recommended to set the default_retraction_method to a favourite retraction. If this default is set, a retraction_method= does not have to be specified.
  • By default alternating gradient descent uses ArmijoLinesearch which requires max_stepsize(M) to be set and an implementation of inner(M, p, X).
  • By default the tangent vector storing the gradient is initialized calling zero_vector(M,p).
alternating_gradient_descent!(M::ProductManifold, ago::ManifoldAlternatingGradientObjective, p)

perform an alternating gradient descent. This can be done in-place of the start point p.

Input

Keyword arguments

Output

usually the obtained (approximate) minimizer, see get_solver_return for details

Note

The input of each of the (component) gradients is still the whole vector X; all components other than the ith input component are assumed to be fixed, and only the ith component's gradient is computed / returned.

source
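The alternating scheme can be sketched, independently of Manopt.jl, as a Euclidean block-coordinate gradient descent. Everything below (the function name, the separable toy cost) is illustrative only and not part of the Manopt API:

```python
import numpy as np

def alternating_gradient_descent(grads, x, stepsize=0.1,
                                 inner_iterations=5, outer_iterations=100):
    """Minimize by cycling through component gradients.

    grads[i](x) must return the gradient with respect to block i,
    evaluated at the full iterate x (all other blocks held fixed).
    """
    x = [np.array(xi, dtype=float) for xi in x]
    for _ in range(outer_iterations):
        for i in range(len(x)):               # alternate over the components
            for _ in range(inner_iterations): # a few steps per component
                x[i] -= stepsize * grads[i](x)
    return x

# separable toy cost: f(a, b) = ||a - 1||^2 + ||b + 2||^2
grads = [
    lambda x: 2 * (x[0] - 1.0),  # gradient w.r.t. the first block
    lambda x: 2 * (x[1] + 2.0),  # gradient w.r.t. the second block
]
a, b = alternating_gradient_descent(grads, [np.zeros(2), np.zeros(2)])
```

Each component gradient receives the full iterate, mirroring the note above that only the ith component is updated while the others stay fixed.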

State

Manopt.AlternatingGradientDescentStateType
AlternatingGradientDescentState <: AbstractGradientDescentSolverState

Store the fields for an alternating gradient descent algorithm, see also alternating_gradient_descent.

Fields

  • direction::DirectionUpdateRule
  • evaluation_order::Symbol: whether to use a randomly permuted sequence (:FixedRandom), a per-cycle newly permuted sequence (:Random), or the default :Linear evaluation order.
  • inner_iterations: how many gradient steps to take in a component before alternating to the next
  • order: the current permutation
  • retraction_method::AbstractRetractionMethod: a retraction $\operatorname{retr}$ to use, see the section on retractions
  • stepsize::Stepsize: a functor inheriting from Stepsize to determine a step size
  • stop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled
  • p::P: a point on the manifold $\mathcal M$ storing the current iterate
  • X::T: a tangent vector at the point $p$ on the manifold $\mathcal M$ storing the gradient at the current iterate
  • k, i: internal counters for the outer and inner iterations, respectively.

Constructors

AlternatingGradientDescentState(M::AbstractManifold; kwargs...)

Keyword arguments

  • inner_iterations=5
  • p=rand(M): a point on the manifold $\mathcal M$
  • order_type::Symbol=:Linear
  • order::Vector{<:Int}=Int[]
  • stopping_criterion=StopAfterIteration(1000): a functor indicating that the stopping criterion is fulfilled
  • stepsize=default_stepsize(M, AlternatingGradientDescentState): a functor inheriting from Stepsize to determine a step size
  • X=zero_vector(M, p): a tangent vector at the point $p$ on the manifold $\mathcal M$

Generate the options for point p, where inner_iterations, order_type, order, retraction_method, stopping_criterion, and stepsize are keyword arguments.

source
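The three evaluation_order values described above can be mimicked in a few lines. The helper below is a hypothetical illustration of the order semantics, not Manopt code:

```python
import random

def evaluation_order(n, order_type, rng=random.Random(42)):
    """Return a factory producing the component order for each outer cycle."""
    if order_type == "Linear":
        fixed = list(range(n))               # 0, 1, ..., n-1 in every cycle
        return lambda: list(fixed)
    if order_type == "FixedRandom":
        fixed = rng.sample(range(n), n)      # permuted once, then reused
        return lambda: list(fixed)
    if order_type == "Random":
        return lambda: rng.sample(range(n), n)  # new permutation per cycle
    raise ValueError(order_type)

linear = evaluation_order(4, "Linear")
fixed = evaluation_order(4, "FixedRandom")
rand_order = evaluation_order(4, "Random")
```

Calling the returned factory once per outer iteration yields the component visiting order for that cycle.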


Augmented Lagrangian method

Manopt.augmented_Lagrangian_method!Function
augmented_Lagrangian_method(M, f, grad_f, p=rand(M); kwargs...)
 augmented_Lagrangian_method(M, cmo::ConstrainedManifoldObjective, p=rand(M); kwargs...)
 augmented_Lagrangian_method!(M, f, grad_f, p; kwargs...)
 augmented_Lagrangian_method!(M, cmo::ConstrainedManifoldObjective, p; kwargs...)

perform the augmented Lagrangian method (ALM) [LB19]. This method can work in-place of p.

The aim of the ALM is to find the solution of the constrained optimisation task

\[\begin{aligned}
\min_{p ∈ \mathcal M} & f(p)\\
\text{subject to } & g_i(p) ≤ 0 \text{ for } i=1,…,n,\\
& h_j(p) = 0 \text{ for } j=1,…,m,
\end{aligned}\]

where $\mathcal M$ is a Riemannian manifold, and $f$, $\{g_i\}_{i=1}^{n}$ and $\{h_j\}_{j=1}^{m}$ are twice continuously differentiable functions from $\mathcal M$ to ℝ. In every step $k$ of the algorithm, the AugmentedLagrangianCost $\mathcal L_{ρ^{(k)}}(p, μ^{(k)}, λ^{(k)})$ is minimized on $\mathcal M$, where $μ^{(k)} ∈ ℝ^n$ and $λ^{(k)} ∈ ℝ^m$ are the current iterates of the Lagrange multipliers and $ρ^{(k)}$ is the current penalty parameter.

The Lagrange multipliers are then updated by

\[λ_j^{(k+1)} =\operatorname{clip}_{[λ_{\min},λ_{\max}]} (λ_j^{(k)} + ρ^{(k)} h_j(p^{(k+1)})) \text{ for all } j=1,…,m,\]

and

\[μ_i^{(k+1)} =\operatorname{clip}_{[0,μ_{\max}]} (μ_i^{(k)} + ρ^{(k)} g_i(p^{(k+1)})) \text{ for all } i=1,…,n,\]

where $λ_{\text{min}} ≤ λ_{\text{max}}$ and $μ_{\text{max}}$ are the multiplier boundaries.

Next, the accuracy tolerance $ϵ$ is updated as

\[ϵ^{(k)}=\max\{ϵ_{\min}, θ_ϵ ϵ^{(k-1)}\},\]

where $ϵ_{\text{min}}$ is the lowest value $ϵ$ is allowed to become and $θ_ϵ ∈ (0,1)$ is a constant scaling factor.

Last, the penalty parameter $ρ$ is updated as follows: with

\[σ^{(k)}=\max_{j=1,…,m, i=1,…,n} \{\|h_j(p^{(k)})\|, \|\max\{g_i(p^{(k)}), -\frac{μ_i^{(k-1)}}{ρ^{(k-1)}} \}\| \}.\]

ρ is updated as

\[ρ^{(k)} = \begin{cases} ρ^{(k-1)}/θ_ρ, & \text{if } σ^{(k)}\leq θ_ρ σ^{(k-1)}, \\ ρ^{(k-1)}, & \text{else,} \end{cases}\]

where $θ_ρ ∈ (0,1)$ is a constant scaling factor.
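The update rules above translate almost line by line into code. The Python sketch below mirrors the stated formulas (including the case distinction for ρ exactly as written) with illustrative default values; it is not the Manopt implementation:

```python
import numpy as np

def alm_parameter_update(h_val, g_val, lam, mu, rho, eps, sigma_prev,
                         lam_min=-20.0, lam_max=20.0, mu_max=20.0,
                         eps_min=1e-6, theta_eps=0.9, theta_rho=0.3):
    """One round of the ALM multiplier / tolerance / penalty updates.

    theta_eps=0.9 is an illustrative stand-in for (eps_min/eps)^eps_exponent.
    """
    h_val = np.asarray(h_val, dtype=float)
    g_val = np.asarray(g_val, dtype=float)
    # sigma measures the constraint violation, using the previous mu and rho
    sigma = max(np.max(np.abs(h_val), initial=0.0),
                np.max(np.abs(np.maximum(g_val, -mu / rho)), initial=0.0))
    # clipped multiplier updates
    lam = np.clip(lam + rho * h_val, lam_min, lam_max)
    mu = np.clip(mu + rho * g_val, 0.0, mu_max)
    # tighten the accuracy tolerance, bounded below by eps_min
    eps = max(eps_min, theta_eps * eps)
    # penalty update, mirroring the case distinction stated above
    if sigma <= theta_rho * sigma_prev:
        rho = rho / theta_rho
    return lam, mu, rho, eps, sigma

lam, mu, rho, eps, sigma = alm_parameter_update(
    [0.5], [-0.3], np.zeros(1), np.zeros(1), 1.0, 1e-3, float("inf"))
```

One call corresponds to one outer iteration after the sub problem has been solved for the new iterate.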

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • f: a cost function $f: \mathcal M→ ℝ$ implemented as (M, p) -> v
  • grad_f: the (Riemannian) gradient $\operatorname{grad} f: \mathcal M → T_{p}\mathcal M$ of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place

Optional (if not called with the ConstrainedManifoldObjective cmo)

  • g=nothing: the inequality constraints
  • h=nothing: the equality constraints
  • grad_g=nothing: the gradient of the inequality constraints
  • grad_h=nothing: the gradient of the equality constraints

Note that one of the pairs (g, grad_g) or (h, grad_h) has to be provided. Otherwise the problem is not constrained and a better solver would be for example quasi_Newton.

Keyword Arguments

  • evaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.

  • ϵ=1e-3: the accuracy tolerance

  • ϵ_min=1e-6: the lower bound for the accuracy tolerance

  • ϵ_exponent=1/100: exponent of the ϵ update factor; also 1/number of iterations until maximal accuracy is needed to end algorithm naturally

  • equality_constraints=nothing: the number $n$ of equality constraints. If not provided, a call to the gradient of h is performed to estimate these.

  • gradient_range=nothing: specify how both gradients of the constraints are represented

  • gradient_equality_range=gradient_range: specify how gradients of the equality constraints are represented, see VectorGradientFunction.

  • gradient_inequality_range=gradient_range: specify how gradients of the inequality constraints are represented, see VectorGradientFunction.

  • inequality_constraints=nothing: the number $m$ of inequality constraints. If not provided, a call to the gradient of g is performed to estimate these.

  • λ=ones(size(h(M,x),1)): the Lagrange multiplier with respect to the equality constraints

  • λ_max=20.0: an upper bound for the Lagrange multiplier belonging to the equality constraints

  • λ_min=- λ_max: a lower bound for the Lagrange multiplier belonging to the equality constraints

  • μ=ones(size(g(M,x),1)): the Lagrange multiplier with respect to the inequality constraints

  • μ_max=20.0: an upper bound for the Lagrange multiplier belonging to the inequality constraints

  • ρ=1.0: the penalty parameter

  • τ=0.8: factor for the improvement of the evaluation of the penalty parameter

  • θ_ρ=0.3: the scaling factor of the penalty parameter

  • θ_ϵ=(ϵ_min / ϵ)^(ϵ_exponent): the scaling factor of the exactness

  • sub_cost=AugmentedLagrangianCost(cmo, ρ, μ, λ): use the augmented Lagrangian cost, based on the ConstrainedManifoldObjective built from the functions provided. This is used to define the sub_problem= keyword and hence has no effect if you set sub_problem directly.

  • sub_grad=AugmentedLagrangianGrad(cmo, ρ, μ, λ): use the augmented Lagrangian gradient, based on the ConstrainedManifoldObjective built from the functions provided. This is used to define the sub_problem= keyword and hence has no effect if you set sub_problem directly.

  • sub_kwargs=(;): a named tuple of keyword arguments that are passed to decorate_objective! of the sub solver's objective, the decorate_state! of the sub solver's state, and the sub state constructor itself.

  • stopping_criterion=StopAfterIteration(300) | (StopWhenSmallerOrEqual(:ϵ, ϵ_min) & StopWhenChangeLess(1e-10)): a functor indicating that the stopping criterion is fulfilled

  • sub_problem=DefaultManoptProblem(M, sub_objective): specify a problem for a solver or a closed form solution function, which can be allocating or in-place.

  • sub_state=QuasiNewtonState: a state to specify the sub solver to use. For a closed form solution, this indicates the type of function. As the quasi-Newton method, the QuasiNewtonLimitedMemoryDirectionUpdate with InverseBFGS is used.

  • sub_stopping_criterion::StoppingCriterion=StopAfterIteration(300) | StopWhenGradientNormLess(ϵ) | StopWhenStepsizeLess(1e-8): the stopping criterion for the sub solver.

For the ranges of the constraints' gradient, other power manifold tangent space representations, mainly the ArrayPowerRepresentation can be used if the gradients can be computed more efficiently in that representation.

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source

State


Manopt.AugmentedLagrangianMethodStateType
AugmentedLagrangianMethodState{P,T} <: AbstractManoptSolverState

Describes the augmented Lagrangian method, with

Fields

a default value is given in brackets if a parameter can be left out in initialization.

  • ϵ: the accuracy tolerance
  • ϵ_min: the lower bound for the accuracy tolerance
  • λ: the Lagrange multiplier with respect to the equality constraints
  • λ_max: an upper bound for the Lagrange multiplier belonging to the equality constraints
  • λ_min: a lower bound for the Lagrange multiplier belonging to the equality constraints
  • p::P: a point on the manifold $\mathcal M$ storing the current iterate
  • penalty: evaluation of the current penalty term, initialized to Inf.
  • μ: the Lagrange multiplier with respect to the inequality constraints
  • μ_max: an upper bound for the Lagrange multiplier belonging to the inequality constraints
  • ρ: the penalty parameter
  • sub_problem::Union{AbstractManoptProblem, F}: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.
  • sub_state::Union{AbstractManoptSolverState, F}: a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.
  • τ: factor for the improvement of the evaluation of the penalty parameter
  • θ_ρ: the scaling factor of the penalty parameter
  • θ_ϵ: the scaling factor of the accuracy tolerance
  • stop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled

Constructor

AugmentedLagrangianMethodState(M::AbstractManifold, co::ConstrainedManifoldObjective,
     sub_problem, sub_state; kwargs...
 )

construct the augmented Lagrangian method state, where the manifold M and the ConstrainedManifoldObjective co are used for manifold- or objective-specific defaults.

AugmentedLagrangianMethodState(M::AbstractManifold, co::ConstrainedManifoldObjective,
     sub_problem; evaluation=AllocatingEvaluation(), kwargs...
)

construct the augmented Lagrangian method state, where the manifold M and the ConstrainedManifoldObjective co are used for manifold- or objective-specific defaults, and sub_problem is a closed form solution with evaluation as type of evaluation.

Keyword arguments

The following keyword arguments are available to initialise the corresponding fields.

See also

augmented_Lagrangian_method

source

Helping functions

Manopt.AugmentedLagrangianCostType
AugmentedLagrangianCost{CO,R,T}

Stores the parameters $ρ ∈ ℝ$, $μ ∈ ℝ^m$, $λ ∈ ℝ^n$ of the augmented Lagrangian associated to the ConstrainedManifoldObjective co.

This struct is also a functor (M,p) -> v that can be used as a cost function within a solver, based on the internal ConstrainedManifoldObjective it computes

\[\mathcal L_\rho(p, μ, λ) = f(p) + \frac{ρ}{2} \biggl( \sum_{j=1}^n \Bigl( h_j(p) + \frac{λ_j}{ρ} \Bigr)^2 + \sum_{i=1}^m \max\Bigl\{ 0, \frac{μ_i}{ρ} + g_i(p) \Bigr\}^2 \biggr)\]

Fields

  • co::CO, ρ::R, μ::T, λ::T as mentioned in the formula, where $R$ should be the number type used and $T$ the vector type.

Constructor

AugmentedLagrangianCost(co, ρ, μ, λ)
source
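The cost formula above can be checked with a small Euclidean stand-in; augmented_lagrangian_cost below is a hypothetical helper for illustration, not the Manopt functor:

```python
import numpy as np

def augmented_lagrangian_cost(f, g, h, rho, mu, lam):
    """Evaluate L_rho(p, mu, lam) from plain callables, following the
    formula above in the Euclidean case."""
    def cost(p):
        hv = np.asarray(h(p), dtype=float)
        gv = np.asarray(g(p), dtype=float)
        return (f(p)
                + rho / 2 * (np.sum((hv + lam / rho) ** 2)
                             + np.sum(np.maximum(0.0, mu / rho + gv) ** 2)))
    return cost

# toy problem: f(p) = p^2, one equality h(p) = p - 1, one inequality g(p) = -p
L = augmented_lagrangian_cost(
    f=lambda p: p**2,
    g=lambda p: np.array([-p]),
    h=lambda p: np.array([p - 1.0]),
    rho=2.0, mu=np.array([0.0]), lam=np.array([0.0]),
)
c0, c1 = L(0.0), L(1.0)
```

With zero multipliers, the cost at p=0 is dominated by the equality penalty, while at the feasible point p=1 only f contributes.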
Manopt.AugmentedLagrangianGradType
AugmentedLagrangianGrad{CO,R,T} <: AbstractConstrainedFunctor{T}

Stores the parameters $ρ ∈ ℝ$, $μ ∈ ℝ^m$, $λ ∈ ℝ^n$ of the augmented Lagrangian associated to the ConstrainedManifoldObjective co.

This struct is also a functor in both formats

  • (M, p) -> X to compute the gradient in allocating fashion.
  • (M, X, p) -> X to compute the gradient in-place.

Additionally, this gradient accepts a positional last argument to specify the range for the internal gradient call of the constrained objective.

Based on the internal ConstrainedManifoldObjective it computes the gradient $\operatorname{grad} \mathcal L_{ρ}(p, μ, λ)$, see also AugmentedLagrangianCost.

Fields

  • co::CO, ρ::R, μ::T, λ::T as mentioned in the formula, where $R$ should be the number type used and $T$ the vector type.

Constructor

AugmentedLagrangianGrad(co, ρ, μ, λ)
source
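Differentiating the augmented Lagrangian cost gives $\operatorname{grad} \mathcal L_ρ = \operatorname{grad} f + ρ\sum_j (h_j + λ_j/ρ)\operatorname{grad} h_j + ρ\sum_i \max\{0, μ_i/ρ + g_i\}\operatorname{grad} g_i$, which can be sketched in the Euclidean case. All names below are illustrative, not the Manopt API:

```python
import numpy as np

def augmented_lagrangian_grad(grad_f, g, grad_g, h, grad_h, rho, mu, lam):
    """Euclidean sketch of the gradient of the augmented Lagrangian;
    AugmentedLagrangianGrad computes the Riemannian analogue."""
    def grad(p):
        X = np.array(grad_f(p), dtype=float)
        # equality part: rho * (h_j + lam_j/rho) * grad h_j
        for hj, ghj, lj in zip(h(p), grad_h(p), lam):
            X += rho * (hj + lj / rho) * np.asarray(ghj, dtype=float)
        # inequality part: rho * max(0, mu_i/rho + g_i) * grad g_i
        for gi, ggi, mi in zip(g(p), grad_g(p), mu):
            X += rho * max(0.0, mi / rho + gi) * np.asarray(ggi, dtype=float)
        return X
    return grad

# toy problem in R^2: f(p) = ||p||^2, h(p) = [p_1 - 1], g(p) = [-p_2]
gradL = augmented_lagrangian_grad(
    grad_f=lambda p: 2 * p,
    g=lambda p: [-p[1]], grad_g=lambda p: [np.array([0.0, -1.0])],
    h=lambda p: [p[0] - 1.0], grad_h=lambda p: [np.array([1.0, 0.0])],
    rho=2.0, mu=np.array([1.0]), lam=np.array([0.5]),
)
X = gradL(np.zeros(2))
```

At the origin the equality term contributes 2·(−1 + 0.25)·(1, 0) and the active inequality term 2·0.5·(0, −1).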

Technical details

The augmented_Lagrangian_method solver requires the following functions of a manifold to be available

Literature

[LB19]
C. Liu and N. Boumal. Simple algorithms for optimization on Riemannian manifolds with constraints. Applied Mathematics & Optimization (2019), arXiv:1901.10000.

Covariance matrix adaptation evolutionary strategy

The CMA-ES algorithm has been implemented based on [Han23] with basic Riemannian adaptations, related to transport of covariance matrix and its update vectors. Other attempts at adapting CMA-ES to Riemannian optimization include [CFFS10]. The algorithm is suitable for global optimization.

Covariance matrix transport between consecutive mean points is handled by the eigenvector_transport! function, which is based on the idea of transport of matrix eigenvectors.
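The core of one generation can be sketched in a greatly simplified, Euclidean form: sample λ offspring around the current mean, rank them by cost, and recombine the μ best into the new mean. Step-size and covariance adaptation (the "CMA" part) as well as all Riemannian transports are omitted here, and all names and parameter choices are illustrative.

```python
import random

def es_generation(mean, sigma, f, lam=8, mu=4, rng=random.Random(42)):
    """One simplified (mu/mu_w, lambda)-ES generation: sample, rank, recombine."""
    n = len(mean)
    offspring = [[m + sigma * rng.gauss(0.0, 1.0) for m in mean]
                 for _ in range(lam)]
    offspring.sort(key=f)                              # rank by cost
    w = [float(mu - i) for i in range(mu)]             # simple decreasing weights
    ws = sum(w)
    return [sum(w[i] * offspring[i][d] for i in range(mu)) / ws
            for d in range(n)]

f = lambda p: sum(x * x for x in p)                    # sphere test cost
mean = [5.0, -3.0]
for _ in range(40):
    mean = es_generation(mean, 0.5, f)                 # mean drifts towards 0
```

With a fixed σ the mean only approaches the minimizer up to sampling noise; the full algorithm additionally adapts σ and the covariance matrix each generation.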

Manopt.cma_esFunction
cma_es(M, f, p_m=rand(M); σ::Real=1.0, kwargs...)

Perform covariance matrix adaptation evolutionary strategy search for global gradient-free randomized optimization. It is suitable for complicated non-convex functions. It can reasonably be expected to find the global minimum within a 3σ distance of p_m.

Implementation is based on [Han23] with basic adaptations to the Riemannian setting.

Input

  • M: a manifold $\mathcal M$
  • f: a cost function $f: \mathcal M→ℝ$ to find a minimizer $p^*$ for

Keyword arguments

  • p_m=rand(M): an initial point p
  • σ=1.0: initial standard deviation
  • λ=4 + Int(floor(3 * log(manifold_dimension(M)))): population size (can be increased for a more thorough global search but decreasing is not recommended)
  • tol_fun=1e-12: tolerance for the StopWhenPopulationCostConcentrated, similar to the absolute difference between function values at subsequent points
  • tol_x=1e-12: tolerance for the StopWhenPopulationStronglyConcentrated, similar to the absolute difference between subsequent points, but actually computed from distribution parameters.
  • stopping_criterion=default_cma_es_stopping_criterion(M, λ; tol_fun=tol_fun, tol_x=tol_x): a functor indicating that the stopping criterion is fulfilled
  • retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions
  • vector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport $\mathcal T_{⋅←⋅}$ to use, see the section on vector transports
  • basis=DefaultOrthonormalBasis(): basis used to represent the covariance matrix in coordinates
  • rng=default_rng(): random number generator for generating new points on M

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source

State

Manopt.CMAESStateType
CMAESState{P,T} <: AbstractManoptSolverState

State of covariance matrix adaptation evolution strategy.

Fields

  • p::P: a point on the manifold $\mathcal M$ storing the best point found so far
  • p_obj objective value at p
  • μ parent number
  • λ population size
  • μ_eff variance effective selection mass for the mean
  • c_1 learning rate for the rank-one update
  • c_c decay rate for cumulation path for the rank-one update
  • c_μ learning rate for the rank-μ update
  • c_σ decay rate for the cumulation path for the step-size control
  • c_m learning rate for the mean
  • d_σ damping parameter for step-size update
  • population population of the current generation
  • ys_c coordinates of random vectors for the current generation
  • covariance_matrix coordinates of the covariance matrix
  • covariance_matrix_eigen eigen decomposition of covariance_matrix
  • covariance_matrix_cond condition number of covariance_matrix, updated after eigen decomposition
  • best_fitness_current_gen best fitness value of individuals in the current generation
  • median_fitness_current_gen median fitness value of individuals in the current generation
  • worst_fitness_current_gen worst fitness value of individuals in the current generation
  • p_m point around which the search for new candidates is done
  • σ step size
  • p_σ coordinates of a vector in $T_{p_m}\mathcal M$
  • p_c coordinates of a vector in $T_{p_m}\mathcal M$
  • deviations standard deviations of coordinate RNG
  • buffer buffer for random number generation and wmean_y_c of length n_coords
  • e_mv_norm expected value of norm of the n_coords-variable standard normal distribution
  • recombination_weights recombination weights used for updating covariance matrix
  • retraction_method::AbstractRetractionMethod: a retraction $\operatorname{retr}$ to use, see the section on retractions
  • stop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled
  • vector_transport_method::AbstractVectorTransportMethodP: a vector transport $\mathcal T_{⋅←⋅}$ to use, see the section on vector transports
  • basis a real coefficient basis for covariance matrix
  • rng RNG for generating new points

Constructor

CMAESState(
     M::AbstractManifold,
     p_m::P,
     μ::Int,
     ...
     TVTM<:AbstractVectorTransportMethod,
     TB<:AbstractBasis,
     TRng<:AbstractRNG,
}

See also

cma_es

source

Stopping criteria

Manopt.StopWhenBestCostInGenerationConstantType
StopWhenBestCostInGenerationConstant <: StoppingCriterion

Stop if the range of the best objective function values of the last iteration_range generations is zero. This corresponds to the EqualFunValues condition from [Han23].

See also StopWhenPopulationCostConcentrated.

source
Manopt.StopWhenEvolutionStagnatesType
StopWhenEvolutionStagnates{TParam<:Real} <: StoppingCriterion

The best and median fitness in each iteration are tracked over the last 20% of iterations, but at least min_size and at most max_size of them. The solver is stopped if, in both histories, the median of the most recent fraction of values is not better than the median of the oldest fraction.

source
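The median-versus-median comparison above can be sketched as follows (pure Python, names illustrative; the actual criterion tracks both the best-fitness and the median-fitness history and applies the test to both):

```python
from statistics import median

def stagnates(history, fraction=0.3):
    """history: one fitness value per generation, oldest first (minimization).

    Returns True when the median of the most recent `fraction` of the values
    is not better (not smaller) than the median of the oldest `fraction`.
    """
    k = max(1, int(len(history) * fraction))
    return median(history[-k:]) >= median(history[:k])
```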
Manopt.StopWhenPopulationCostConcentratedType
StopWhenPopulationCostConcentrated{TParam<:Real} <: StoppingCriterion

Stop if the range of the best objective function value in the last max_size generations and all function values in the current generation is below tol. This corresponds to TolFun condition from [Han23].

Constructor

StopWhenPopulationCostConcentrated(tol::Real, max_size::Int)
source
Manopt.StopWhenPopulationDivergesType
StopWhenPopulationDiverges{TParam<:Real} <: StoppingCriterion

Stop if σ times the maximum deviation increased by more than tol. This usually indicates a far too small σ, or divergent behavior. This corresponds to the TolXUp condition from [Han23].

source
Manopt.StopWhenPopulationStronglyConcentratedType
StopWhenPopulationStronglyConcentrated{TParam<:Real} <: StoppingCriterion

Stop if the standard deviation in all coordinates is smaller than tol and the norm of σ * p_c is smaller than tol. This corresponds to the TolX condition from [Han23].

Fields

  • tol the tolerance to verify against
  • at_iteration an internal field to indicate at which iteration $i \geq 0$ the tolerance was met.

Constructor

StopWhenPopulationStronglyConcentrated(tol::Real)
source

Technical details

The cma_es solver requires the following functions of a manifold to be available

Internal helpers

You may add new methods to eigenvector_transport! if you know a more optimized implementation for your manifold.


Manopt.eigenvector_transport!Function
eigenvector_transport!(
     M::AbstractManifold,
     matrix_eigen::Eigen,
     p,
     q,
     basis::AbstractBasis,
     vtm::AbstractVectorTransportMethod,
)

Transport the matrix with eigen decomposition matrix_eigen, expanded in basis, from point p to point q on M. matrix_eigen is updated in-place.

(p, matrix_eigen) belongs to the fiber bundle of $B = \mathcal M × SPD(n)$, where n is the (real) dimension of M. The function corresponds to the Ehresmann connection defined by the vector transport vtm of the eigenvectors of matrix_eigen.

source

Literature


diff --git a/dev/solvers/conjugate_gradient_descent/index.html b/dev/solvers/conjugate_gradient_descent/index.html
index 61b87899e1..1a1672af59 100644
--- a/dev/solvers/conjugate_gradient_descent/index.html
+++ b/dev/solvers/conjugate_gradient_descent/index.html

Conjugate gradient descent

Manopt.conjugate_gradient_descentFunction
conjugate_gradient_descent(M, f, grad_f, p=rand(M))
 conjugate_gradient_descent!(M, f, grad_f, p)
 conjugate_gradient_descent(M, gradient_objective, p)
 conjugate_gradient_descent!(M, gradient_objective, p; kwargs...)

Perform a conjugate gradient based descent

\[p_{k+1} = \operatorname{retr}_{p_k} \bigl( s_kδ_k \bigr),\]

where $\operatorname{retr}$ denotes a retraction on the manifold M and one can employ different rules to update the descent direction $δ_k$ based on the last direction $δ_{k-1}$ and both gradients $\operatorname{grad}f(p_k)$ and $\operatorname{grad} f(p_{k-1})$. The Stepsize $s_k$ may be determined by a Linesearch.

Alternatively to f and grad_f you can provide the AbstractManifoldGradientObjective gradient_objective directly.

Available update rules are SteepestDescentCoefficientRule, which yields a gradient_descent, ConjugateDescentCoefficient (the default), DaiYuanCoefficientRule, FletcherReevesCoefficient, HagerZhangCoefficient, HestenesStiefelCoefficient, LiuStoreyCoefficient, and PolakRibiereCoefficient. These can all be combined with a ConjugateGradientBealeRestartRule rule.

They all compute $β_k$ such that this algorithm updates the search direction as

\[δ_k=-\operatorname{grad}f(p_k) + β_k δ_{k-1}\]

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • f: a cost function $f: \mathcal M→ ℝ$ implemented as (M, p) -> v
  • grad_f: the (Riemannian) gradient $\operatorname{grad}f: \mathcal M → T\mathcal M$ of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place
  • p: a point on the manifold $\mathcal M$

Keyword arguments

If you provide the ManifoldGradientObjective directly, the evaluation= keyword is ignored. The decorations are still applied to the objective.

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source
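In the Euclidean case (retraction = addition, vector transport = identity) the scheme above reduces to classical nonlinear CG. The sketch below minimizes the quadratic $f(p) = p_1^2 + 10p_2^2$ with the Fletcher–Reeves coefficient and an exact line search, for which CG terminates after two steps on a 2-D quadratic; all names are illustrative and not part of the Manopt API.

```python
def grad(p):                              # gradient of f(p) = x^2 + 10 y^2
    return [2.0 * p[0], 20.0 * p[1]]

def apply_hessian(d):                     # Hessian of f is diag(2, 20)
    return [2.0 * d[0], 20.0 * d[1]]

def inner(a, b):
    return sum(x * y for x, y in zip(a, b))

def cg_descent(p, iters=2):
    X = grad(p)
    delta = [-x for x in X]                                   # initial direction
    for _ in range(iters):
        # exact line search along delta for a quadratic cost
        s = -inner(X, delta) / inner(delta, apply_hessian(delta))
        p = [pi + s * di for pi, di in zip(p, delta)]         # "retraction" is +
        X_new = grad(p)
        beta = inner(X_new, X_new) / inner(X, X)              # Fletcher-Reeves
        delta = [-xn + beta * d for xn, d in zip(X_new, delta)]
        X = X_new
    return p

p_star = cg_descent([3.0, 1.0])           # converges to the minimizer (0, 0)
```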
Manopt.conjugate_gradient_descent!Function
conjugate_gradient_descent(M, f, grad_f, p=rand(M))
 conjugate_gradient_descent!(M, f, grad_f, p)
 conjugate_gradient_descent(M, gradient_objective, p)
 conjugate_gradient_descent!(M, gradient_objective, p; kwargs...)

Perform a conjugate gradient based descent

\[p_{k+1} = \operatorname{retr}_{p_k} \bigl( s_kδ_k \bigr),\]

where $\operatorname{retr}$ denotes a retraction on the manifold M and one can employ different rules to update the descent direction $δ_k$ based on the last direction $δ_{k-1}$ and both gradients $\operatorname{grad}f(p_k)$ and $\operatorname{grad} f(p_{k-1})$. The Stepsize $s_k$ may be determined by a Linesearch.

Alternatively to f and grad_f you can provide the AbstractManifoldGradientObjective gradient_objective directly.

Available update rules are SteepestDescentCoefficientRule, which yields a gradient_descent, ConjugateDescentCoefficient (the default), DaiYuanCoefficientRule, FletcherReevesCoefficient, HagerZhangCoefficient, HestenesStiefelCoefficient, LiuStoreyCoefficient, and PolakRibiereCoefficient. These can all be combined with a ConjugateGradientBealeRestartRule rule.

They all compute $β_k$ such that this algorithm updates the search direction as

\[δ_k=-\operatorname{grad}f(p_k) + β_k δ_{k-1}\]

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • f: a cost function $f: \mathcal M→ ℝ$ implemented as (M, p) -> v
  • grad_f: the (Riemannian) gradient $\operatorname{grad}f: \mathcal M → T\mathcal M$ of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place
  • p: a point on the manifold $\mathcal M$

Keyword arguments

If you provide the ManifoldGradientObjective directly, the evaluation= keyword is ignored. The decorations are still applied to the objective.

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source

State

Manopt.ConjugateGradientDescentStateType
ConjugateGradientState <: AbstractGradientSolverState

Specify options for a conjugate gradient descent algorithm that solves a DefaultManoptProblem.

Fields

  • p::P: a point on the manifold $\mathcal M$ storing the current iterate
  • X::T: a tangent vector at the point $p$ on the manifold $\mathcal M$
  • δ: the current descent direction, also a tangent vector
  • β: the current update coefficient, see coefficient.
  • coefficient: function to determine the new β
  • stepsize::Stepsize: a functor inheriting from Stepsize to determine a step size
  • stop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled
  • retraction_method::AbstractRetractionMethod: a retraction $\operatorname{retr}$ to use, see the section on retractions
  • vector_transport_method::AbstractVectorTransportMethodP: a vector transport $\mathcal T_{⋅←⋅}$ to use, see the section on vector transports

Constructor

ConjugateGradientState(M::AbstractManifold; kwargs...)

where the last five fields can be set by their names as keywords; X can be set to a tangent vector using the keyword initial_gradient, which defaults to zero_vector(M, p), and δ is initialized to a copy of this vector.

Keyword arguments

The following fields from above are keyword arguments

See also

conjugate_gradient_descent, DefaultManoptProblem, ArmijoLinesearch

source

Available coefficients

The update rules act as DirectionUpdateRule, which internally always first evaluate the gradient itself.

Manopt.ConjugateDescentCoefficientFunction
ConjugateDescentCoefficient()
 ConjugateDescentCoefficient(M::AbstractManifold)

Compute the (classical) conjugate gradient coefficient based on [Fle87] adapted to manifolds

Denote the last iterate and gradient by $p_k,X_k$, the current iterate and gradient by $p_{k+1}, X_{k+1}$, respectively, as well as the last update direction by $δ_k$.

Then the coefficient reads

\[β_k = \frac{\lVert X_{k+1} \rVert_{p_{k+1}}^2}{⟨-δ_k,X_k⟩_{p_k}}\]

Info

This function generates a ManifoldDefaultsFactory for ConjugateDescentCoefficientRule. For default values, that depend on the manifold, this factory postpones the construction until the manifold from for example a corresponding AbstractManoptSolverState is available.

source
Manopt.ConjugateGradientBealeRestartFunction
ConjugateGradientBealeRestart(direction_update::Union{DirectionUpdateRule,ManifoldDefaultsFactory}; kwargs...)
 ConjugateGradientBealeRestart(M::AbstractManifold, direction_update::Union{DirectionUpdateRule,ManifoldDefaultsFactory}; kwargs...)

Compute a conjugate gradient coefficient with a potential restart when two successive gradients are insufficiently orthogonal; see [HZ06, page 12] (page 46 in the journal's page numbers). This method is named after E. Beale, from his 1972 proceedings paper [Bea72]. It acts as a decorator to any existing DirectionUpdateRule direction_update.

Denote the last iterate and gradient by $p_k,X_k$, the current iterate and gradient by $p_{k+1}, X_{k+1}$, respectively, as well as the last update direction by $δ_k$.

Then a restart is performed, hence $β_k = 0$ is returned, if

\[ \frac{⟨X_{k+1}, \mathcal T_{p_{k+1}←p_k}X_k⟩}{\lVert X_k \rVert_{p_k}} > ε,\]

where $ε$ is the threshold, which is set by default to 0.2, see [Pow77]

Input

Keyword arguments

Info

This function generates a ManifoldDefaultsFactory for ConjugateGradientBealeRestartRule. For default values, that depend on the manifold, this factory postpones the construction until the manifold from for example a corresponding AbstractManoptSolverState is available.

source
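In the Euclidean case (vector transport = identity) the restart test can be sketched as below, with the criterion written as displayed above; the function name and arguments are illustrative, not Manopt API.

```python
import math

def beale_restart_beta(X_new, X_old, beta, eps=0.2):
    """Return 0 (restart) when the new gradient X_new and the (transported)
    previous gradient X_old are not sufficiently orthogonal; otherwise pass
    the wrapped coefficient `beta` through unchanged."""
    dot = sum(a * b for a, b in zip(X_new, X_old))
    norm = math.sqrt(sum(a * a for a in X_old))
    return 0.0 if dot / norm > eps else beta
```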
Manopt.DaiYuanCoefficientFunction
DaiYuanCoefficient(; kwargs...)
 DaiYuanCoefficient(M::AbstractManifold; kwargs...)

Computes an update coefficient for the conjugate_gradient_descent algorithm based on [DY99] adapted to Riemannian manifolds.

Denote the last iterate and gradient by $p_k,X_k$, the current iterate and gradient by $p_{k+1}, X_{k+1}$, respectively, as well as the last update direction by $δ_k$.

Let $ν_k = X_{k+1} - \mathcal T_{p_{k+1}←p_k}X_k$, where $\mathcal T_{⋅←⋅}$ denotes a vector transport.

Then the coefficient reads

\[β_k = -\frac{\lVert X_{k+1} \rVert_{p_{k+1}}^2}{⟨\mathcal T_{p_{k+1}←p_k}δ_k, ν_k⟩_{p_{k+1}}}\]

Keyword arguments

Info

This function generates a ManifoldDefaultsFactory for DaiYuanCoefficientRule. For default values, that depend on the manifold, this factory postpones the construction until the manifold from for example a corresponding AbstractManoptSolverState is available.

source
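For a quick numerical check, the conjugate descent rule and the Dai–Yuan rule above can be evaluated in the Euclidean case (vector transport = identity), using the document's notation $X_k$, $X_{k+1}$, $δ_k$; the concrete vectors are arbitrary illustrative choices.

```python
def inner(a, b):
    return sum(x * y for x, y in zip(a, b))

def beta_cd(X_new, X_old, delta):
    # conjugate descent: ||X_{k+1}||^2 / <-delta_k, X_k>
    return inner(X_new, X_new) / inner([-d for d in delta], X_old)

def beta_dy(X_new, X_old, delta):
    # Dai-Yuan: -||X_{k+1}||^2 / <delta_k, nu_k>  with  nu_k = X_{k+1} - X_k
    nu = [a - b for a, b in zip(X_new, X_old)]
    return -inner(X_new, X_new) / inner(delta, nu)

# illustrative data: delta is the steepest descent direction -X_old
X_old, X_new, delta = [1.0, 0.0], [0.5, 0.5], [-1.0, 0.0]
```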
Manopt.FletcherReevesCoefficientFunction
FletcherReevesCoefficient()
 FletcherReevesCoefficient(M::AbstractManifold)

Computes an update coefficient for the conjugate_gradient_descent algorithm based on [FR64] adapted to manifolds

Denote the last iterate and gradient by $p_k,X_k$, the current iterate and gradient by $p_{k+1}, X_{k+1}$, respectively, as well as the last update direction by $δ_k$.

Then the coefficient reads

\[β_k = -\frac{\lVert X_{k+1} \rVert_{p_{k+1}}^2}{\lVert X_k \rVert_{p_{k}}^2}.\]

Info

This function generates a ManifoldDefaultsFactory for FletcherReevesCoefficientRule. For default values that depend on the manifold, this factory postpones the construction until the manifold (from, for example, a corresponding AbstractManoptSolverState) is available.

source
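In the Euclidean case this quotient of squared gradient norms can be sketched in a few lines of plain Python; the helper name is illustrative only, and the sign follows the formula exactly as displayed above:

```python
def fletcher_reeves_beta(X_prev, X_next):
    num = sum(x * x for x in X_next)  # ||X_{k+1}||^2
    den = sum(x * x for x in X_prev)  # ||X_k||^2
    return -num / den                 # sign as in the displayed formula

beta = fletcher_reeves_beta([2.0, 0.0], [1.0, 1.0])  # -2/4 = -0.5
```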
Manopt.HagerZhangCoefficientFunction
HagerZhangCoefficient(; kwargs...)
 HagerZhangCoefficient(M::AbstractManifold; kwargs...)

Computes an update coefficient for the conjugate_gradient_descent algorithm based on [HZ05] adapted to manifolds

Denote the last iterate and gradient by $p_k,X_k$, the current iterate and gradient by $p_{k+1}, X_{k+1}$, respectively, as well as the last update direction by $δ_k$.

Let $ν_k = X_{k+1} - \mathcal T_{p_{k+1}←p_k}X_k$, where $\mathcal T_{⋅←⋅}$ denotes a vector transport.

Then the coefficient reads

\[β_k = \Bigl⟨ν_k - \frac{2\lVert ν_k \rVert_{p_{k+1}}^2}{⟨\mathcal T_{p_{k+1}←p_k}δ_k, ν_k⟩_{p_{k+1}}} \mathcal T_{p_{k+1}←p_k}δ_k, \frac{X_{k+1}}{⟨\mathcal T_{p_{k+1}←p_k}δ_k, ν_k⟩_{p_{k+1}}} \Bigr⟩_{p_{k+1}}.\]

This method includes a numerical stability safeguard proposed by those authors.

Keyword arguments

Info

This function generates a ManifoldDefaultsFactory for HagerZhangCoefficientRule. For default values that depend on the manifold, this factory postpones the construction until the manifold (from, for example, a corresponding AbstractManoptSolverState) is available.

source
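A Euclidean sketch of this inner product (identity vector transport, illustrative helper name, and without the numerical stability safeguard mentioned above):

```python
def hager_zhang_beta(X_prev, X_next, delta_prev):
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    nu = [a - b for a, b in zip(X_next, X_prev)]
    dn = dot(delta_prev, nu)                       # <delta_k, nu_k>
    # left argument: nu_k - (2 ||nu_k||^2 / <delta_k, nu_k>) delta_k
    left = [n - 2.0 * dot(nu, nu) / dn * d for n, d in zip(nu, delta_prev)]
    right = [x / dn for x in X_next]               # X_{k+1} / <delta_k, nu_k>
    return dot(left, right)

beta = hager_zhang_beta([1.0, 0.0], [0.0, 1.0], [-1.0, 0.0])
```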
Manopt.HestenesStiefelCoefficientFunction
HestenesStiefelCoefficient(; kwargs...)
 HestenesStiefelCoefficient(M::AbstractManifold; kwargs...)

Computes an update coefficient for the conjugate_gradient_descent algorithm based on [HS52] adapted to manifolds

Denote the last iterate and gradient by $p_k,X_k$, the current iterate and gradient by $p_{k+1}, X_{k+1}$, respectively, as well as the last update direction by $δ_k$.

Let $ν_k = X_{k+1} - \mathcal T_{p_{k+1}←p_k}X_k$, where $\mathcal T_{⋅←⋅}$ denotes a vector transport.

Then the coefficient reads

\[β_k = \frac{⟨ X_{k+1}, ν_k ⟩_{p_{k+1}}}{⟨ \mathcal T_{p_{k+1}←p_k}δ_k, ν_k⟩_{p_{k+1}}}.\]

Keyword arguments

Info

This function generates a ManifoldDefaultsFactory for HestenesStiefelCoefficientRule. For default values that depend on the manifold, this factory postpones the construction until the manifold (from, for example, a corresponding AbstractManoptSolverState) is available.

source
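A Euclidean transcription of this quotient (identity vector transport; the helper name is illustrative and not part of Manopt.jl):

```python
def hestenes_stiefel_beta(X_prev, X_next, delta_prev):
    nu = [a - b for a, b in zip(X_next, X_prev)]
    num = sum(x * n for x, n in zip(X_next, nu))      # <X_{k+1}, nu_k>
    den = sum(d * n for d, n in zip(delta_prev, nu))  # <delta_k, nu_k>
    return num / den

beta = hestenes_stiefel_beta([1.0, 0.0], [0.0, 1.0], [-1.0, 0.0])
```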
Manopt.LiuStoreyCoefficientFunction
LiuStoreyCoefficient(; kwargs...)
 LiuStoreyCoefficient(M::AbstractManifold; kwargs...)

Computes an update coefficient for the conjugate_gradient_descent algorithm based on [LS91] adapted to manifolds

Denote the last iterate and gradient by $p_k,X_k$, the current iterate and gradient by $p_{k+1}, X_{k+1}$, respectively, as well as the last update direction by $δ_k$.

Let $ν_k = X_{k+1} - \mathcal T_{p_{k+1}←p_k}X_k$, where $\mathcal T_{⋅←⋅}$ denotes a vector transport.

Then the coefficient reads

\[β_k = - \frac{⟨ X_{k+1},ν_k ⟩_{p_{k+1}}}{⟨ δ_k,X_k ⟩_{p_k}}.\]

Keyword arguments

Info

This function generates a ManifoldDefaultsFactory for LiuStoreyCoefficientRule. For default values that depend on the manifold, this factory postpones the construction until the manifold (from, for example, a corresponding AbstractManoptSolverState) is available.

source
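A Euclidean transcription of this formula as displayed (identity vector transport, illustrative helper name):

```python
def liu_storey_beta(X_prev, X_next, delta_prev):
    nu = [a - b for a, b in zip(X_next, X_prev)]
    num = sum(x * n for x, n in zip(X_next, nu))          # <X_{k+1}, nu_k>
    den = sum(d * x for d, x in zip(delta_prev, X_prev))  # <delta_k, X_k>
    return -num / den

beta = liu_storey_beta([1.0, 0.0], [0.0, 1.0], [-1.0, 0.0])
```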
Manopt.PolakRibiereCoefficientFunction
PolakRibiereCoefficient(; kwargs...)
 PolakRibiereCoefficient(M::AbstractManifold; kwargs...)

Computes an update coefficient for the conjugate_gradient_descent algorithm based on [PR69] adapted to Riemannian manifolds.

Denote the last iterate and gradient by $p_k,X_k$, the current iterate and gradient by $p_{k+1}, X_{k+1}$, respectively, as well as the last update direction by $δ_k$.

Let $ν_k = X_{k+1} - \mathcal T_{p_{k+1}←p_k}X_k$, where $\mathcal T_{⋅←⋅}$ denotes a vector transport.

Then the coefficient reads

\[β_k = \frac{⟨ X_{k+1}, ν_k ⟩_{p_{k+1}}}{\lVert X_k \rVert_{{p_k}}^2}.\]

Keyword arguments

Info

This function generates a ManifoldDefaultsFactory for PolakRibiereCoefficientRule. For default values that depend on the manifold, this factory postpones the construction until the manifold (from, for example, a corresponding AbstractManoptSolverState) is available.

source
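A Euclidean transcription of this quotient (identity vector transport; the helper name is illustrative only):

```python
def polak_ribiere_beta(X_prev, X_next):
    nu = [a - b for a, b in zip(X_next, X_prev)]
    num = sum(x * n for x, n in zip(X_next, nu))  # <X_{k+1}, nu_k>
    return num / sum(x * x for x in X_prev)       # divided by ||X_k||^2

beta = polak_ribiere_beta([1.0, 0.0], [0.0, 1.0])
```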
Manopt.SteepestDescentCoefficientFunction
SteepestDescentCoefficient()
 SteepestDescentCoefficient(M::AbstractManifold)

Computes an update coefficient for the conjugate_gradient_descent algorithm so that it falls back to a gradient_descent method, that is

\[β_k = 0\]

Info

This function generates a ManifoldDefaultsFactory for SteepestDescentCoefficient. For default values that depend on the manifold, this factory postpones the construction until the manifold (from, for example, a corresponding AbstractManoptSolverState) is available.

source

Internal rules for coefficients

Manopt.ConjugateGradientBealeRestartRuleType
ConjugateGradientBealeRestartRule <: DirectionUpdateRule

A functor (problem, state, k) -> β_k to compute the conjugate gradient update coefficient based on a restart idea of [Bea72], following [HZ06, page 12] adapted to manifolds.

Fields

  • direction_update::DirectionUpdateRule: the actual rule, that is restarted
  • threshold::Real: a threshold for the restart check.
  • vector_transport_method::AbstractVectorTransportMethodP: a vector transport $\mathcal T_{⋅←⋅}$ to use, see the section on vector transports

Constructor

ConjugateGradientBealeRestartRule(
     direction_update::Union{DirectionUpdateRule,ManifoldDefaultsFactory};
     kwargs...
 )
ConjugateGradientBealeRestartRule(
    M::AbstractManifold=DefaultManifold(),
    direction_update::Union{DirectionUpdateRule,ManifoldDefaultsFactory};
    kwargs...
)

Construct the Beale restart coefficient update rule adapted to manifolds.

Input

Keyword arguments

See also

ConjugateGradientBealeRestart, conjugate_gradient_descent

source
Manopt.DaiYuanCoefficientRuleType
DaiYuanCoefficientRule <: DirectionUpdateRule

A functor (problem, state, k) -> β_k to compute the conjugate gradient update coefficient based on [DY99] adapted to manifolds

Fields

Constructor

DaiYuanCoefficientRule(M::AbstractManifold; kwargs...)

Construct the Dai-Yuan coefficient update rule.

Keyword arguments

See also

DaiYuanCoefficient, conjugate_gradient_descent

source
Manopt.HagerZhangCoefficientRuleType
HagerZhangCoefficientRule <: DirectionUpdateRule

A functor (problem, state, k) -> β_k to compute the conjugate gradient update coefficient based on [HZ05] adapted to manifolds

Fields

Constructor

HagerZhangCoefficientRule(M::AbstractManifold; kwargs...)

Construct the Hager-Zhang coefficient update rule based on [HZ05] adapted to manifolds.

Keyword arguments

See also

HagerZhangCoefficient, conjugate_gradient_descent

source
Manopt.HestenesStiefelCoefficientRuleType
HestenesStiefelCoefficientRule <: DirectionUpdateRule

A functor (problem, state, k) -> β_k to compute the conjugate gradient update coefficient based on [HS52] adapted to manifolds

Fields

Constructor

HestenesStiefelCoefficientRule(M::AbstractManifold; kwargs...)

Construct the Hestenes-Stiefel coefficient update rule based on [HS52] adapted to manifolds.

Keyword arguments

See also

HestenesStiefelCoefficient, conjugate_gradient_descent

source
Manopt.LiuStoreyCoefficientRuleType
LiuStoreyCoefficientRule <: DirectionUpdateRule

A functor (problem, state, k) -> β_k to compute the conjugate gradient update coefficient based on [LS91] adapted to manifolds

Fields

Constructor

LiuStoreyCoefficientRule(M::AbstractManifold; kwargs...)

Construct the Liu-Storey coefficient update rule based on [LS91] adapted to manifolds.

Keyword arguments

See also

LiuStoreyCoefficient, conjugate_gradient_descent

source
Manopt.PolakRibiereCoefficientRuleType
PolakRibiereCoefficientRule <: DirectionUpdateRule

A functor (problem, state, k) -> β_k to compute the conjugate gradient update coefficient based on [PR69] adapted to manifolds

Fields

Constructor

PolakRibiereCoefficientRule(M::AbstractManifold; kwargs...)

Construct the Polak-Ribière coefficient update rule.

Keyword arguments

See also

PolakRibiereCoefficient, conjugate_gradient_descent

source

Technical details

The conjugate_gradient_descent solver requires the following functions of a manifold to be available

  • A retract!(M, q, p, X); it is recommended to set the default_retraction_method to a favourite retraction. If this default is set, a retraction_method= does not have to be specified.
  • A vector_transport_to!(M, Y, p, X, q); it is recommended to set the default_vector_transport_method to a favourite vector transport. If this default is set, a vector_transport_method= or vector_transport_method_dual= (for $\mathcal N$) does not have to be specified.
  • By default this solver uses ArmijoLinesearch, which requires max_stepsize(M) to be set and an implementation of inner(M, p, X).
  • By default the stopping criterion uses the norm as well, to stop when the norm of the gradient is small, but if you implemented inner, the norm is provided already.
  • By default the tangent vector storing the gradient is initialized calling zero_vector(M,p).

Literature

[Bea72]
E. M. Beale. A derivation of conjugate gradients. In: Numerical methods for nonlinear optimization, edited by F. A. Lootsma (Academic Press, London, 1972); pp. 39–43.
[DY99]
Y. H. Dai and Y. Yuan. A Nonlinear Conjugate Gradient Method with a Strong Global Convergence Property. SIAM Journal on Optimization 10, 177–182 (1999).
[Fle87]
R. Fletcher. Practical Methods of Optimization. 2 Edition, A Wiley-Interscience Publication (John Wiley & Sons Ltd., 1987).
[FR64]
R. Fletcher and C. M. Reeves. Function minimization by conjugate gradients. The Computer Journal 7, 149–154 (1964).
[HZ06]
W. W. Hager and H. Zhang. A survey of nonlinear conjugate gradient methods. Pacific Journal of Optimization 2, 35–58 (2006).
[HZ05]
W. W. Hager and H. Zhang. A New Conjugate Gradient Method with Guaranteed Descent and an Efficient Line Search. SIAM Journal on Optimization 16, 170–192 (2005).
[HS52]
M. Hestenes and E. Stiefel. Methods of conjugate gradients for solving linear systems. Journal of Research of the National Bureau of Standards 49, 409 (1952).
[LS91]
Y. Liu and C. Storey. Efficient generalized conjugate gradient algorithms, part 1: Theory. Journal of Optimization Theory and Applications 69, 129–137 (1991).
[PR69]
E. Polak and G. Ribière. Note sur la convergence de méthodes de directions conjuguées. Revue française d’informatique et de recherche opérationnelle 3, 35–43 (1969).
[Pow77]
M. J. Powell. Restart procedures for the conjugate gradient method. Mathematical Programming 12, 241–254 (1977).
Conjugate Residual · Manopt.jl

Conjugate residual solver in a Tangent space

Manopt.conjugate_residualFunction
conjugate_residual(TpM::TangentSpace, A, b, X=zero_vector(TpM))
 conjugate_residual(TpM::TangentSpace, slso::SymmetricLinearSystemObjective, X=zero_vector(TpM))
 conjugate_residual!(TpM::TangentSpace, A, b, X)
 conjugate_residual!(TpM::TangentSpace, slso::SymmetricLinearSystemObjective, X)

Compute the solution of $\mathcal A(p)[X] + b(p) = 0_p$, where

  • $\mathcal A$ is a linear, symmetric operator on $T_{p}\mathcal M$
  • $b$ is a vector field on the manifold
  • $X ∈ T_{p}\mathcal M$ is a tangent vector
  • $0_p$ is the zero vector in $T_{p}\mathcal M$.

This implementation follows Algorithm 3 in [LY24] and is initialised with $X^{(0)}$ as the zero vector and

  • the initial residual $r^{(0)} = -b(p) - \mathcal A(p)[X^{(0)}]$
  • the initial conjugate direction $d^{(0)} = r^{(0)}$
  • initialize $Y^{(0)} = \mathcal A(p)[X^{(0)}]$

It then performs the following steps at iteration $k=0,…$ until the stopping_criterion is fulfilled.

  1. compute a step size $α_k = \displaystyle\frac{⟨ r^{(k)}, \mathcal A(p)[r^{(k)}] ⟩_p}{⟨ \mathcal A(p)[d^{(k)}], \mathcal A(p)[d^{(k)}] ⟩_p}$
  2. do a step $X^{(k+1)} = X^{(k)} + α_kd^{(k)}$
  3. update the residual $r^{(k+1)} = r^{(k)} + α_k Y^{(k)}$
  4. compute $Z = \mathcal A(p)[r^{(k+1)}]$
  5. Update the conjugate coefficient $β_k = \displaystyle\frac{⟨ r^{(k+1)}, \mathcal A(p)[r^{(k+1)}] ⟩_p}{⟨ r^{(k)}, \mathcal A(p)[r^{(k)}] ⟩_p}$
  6. Update the conjugate direction $d^{(k+1)} = r^{(k+1)} + β_kd^{(k)}$
  7. Update $Y^{(k+1)} = -Z + β_k Y^{(k)}$

Note that the right hand side of Step 7 is the same as evaluating $\mathcal A[d^{(k+1)}]$, but avoids the actual evaluation
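For the Euclidean case (a small symmetric matrix $\mathcal A$ and a constant vector $b$) the iteration above can be sketched in plain Python. All names here are illustrative only, and $\mathcal A[d]$ is recomputed directly instead of using the recursion of Step 7:

```python
def conjugate_residual_sketch(A, b, tol=1e-12, max_iter=50):
    """Illustrative Euclidean version: solve A x + b = 0 for symmetric A."""
    n = len(b)
    matvec = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(p * q for p, q in zip(u, v))
    x = [0.0] * n
    r = [-bi for bi in b]          # r0 = -b - A x0, with x0 = 0
    d = list(r)                    # d0 = r0
    for _ in range(max_iter):
        Ad = matvec(d)
        rAr = dot(r, matvec(r))
        alpha = rAr / dot(Ad, Ad)                          # step 1
        x = [xi + alpha * di for xi, di in zip(x, d)]      # step 2
        r = [ri - alpha * v for ri, v in zip(r, Ad)]       # step 3 (r -= alpha A d)
        if dot(r, r) < tol:
            break
        beta = dot(r, matvec(r)) / rAr                     # step 5
        d = [ri + beta * di for ri, di in zip(r, d)]       # step 6
    return x

x = conjugate_residual_sketch([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
# x solves A x = -b, so A x + b is (numerically) zero
```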

Input

  • TpM the TangentSpace as the domain
  • A a symmetric linear operator on the tangent space (M, p, X) -> Y
  • b a vector field on the tangent space (M, p) -> X
  • X the initial tangent vector

Keyword arguments

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source
Manopt.conjugate_residual!Function
conjugate_residual(TpM::TangentSpace, A, b, X=zero_vector(TpM))
 conjugate_residual(TpM::TangentSpace, slso::SymmetricLinearSystemObjective, X=zero_vector(TpM))
 conjugate_residual!(TpM::TangentSpace, A, b, X)
 conjugate_residual!(TpM::TangentSpace, slso::SymmetricLinearSystemObjective, X)

Compute the solution of $\mathcal A(p)[X] + b(p) = 0_p$, where

  • $\mathcal A$ is a linear, symmetric operator on $T_{p}\mathcal M$
  • $b$ is a vector field on the manifold
  • $X ∈ T_{p}\mathcal M$ is a tangent vector
  • $0_p$ is the zero vector in $T_{p}\mathcal M$.

This implementation follows Algorithm 3 in [LY24] and is initialised with $X^{(0)}$ as the zero vector and

  • the initial residual $r^{(0)} = -b(p) - \mathcal A(p)[X^{(0)}]$
  • the initial conjugate direction $d^{(0)} = r^{(0)}$
  • initialize $Y^{(0)} = \mathcal A(p)[X^{(0)}]$

It then performs the following steps at iteration $k=0,…$ until the stopping_criterion is fulfilled.

  1. compute a step size $α_k = \displaystyle\frac{⟨ r^{(k)}, \mathcal A(p)[r^{(k)}] ⟩_p}{⟨ \mathcal A(p)[d^{(k)}], \mathcal A(p)[d^{(k)}] ⟩_p}$
  2. do a step $X^{(k+1)} = X^{(k)} + α_kd^{(k)}$
  3. update the residual $r^{(k+1)} = r^{(k)} + α_k Y^{(k)}$
  4. compute $Z = \mathcal A(p)[r^{(k+1)}]$
  5. Update the conjugate coefficient $β_k = \displaystyle\frac{⟨ r^{(k+1)}, \mathcal A(p)[r^{(k+1)}] ⟩_p}{⟨ r^{(k)}, \mathcal A(p)[r^{(k)}] ⟩_p}$
  6. Update the conjugate direction $d^{(k+1)} = r^{(k+1)} + β_kd^{(k)}$
  7. Update $Y^{(k+1)} = -Z + β_k Y^{(k)}$

Note that the right hand side of Step 7 is the same as evaluating $\mathcal A[d^{(k+1)}]$, but avoids the actual evaluation

Input

  • TpM the TangentSpace as the domain
  • A a symmetric linear operator on the tangent space (M, p, X) -> Y
  • b a vector field on the tangent space (M, p) -> X
  • X the initial tangent vector

Keyword arguments

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source

State

Manopt.ConjugateResidualStateType
ConjugateResidualState{T,R,TStop<:StoppingCriterion} <: AbstractManoptSolverState

A state for the conjugate_residual solver.

Fields

  • X::T: the iterate
  • r::T: the residual $r = -b(p) - \mathcal A(p)[X]$
  • d::T: the conjugate direction
  • Ar::T, Ad::T: storages for $\mathcal A(p)[r]$ and $\mathcal A(p)[d]$, respectively
  • rAr::R: internal field for storing $⟨ r, \mathcal A(p)[r] ⟩$
  • α::R: a step length
  • β::R: the conjugate coefficient
  • stop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled

Constructor

ConjugateResidualState(TpM::TangentSpace,slso::SymmetricLinearSystemObjective; kwargs...)

Initialise the state with default values.

Keyword arguments

See also

conjugate_residual

source

Objective

Manopt.SymmetricLinearSystemObjectiveType
SymmetricLinearSystemObjective{E<:AbstractEvaluationType,TA,T} <: AbstractManifoldObjective{E}

Model the objective

\[f(X) = \frac{1}{2} \lVert \mathcal A[X] + b \rVert_{p}^2,\qquad X ∈ T_{p}\mathcal M,\]

defined on the tangent space $T_{p}\mathcal M$ at $p$ on the manifold $\mathcal M$.

In other words this is an objective to solve $\mathcal A(p)[X] = -b(p)$ for some linear symmetric operator and a vector field. Note the minus on the right hand side, which makes this objective especially tailored for (iteratively) solving Newton-like equations.

Fields

  • A!!: a symmetric, linear operator on the tangent space
  • b!!: a gradient function

where A!! can work as an allocating operator (M, p, X) -> Y or an in-place one (M, Y, p, X) -> Y, and similarly b!! can either be a function (M, p) -> X or (M, X, p) -> X. The first variants allocate for the result, the second variants work in-place.

Constructor

SymmetricLinearSystemObjective(A, b; evaluation=AllocatingEvaluation())

Generate the objective specifying whether the two parts work allocating or in-place.

source
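The objective $f(X) = \frac{1}{2}\lVert \mathcal A[X] + b \rVert^2$ can be sketched in the Euclidean case with plain Python; the helper name is illustrative only. It vanishes exactly where $\mathcal A X = -b$:

```python
def symmetric_system_objective(A, b, X):
    # f(X) = 0.5 * ||A X + b||^2, a Euclidean stand-in for the tangent space
    n = len(b)
    AX = [sum(A[i][j] * X[j] for j in range(n)) for i in range(n)]
    return 0.5 * sum((v + w) ** 2 for v, w in zip(AX, b))

# For A = diag(2, 1) and b = (-2, -1), the point X = (1, 1) solves A X = -b:
f0 = symmetric_system_objective([[2.0, 0.0], [0.0, 1.0]], [-2.0, -1.0], [1.0, 1.0])
```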

Additional stopping criterion

Manopt.StopWhenRelativeResidualLessType
StopWhenRelativeResidualLess <: StoppingCriterion

Stop when the relative residual in the conjugate_residual is below a certain threshold, that is

\[\displaystyle\frac{\lVert r^{(k)} \rVert}{c} ≤ ε,\]

where $c = \lVert b \rVert$ is the norm of the initial vector field $b$ in $\mathcal A(p)[X] + b(p) = 0_p$ from the conjugate_residual solver.

Fields

  • at_iteration::Int: an integer indicating at which the stopping criterion last indicted to stop, which might also be before the solver started (0). Any negative value indicates that this was not yet the case;
  • c: the initial norm
  • ε: the threshold
  • norm_rk: the last computed norm of the residual

Constructor

StopWhenRelativeResidualLess(c, ε; norm_r = 2*c*ε)

Initialise the stopping criterion.

Note

The initial norm of the vector field, $c = \lVert b \rVert$, that is stored internally is updated on initialisation, that is, if this stopping criterion is called with k<=0.

source

Internal functions

Manopt.get_bFunction
get_b(TpM::TangentSpace, slso::SymmetricLinearSystemObjective)

Evaluate the stored value for computing the right hand side $b$ in $\mathcal A(p)[X] = -b(p)$.

source

Literature

[LY24]
Z. Lai and A. Yoshise. Riemannian Interior Point Methods for Constrained Optimization on Manifolds. Journal of Optimization Theory and Applications 201, 433–469 (2024), arXiv:2203.09762.
+conjugate_residual!(TpM::TangentSpace, slso::SymmetricLinearSystemObjective, X)

Compute the solution of $\mathcal A(p)[X] + b(p) = 0_p$, where

  • $\mathcal A$ is a linear, symmetric operator on $T_{p}\mathcal M$
  • $b$ is a vector field on the manifold
  • $X ∈ T_{p}\mathcal M$ is a tangent vector
  • $0_p$ is the zero vector $T_{p}\mathcal M$.

This implementation follows Algorithm 3 in [LY24] and is initalised with $X^{(0)}$ as the zero vector and

  • the initial residual $r^{(0)} = -b(p) - \mathcal A(p)[X^{(0)}]$
  • the initial conjugate direction $d^{(0)} = r^{(0)}$
  • initialize $Y^{(0)} = \mathcal A(p)[X^{(0)}]$

performed the following steps at iteration $k=0,…$ until the stopping_criterion is fulfilled.

  1. compute a step size $α_k = \displaystyle\frac{⟨ r^{(k)}, \mathcal A(p)[r^{(k)}] ⟩_p}{⟨ \mathcal A(p)[d^{(k)}], \mathcal A(p)[d^{(k)}] ⟩_p}$
  2. do a step $X^{(k+1)} = X^{(k)} + α_kd^{(k)}$
  3. update the residual $r^{(k+1)} = r^{(k)} + α_k Y^{(k)}$
  4. compute $Z = \mathcal A(p)[r^{(k+1)}]$
  5. Update the conjugate coefficient $β_k = \displaystyle\frac{⟨ r^{(k+1)}, \mathcal A(p)[r^{(k+1)}] ⟩_p}{⟨ r^{(k)}, \mathcal A(p)[r^{(k)}] ⟩_p}$
  6. Update the conjugate direction $d^{(k+1)} = r^{(k+1)} + β_kd^{(k)}$
  7. Update $Y^{(k+1)} = -Z + β_k Y^{(k)}$

Note that the right hand side of Step 7 is the same as evaluating $\mathcal A[d^{(k+1)}]$, but avoids the actual evaluation

Input

  • TpM the TangentSpace as the domain
  • A a symmetric linear operator on the tangent space (M, p, X) -> Y
  • b a vector field on the tangent space (M, p) -> X
  • X the initial tangent vector

Keyword arguments

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source

State

Manopt.ConjugateResidualStateType
ConjugateResidualState{T,R,TStop<:StoppingCriterion} <: AbstractManoptSolverState

A state for the conjugate_residual solver.

Fields

  • X::T: the iterate
  • r::T: the residual $r = -b(p) - \mathcal A(p)[X]$
  • d::T: the conjugate direction
  • Ar::T, Ad::T: storages for $\mathcal A(p)[r]$ and $\mathcal A(p)[d]$
  • rAr::R: internal field for storing $⟨ r, \mathcal A(p)[r] ⟩$
  • α::R: a step length
  • β::R: the conjugate coefficient
  • stop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled

Constructor

ConjugateResidualState(TpM::TangentSpace, slso::SymmetricLinearSystemObjective; kwargs...)

Initialise the state with default values.

Keyword arguments

See also

conjugate_residual

source

Objective

Manopt.SymmetricLinearSystemObjectiveType
SymmetricLinearSystemObjective{E<:AbstractEvaluationType,TA,T} <: AbstractManifoldObjective{E}

Model the objective

\[f(X) = \frac{1}{2} \lVert \mathcal A[X] + b \rVert_{p}^2,\qquad X ∈ T_{p}\mathcal M,\]

defined on the tangent space $T_{p}\mathcal M$ at $p$ on the manifold $\mathcal M$.

In other words, this is an objective to solve $\mathcal A(p)[X] = -b(p)$ for some linear symmetric operator $\mathcal A$ and a vector field $b$. Note the minus on the right hand side, which makes this objective especially tailored for (iteratively) solving Newton-like equations.

Fields

  • A!!: a symmetric, linear operator on the tangent space
  • b!!: a gradient function

where A!! can work as an allocating operator (M, p, X) -> Y or an in-place one (M, Y, p, X) -> Y, and similarly b!! can either be a function (M, p) -> X or (M, X, p) -> X. The first variants allocate for the result, the second variants work in-place.
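The two conventions can be sketched with Euclidean stand-ins; the matrix Amat and the anchor q below are illustrative, not part of the Manopt.jl API, and the arguments M and p are accepted but unused:

```julia
using LinearAlgebra

Amat = [2.0 1.0; 1.0 2.0]   # a fixed symmetric operator
q = [1.0, -1.0]             # an anchor for the vector field

A_alloc(M, p, X) = Amat * X                 # (M, p, X) -> Y, allocates Y
A_inplace!(M, Y, p, X) = mul!(Y, Amat, X)   # (M, Y, p, X) -> Y, writes into Y
b_alloc(M, p) = q .- p                      # (M, p) -> X, allocates X
b_inplace!(M, X, p) = (X .= q .- p; X)      # (M, X, p) -> X, writes into X
```

Both variants of each function compute the same value; the in-place form merely avoids the allocation.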

Constructor

SymmetricLinearSystemObjective(A, b; evaluation=AllocatingEvaluation())

Generate the objective specifying whether the two parts work allocating or in-place.

source

Additional stopping criterion

Manopt.StopWhenRelativeResidualLessType
StopWhenRelativeResidualLess <: StoppingCriterion

Stop when the relative residual in the conjugate_residual solver is below a certain threshold, that is when

\[\displaystyle\frac{\lVert r^{(k)} \rVert}{c} ≤ ε,\]

where $c = \lVert b(p) \rVert$ is the norm of the vector field $b$ from the system $\mathcal A(p)[X] + b(p) = 0_p$ solved by the conjugate_residual

Fields

  • at_iteration::Int: an integer indicating at which iteration the stopping criterion last indicated to stop, which might also be before the solver started (0). Any negative value indicates that this was not yet the case;
  • c: the initial norm
  • ε: the threshold
  • norm_rk: the last computed norm of the residual

Constructor

StopWhenRelativeResidualLess(c, ε; norm_r = 2*c*ε)

Initialise the stopping criterion.

Note

The initial norm of the vector field $c = \lVert b \rVert$ that is stored internally is updated on initialisation, that is, whenever this stopping criterion is called with k<=0.
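A minimal functor sketch of this relative-residual test (the struct name is hypothetical, not the Manopt.jl type): store $c = \lVert b \rVert$ and $ε$, and report whether $\lVert r^{(k)} \rVert / c$ has dropped below $ε$.

```julia
# Functor: callable with the current residual norm, true means "stop".
mutable struct RelativeResidualCheck
    c::Float64   # initial norm ‖b‖
    ε::Float64   # threshold
end
(sc::RelativeResidualCheck)(norm_rk) = norm_rk / sc.c ≤ sc.ε
```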

source

Internal functions

Manopt.get_bFunction
get_b(TpM::TangentSpace, slso::SymmetricLinearSystemObjective)

evaluate the stored value for computing the right hand side $b$ in $\mathcal A(p)[X] = -b(p)$.

source

Literature

[LY24]
Z. Lai and A. Yoshise. Riemannian Interior Point Methods for Constrained Optimization on Manifolds. Journal of Optimization Theory and Applications 201, 433–469 (2024), arXiv:2203.09762.

Convex bundle method

Manopt.convex_bundle_methodFunction
convex_bundle_method(M, f, ∂f, p)
convex_bundle_method!(M, f, ∂f, p)

perform a convex bundle method $p^{(k+1)} = \operatorname{retr}_{p^{(k)}}(-g_k)$ where

\[g_k = \sum_{j\in J_k} λ_j^k \mathrm{P}_{p_k←q_j}X_{q_j},\]

and $p_k$ is the last serious iterate, $X_{q_j} ∈ ∂f(q_j)$, and the $λ_j^k$ are solutions to the quadratic subproblem provided by the convex_bundle_method_subsolver.

Though the subdifferential might be set valued, the argument ∂f should always return a single element from the subdifferential, though not necessarily deterministically.

For more details, see [BHJ24].
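Assembling the descent direction $g_k$ above can be sketched in plain Julia for the Euclidean case, where the parallel transports $\mathrm{P}_{p_k←q_j}$ are trivial (this is an illustrative stand-in; the helper name is hypothetical):

```julia
# g_k is the λ-weighted convex combination of the stored subgradients.
function bundle_direction(λ, subgradients)
    g = zero(first(subgradients))
    for (λj, Xj) in zip(λ, subgradients)
        g .+= λj .* Xj
    end
    return g
end
```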

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • f: a cost function $f: \mathcal M→ ℝ$ implemented as (M, p) -> v
  • ∂f: the subdifferential $∂f: \mathcal M → T\mathcal M$ of f, returning one subgradient, implemented as (M, p) -> X or in-place as (M, X, p) -> X (see evaluation)
  • p: a point on the manifold $\mathcal M$

Keyword arguments

  • atol_λ=eps(): tolerance parameter for the convex coefficients in $λ$.
  • atol_errors=eps(): tolerance parameter for the linearization errors.
  • bundle_cap=25: the maximal number of elements the bundle is allowed to remember.
  • m=1e-3: the parameter to test the decrease of the cost: $f(q_{k+1}) ≤ f(p_k) + m ξ$.
  • diameter=50.0: estimate for the diameter of the level set of the objective function at the starting point.
  • domain=(M, p) -> isfinite(f(M, p)): a function that evaluates to true when the current candidate is in the domain of the objective f, and false otherwise.
  • evaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.
  • k_max=0: upper bound on the sectional curvature of the manifold.
  • stepsize=default_stepsize(M, ConvexBundleMethodState): a functor inheriting from Stepsize to determine a step size
  • inverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction $\operatorname{retr}^{-1}$ to use, see the section on retractions and their inverses
  • stopping_criterion=StopWhenLagrangeMultiplierLess(1e-8)|StopAfterIteration(5000): a functor indicating that the stopping criterion is fulfilled
  • vector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport $\mathcal T_{⋅←⋅}$ to use, see the section on vector transports
  • sub_state=AllocatingEvaluation(): a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.
  • sub_problem=convex_bundle_method_subsolver: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.
  • X=zero_vector(M, p): a tangent vector at the point $p$ on the manifold $\mathcal M$

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source
Manopt.convex_bundle_method!Function
convex_bundle_method(M, f, ∂f, p)
convex_bundle_method!(M, f, ∂f, p)

perform a convex bundle method $p^{(k+1)} = \operatorname{retr}_{p^{(k)}}(-g_k)$ where

\[g_k = \sum_{j\in J_k} λ_j^k \mathrm{P}_{p_k←q_j}X_{q_j},\]

and $p_k$ is the last serious iterate, $X_{q_j} ∈ ∂f(q_j)$, and the $λ_j^k$ are solutions to the quadratic subproblem provided by the convex_bundle_method_subsolver.

Though the subdifferential might be set valued, the argument ∂f should always return a single element from the subdifferential, though not necessarily deterministically.

For more details, see [BHJ24].

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • f: a cost function $f: \mathcal M→ ℝ$ implemented as (M, p) -> v
  • ∂f: the subdifferential $∂f: \mathcal M → T\mathcal M$ of f, returning one subgradient, implemented as (M, p) -> X or in-place as (M, X, p) -> X (see evaluation)
  • p: a point on the manifold $\mathcal M$

Keyword arguments

  • atol_λ=eps(): tolerance parameter for the convex coefficients in $λ$.
  • atol_errors=eps(): tolerance parameter for the linearization errors.
  • bundle_cap=25: the maximal number of elements the bundle is allowed to remember.
  • m=1e-3: the parameter to test the decrease of the cost: $f(q_{k+1}) ≤ f(p_k) + m ξ$.
  • diameter=50.0: estimate for the diameter of the level set of the objective function at the starting point.
  • domain=(M, p) -> isfinite(f(M, p)): a function that evaluates to true when the current candidate is in the domain of the objective f, and false otherwise.
  • evaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.
  • k_max=0: upper bound on the sectional curvature of the manifold.
  • stepsize=default_stepsize(M, ConvexBundleMethodState): a functor inheriting from Stepsize to determine a step size
  • inverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction $\operatorname{retr}^{-1}$ to use, see the section on retractions and their inverses
  • stopping_criterion=StopWhenLagrangeMultiplierLess(1e-8)|StopAfterIteration(5000): a functor indicating that the stopping criterion is fulfilled
  • vector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport $\mathcal T_{⋅←⋅}$ to use, see the section on vector transports
  • sub_state=AllocatingEvaluation(): a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.
  • sub_problem=convex_bundle_method_subsolver: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.
  • X=zero_vector(M, p): a tangent vector at the point $p$ on the manifold $\mathcal M$

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source

State

Manopt.ConvexBundleMethodStateType
ConvexBundleMethodState <: AbstractManoptSolverState

Stores option values for a convex_bundle_method solver.

Fields

The following fields require a (real) number type R, as well as a point type P and a tangent vector type T.

  • atol_λ::R: tolerance parameter for the convex coefficients in λ
  • atol_errors::R: tolerance parameter for the linearization errors
  • bundle<:AbstractVector{Tuple{<:P,<:T}}: bundle that collects each iterate with the computed subgradient at the iterate
  • bundle_cap::Int: the maximal number of elements the bundle is allowed to remember
  • diameter::R: estimate for the diameter of the level set of the objective function at the starting point
  • domain: the domain of f as a function (M, p) -> b that evaluates to true when the current candidate is in the domain of f, and false otherwise
  • g::T: descent direction
  • inverse_retraction_method::AbstractInverseRetractionMethod: an inverse retraction $\operatorname{retr}^{-1}$ to use, see the section on retractions and their inverses
  • k_max::R: upper bound on the sectional curvature of the manifold
  • linearization_errors<:AbstractVector{<:R}: linearization errors at the last serious step
  • m::R: the parameter to test the decrease of the cost: $f(q_{k+1}) ≤ f(p_k) + m ξ$.
  • p::P: a point on the manifold $\mathcal M$ storing the current iterate
  • p_last_serious::P: last serious iterate
  • retraction_method::AbstractRetractionMethod: a retraction $\operatorname{retr}$ to use, see the section on retractions
  • stop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled
  • transported_subgradients: subgradients of the bundle that are transported to p_last_serious
  • vector_transport_method::AbstractVectorTransportMethod: a vector transport $\mathcal T_{⋅←⋅}$ to use, see the section on vector transports
  • X::T: a tangent vector at the point $p$ on the manifold $\mathcal M$ storing a subgradient at the current iterate
  • stepsize::Stepsize: a functor inheriting from Stepsize to determine a step size
  • ε::R: convex combination of the linearization errors
  • λ::AbstractVector{<:R}: convex coefficients from the solution of the subproblem
  • ξ: the stopping parameter given by $ξ = -\lVert g \rVert^2 - ε$
  • sub_problem::Union{AbstractManoptProblem, F}: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.
  • sub_state::Union{AbstractManoptSolverState, F}: a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.

Constructor

ConvexBundleMethodState(M::AbstractManifold, sub_problem, sub_state; kwargs...)
ConvexBundleMethodState(M::AbstractManifold, sub_problem=convex_bundle_method_subsolver; evaluation=AllocatingEvaluation(), kwargs...)

Generate the state for the convex_bundle_method on the manifold M.

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • sub_problem: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.
  • sub_state: a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.

Keyword arguments

Most of the following keyword arguments set default values for the fields mentioned before.

source

Stopping criteria

Manopt.StopWhenLagrangeMultiplierLessType
StopWhenLagrangeMultiplierLess <: StoppingCriterion

Stopping Criteria for Lagrange multipliers.

Currently these are meant for the convex_bundle_method and proximal_bundle_method, where based on the Lagrange multipliers an approximate (sub)gradient $g$ and an error estimate $ε$ is computed.

The mode=:both requires that both $ε$ and $\lvert g \rvert$ are smaller than their tolerances for the convex_bundle_method, and that $c$ and $\lvert d \rvert$ are smaller than their tolerances for the proximal_bundle_method.

The mode=:estimate requires that, for the convex_bundle_method $-ξ = \lvert g \rvert^2 + ε$ is less than a given tolerance. For the proximal_bundle_method, the equation reads $-ν = μ \lvert d \rvert^2 + c$.

Constructors

StopWhenLagrangeMultiplierLess(tolerance=1e-6; mode::Symbol=:estimate, names=nothing)

Create the stopping criterion for one of the modes mentioned. Note that tolerance can be a single number for the :estimate case, but a vector of two values is required for the :both mode. Here the first entry specifies the tolerance for $ε$ ($c$), the second the tolerance for $\lvert g \rvert$ ($\lvert d \rvert$), respectively.
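The two test modes can be sketched as plain scalar checks (the function names are hypothetical, for illustration only): :estimate compares $-ξ = \lvert g \rvert^2 + ε$ with one tolerance, while :both compares $ε$ and $\lvert g \rvert$ with a pair of tolerances.

```julia
# true means "stop": the multiplier-based quantities are small enough.
stop_estimate(ξ, tol) = -ξ < tol
stop_both(ε, norm_g, tol_ε, tol_g) = (ε < tol_ε) && (norm_g < tol_g)
```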

source

Debug functions

Manopt.DebugWarnIfLagrangeMultiplierIncreasesType
DebugWarnIfLagrangeMultiplierIncreases <: DebugAction

print a warning if the Lagrange parameter based value $-ξ$ of the bundle method increases.

Constructor

DebugWarnIfLagrangeMultiplierIncreases(warn=:Once; tol=1e2)

Initialize the warning to warning level (:Once) and introduce a tolerance for the test of 1e2.

The warn level can be set to :Once to only warn the first time the cost increases, to :Always to report an increase every time it happens, and it can be set to :No to deactivate the warning, in which case this DebugAction is inactive. All other symbols are handled as if they were :Always.

source

Helpers and internal functions

Manopt.convex_bundle_method_subsolverFunction
λ = convex_bundle_method_subsolver(M, p_last_serious, linearization_errors, transported_subgradients)
 convex_bundle_method_subsolver!(M, λ, p_last_serious, linearization_errors, transported_subgradients)

solver for the subproblem of the convex bundle method at the last serious iterate $p_k$ given the current linearization errors $c_j^k$, and transported subgradients $\mathrm{P}_{p_k←q_j} X_{q_j}$.

The computation can also be done in-place of λ.

The subproblem for the convex bundle method is

\[\begin{align*}
\operatorname*{arg\,min}_{λ ∈ ℝ^{\lvert J_k\rvert}}&\quad \frac{1}{2} \Bigl\lVert \sum_{j ∈ J_k} λ_j \mathrm{P}_{p_k←q_j} X_{q_j} \Bigr\rVert^2 + \sum_{j ∈ J_k} λ_j c_j^k\\
\text{s.t.}&\quad \sum_{j ∈ J_k} λ_j = 1,
\quad λ_j ≥ 0
\quad \text{for all } j ∈ J_k,
\end{align*}\]

where $J_k = \{j ∈ J_{k-1} \ | \ λ_j > 0\} \cup \{k\}$. See [BHJ24] for more details.
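For a bundle of only two elements this quadratic program has a closed form, which can be sketched in plain Julia (a hypothetical helper for intuition, not the RipQP-based subsolver): minimize $\frac{1}{2}\lVert λ_1 X_1 + λ_2 X_2 \rVert^2 + λ_1 c_1 + λ_2 c_2$ over the simplex $λ_1 + λ_2 = 1$, $λ ≥ 0$.

```julia
using LinearAlgebra

# Substituting λ₂ = 1 - λ₁ gives a scalar quadratic in λ₁; its unconstrained
# minimizer is clamped to [0, 1] to enforce the simplex constraint.
function two_element_subsolver(X1, X2, c1, c2)
    denom = norm(X1 - X2)^2
    denom < eps() && return c1 ≤ c2 ? [1.0, 0.0] : [0.0, 1.0]
    λ1 = clamp((dot(X2 - X1, X2) + c2 - c1) / denom, 0.0, 1.0)
    return [λ1, 1 - λ1]
end
```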

Tip

A default subsolver based on RipQP.jl and QuadraticModels is available if these two packages are loaded.

source
Manopt.DomainBackTrackingStepsizeType
DomainBackTrackingStepsize <: Stepsize

Implement a backtracking as long as $q = \operatorname{retr}_p(X)$ yields a point closer to $p$ than $\lVert X \rVert_p$, or $q$ is not in the domain. For the domain check this step size requires a ConvexBundleMethodState.
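The domain part of this backtracking can be sketched for the Euclidean case (the distance-based test is manifold specific and omitted; the helper name is hypothetical): halve the step until the candidate $p + tX$ lies in the domain.

```julia
# Shrink t until in_domain(p + t*X) holds, or the iteration budget runs out.
function domain_backtrack(p, X, in_domain; t=1.0, contraction=0.5, maxiter=60)
    q = p .+ t .* X
    while !in_domain(q) && maxiter > 0
        t *= contraction
        q = p .+ t .* X
        maxiter -= 1
    end
    return t
end
```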

source

Literature

[BHJ24]
R. Bergmann, R. Herzog and H. Jasa. The Riemannian Convex Bundle Method, preprint (2024), arXiv:2402.13670.

Cyclic proximal point

The Cyclic Proximal Point (CPP) algorithm aims to minimize

\[F(x) = \sum_{i=1}^c f_i(x)\]

assuming that the proximal maps $\operatorname{prox}_{λ f_i}(x)$ are given in closed form or can be computed efficiently (at least approximately).

The algorithm then cycles through these proximal maps, where the type of cycle might differ and the proximal parameter $λ_k$ changes after each cycle $k$.

For a convergence result on Hadamard manifolds see Bačák [Bac14].
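As a concrete Euclidean illustration (not the Manopt.jl implementation), take $f_i(x) = \frac{1}{2}\lVert x - a_i \rVert^2$, whose proximal map is $\operatorname{prox}_{λ f_i}(x) = (x + λ a_i)/(1 + λ)$; cycling through these proximal maps with a decreasing $λ_k$ drives the iterate toward the mean of the anchors $a_i$, the minimizer of $F$.

```julia
# Euclidean sketch of the cyclic proximal point cycle for squared-distance
# summands; the function name is illustrative only.
function cyclic_proximal_point_sketch(anchors; cycles=1000)
    x = copy(first(anchors))
    for k in 1:cycles
        λ = 1.0 / k   # square summable but not summable sequence λₖ
        for a in anchors
            x = (x .+ λ .* a) ./ (1 + λ)   # closed-form prox of ½‖x - a‖²
        end
    end
    return x
end
```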

Manopt.cyclic_proximal_pointFunction
cyclic_proximal_point(M, f, proxes_f, p; kwargs...)
 cyclic_proximal_point(M, mpo, p; kwargs...)
 cyclic_proximal_point!(M, f, proxes_f; kwargs...)
cyclic_proximal_point!(M, mpo; kwargs...)

perform a cyclic proximal point algorithm. This can be done in-place of p.

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • f: a cost function $f: \mathcal M→ℝ$ to minimize
  • proxes_f: an Array of proximal maps (Functions) (M,λ,p) -> q or (M, q, λ, p) -> q for the summands of $f$ (see evaluation)

where f and the proximal maps proxes_f can also be given directly as a ManifoldProximalMapObjective mpo

Keyword arguments

  • evaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.
  • evaluation_order=:Linear: whether to use a randomly permuted sequence (:FixedRandom), a per-cycle permuted sequence (:Random), or the default linear one.
  • λ=iter -> 1/iter: a function returning the (square summable but not summable) sequence of $λ_i$
  • stopping_criterion=StopAfterIteration(5000)|StopWhenChangeLess(1e-12): a functor indicating that the stopping criterion is fulfilled

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source
Manopt.cyclic_proximal_point!Function
cyclic_proximal_point(M, f, proxes_f, p; kwargs...)
 cyclic_proximal_point(M, mpo, p; kwargs...)
 cyclic_proximal_point!(M, f, proxes_f; kwargs...)
 cyclic_proximal_point!(M, mpo; kwargs...)

perform a cyclic proximal point algorithm. This can be done in-place of p.

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • f: a cost function $f: \mathcal M→ℝ$ to minimize
  • proxes_f: an Array of proximal maps (Functions) (M,λ,p) -> q or (M, q, λ, p) -> q for the summands of $f$ (see evaluation)

where f and the proximal maps proxes_f can also be given directly as a ManifoldProximalMapObjective mpo

Keyword arguments

  • evaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.
  • evaluation_order=:Linear: whether to use a randomly permuted sequence (:FixedRandom), a per-cycle permuted sequence (:Random), or the default linear one (:Linear).
  • λ=iter -> 1/iter: a function returning the (square summable but not summable) sequence of $λ_i$
  • stopping_criterion=StopAfterIteration(5000)|StopWhenChangeLess(1e-12): a functor indicating that the stopping criterion is fulfilled

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source

Technical details

The cyclic_proximal_point solver requires no additional functions to be available for your manifold, besides the ones you use in the proximal maps.

By default, one of the stopping criteria is StopWhenChangeLess, which requires either

  • an inverse_retract!(M, X, p, q); it is recommended to set the default_inverse_retraction_method to your favourite inverse retraction. If this default is set, an inverse_retraction_method= does not have to be specified; or
  • the distance(M, p, q) for said default inverse retraction.

State

Manopt.CyclicProximalPointStateType
CyclicProximalPointState <: AbstractManoptSolverState

stores options for the cyclic_proximal_point algorithm.

Fields

  • p::P: a point on the manifold $\mathcal M$ storing the current iterate
  • stop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled
  • λ: a function for the values of $λ_k$ per iteration (cycle $k$)
  • order_type: whether to use a randomly permuted sequence (:FixedRandomOrder), a per cycle permuted sequence (:RandomOrder) or the default linear one.

Constructor

CyclicProximalPointState(M::AbstractManifold; kwargs...)

Generate the options

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$

Keyword arguments

  • evaluation_order=:LinearOrder: specify the order_type
  • λ=i -> 1.0 / i: a function to compute the $λ_k, k ∈ \mathbb N$,
  • p=rand(M): a point on the manifold $\mathcal M$ to specify the initial value
  • stopping_criterion=StopAfterIteration(2000): a functor indicating that the stopping criterion is fulfilled

See also

cyclic_proximal_point

source

Debug functions

Record functions

Literature

[Bac14]
M. Bačák. Computing medians and means in Hadamard spaces. SIAM Journal on Optimization 24, 1542–1566 (2014), arXiv:1210.2145.

Difference of convex

Difference of convex algorithm

Manopt.difference_of_convex_algorithmFunction
difference_of_convex_algorithm(M, f, g, ∂h, p=rand(M); kwargs...)
 difference_of_convex_algorithm(M, mdco, p; kwargs...)
 difference_of_convex_algorithm!(M, f, g, ∂h, p; kwargs...)
 difference_of_convex_algorithm!(M, mdco, p; kwargs...)

Compute the difference of convex algorithm [BFSS23] to minimize

\[ \operatorname{arg\,min}_{p∈\mathcal M}\ g(p) - h(p)\]

where you need to provide $f(p) = g(p) - h(p)$, $g$ and the subdifferential $∂h$ of $h$.

This algorithm performs the following steps given a start point p= $p^{(0)}$. Then repeat for $k=0,1,…$

  1. Take $X^{(k)} ∈ ∂h(p^{(k)})$
  2. Set the next iterate to the solution of the subproblem

\[ p^{(k+1)} ∈ \operatorname{arg\,min}_{q ∈ \mathcal M} g(q) - ⟨X^{(k)}, \log_{p^{(k)}}q⟩\]

until the stopping criterion (see the stopping_criterion keyword) is fulfilled.

Keyword arguments

  • evaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.
  • gradient=nothing: specify $\operatorname{grad} f$, for debug / analysis or enhancing the stopping_criterion=
  • grad_g=nothing: specify the gradient of g. If specified, a subsolver is automatically set up.
  • stopping_criterion=StopAfterIteration(200)|StopWhenChangeLess(1e-8): a functor indicating that the stopping criterion is fulfilled
  • g=nothing: specify the function g. If specified, a subsolver is automatically set up.
  • sub_cost=LinearizedDCCost(g, p, initial_vector): a cost to be used within the default sub_problem. This is used to define the sub_objective= keyword and has hence no effect, if you set sub_objective directly.
  • sub_grad=LinearizedDCGrad(grad_g, p, initial_vector; evaluation=evaluation): gradient to be used within the default sub_problem. This is used to define the sub_objective= keyword and has hence no effect, if you set sub_objective directly.
  • sub_hess: (a finite difference approximation using sub_grad by default): specify a Hessian of the sub_cost, which the default solver (see sub_state=) needs. This is used to define the sub_objective= keyword and has hence no effect, if you set sub_objective directly.
  • sub_kwargs=(;): a named tuple of keyword arguments that are passed to decorate_objective! of the sub solver's objective, the decorate_state! of the sub solver's state, and the sub state constructor itself.
  • sub_objective: a gradient or Hessian objective based on sub_cost=, sub_grad=, and sub_hess (if provided), the objective used within sub_problem. This is used to define the sub_problem= keyword and has hence no effect, if you set sub_problem directly.
  • sub_state=(GradientDescentState or TrustRegionsState if sub_hessian is provided): a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.
  • sub_problem=DefaultManoptProblem(M, sub_objective): specify a problem for a solver or a closed form solution function, which can be allocating or in-place.
  • sub_stopping_criterion=StopAfterIteration(300)|StopWhenStepsizeLess(1e-9)|StopWhenGradientNormLess(1e-9): a stopping criterion used within the default sub_state=. This is used to define the sub_state= keyword and has hence no effect, if you set sub_state directly.
  • sub_stepsize=ArmijoLinesearch(M): specify a step size used within the sub_state. This is used to define the sub_state= keyword and has hence no effect, if you set sub_state directly.
  • X=zero_vector(M, p): a tangent vector at the point $p$ on the manifold $\mathcal M$ to specify the representation of a tangent vector

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source
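To make steps 1 and 2 concrete, here is a hedged Euclidean sketch in Python (a toy setup of ours, not the Manopt.jl interface) with $g(x) = x^2$ and $h(x) = \lvert x\rvert$, so that $f = g - h$ has minimizers $±1/2$ and the linearized subproblem is solvable in closed form:

```python
# Difference of convex algorithm on R: f(x) = g(x) - h(x) with g(x) = x^2
# and h(x) = |x|.  Step 1 picks X in ∂h(p); step 2 solves the linearized
# subproblem argmin_q g(q) - X*(q - p), which here reduces to q = X/2.
def dca(p, iters=10):
    for _ in range(iters):
        X = 1.0 if p > 0 else -1.0   # a subgradient of |·| at p
        p = X / 2.0                  # closed-form subproblem solution
    return p
```

Starting from any positive point, the iteration reaches the minimizer $1/2$ after one step; from a nonpositive point it reaches $-1/2$, illustrating that DCA finds a critical point depending on the start.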
Manopt.difference_of_convex_algorithm!Function
difference_of_convex_algorithm(M, f, g, ∂h, p=rand(M); kwargs...)
 difference_of_convex_algorithm(M, mdco, p; kwargs...)
 difference_of_convex_algorithm!(M, f, g, ∂h, p; kwargs...)
 difference_of_convex_algorithm!(M, mdco, p; kwargs...)

Compute the difference of convex algorithm [BFSS23] to minimize

\[ \operatorname{arg\,min}_{p∈\mathcal M}\ g(p) - h(p)\]

where you need to provide $f(p) = g(p) - h(p)$, $g$ and the subdifferential $∂h$ of $h$.

This algorithm performs the following steps given a start point p= $p^{(0)}$. Then repeat for $k=0,1,…$

  1. Take $X^{(k)} ∈ ∂h(p^{(k)})$
  2. Set the next iterate to the solution of the subproblem

\[ p^{(k+1)} ∈ \operatorname{arg\,min}_{q ∈ \mathcal M} g(q) - ⟨X^{(k)}, \log_{p^{(k)}}q⟩\]

until the stopping criterion (see the stopping_criterion keyword) is fulfilled.

Keyword arguments

  • evaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.
  • gradient=nothing: specify $\operatorname{grad} f$, for debug / analysis or enhancing the stopping_criterion=
  • grad_g=nothing: specify the gradient of g. If specified, a subsolver is automatically set up.
  • stopping_criterion=StopAfterIteration(200)|StopWhenChangeLess(1e-8): a functor indicating that the stopping criterion is fulfilled
  • g=nothing: specify the function g. If specified, a subsolver is automatically set up.
  • sub_cost=LinearizedDCCost(g, p, initial_vector): a cost to be used within the default sub_problem. This is used to define the sub_objective= keyword and has hence no effect, if you set sub_objective directly.
  • sub_grad=LinearizedDCGrad(grad_g, p, initial_vector; evaluation=evaluation): gradient to be used within the default sub_problem. This is used to define the sub_objective= keyword and has hence no effect, if you set sub_objective directly.
  • sub_hess: (a finite difference approximation using sub_grad by default): specify a Hessian of the sub_cost, which the default solver (see sub_state=) needs. This is used to define the sub_objective= keyword and has hence no effect, if you set sub_objective directly.
  • sub_kwargs=(;): a named tuple of keyword arguments that are passed to decorate_objective! of the sub solver's objective, the decorate_state! of the sub solver's state, and the sub state constructor itself.
  • sub_objective: a gradient or Hessian objective based on sub_cost=, sub_grad=, and sub_hess (if provided), the objective used within sub_problem. This is used to define the sub_problem= keyword and has hence no effect, if you set sub_problem directly.
  • sub_state=(GradientDescentState or TrustRegionsState if sub_hessian is provided): a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.
  • sub_problem=DefaultManoptProblem(M, sub_objective): specify a problem for a solver or a closed form solution function, which can be allocating or in-place.
  • sub_stopping_criterion=StopAfterIteration(300)|StopWhenStepsizeLess(1e-9)|StopWhenGradientNormLess(1e-9): a stopping criterion used within the default sub_state=. This is used to define the sub_state= keyword and has hence no effect, if you set sub_state directly.
  • sub_stepsize=ArmijoLinesearch(M): specify a step size used within the sub_state. This is used to define the sub_state= keyword and has hence no effect, if you set sub_state directly.
  • X=zero_vector(M, p): a tangent vector at the point $p$ on the manifold $\mathcal M$ to specify the representation of a tangent vector

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source

Difference of convex proximal point

Manopt.difference_of_convex_proximal_pointFunction
difference_of_convex_proximal_point(M, grad_h, p=rand(M); kwargs...)
 difference_of_convex_proximal_point(M, mdcpo, p=rand(M); kwargs...)
 difference_of_convex_proximal_point!(M, grad_h, p; kwargs...)
 difference_of_convex_proximal_point!(M, mdcpo, p; kwargs...)

Compute the difference of convex proximal point algorithm [SO15] to minimize

\[ \operatorname{arg\,min}_{p∈\mathcal M} g(p) - h(p)\]

where you have to provide the subgradient $∂h$ of $h$ and either

  • the proximal map $\operatorname{prox}_{λg}$ of g as a function prox_g(M, λ, p) or prox_g(M, q, λ, p)
  • the functions g and grad_g to compute the proximal map using a sub solver
  • your own sub-solver, specified by sub_problem= and sub_state=

This algorithm performs the following steps given a start point p= $p^{(0)}$. Then repeat for $k=0,1,…$

  1. $X^{(k)} ∈ \operatorname{grad} h(p^{(k)})$
  2. $q^{(k)} = \operatorname{retr}_{p^{(k)}}(λ_kX^{(k)})$
  3. $r^{(k)} = \operatorname{prox}_{λ_kg}(q^{(k)})$
  4. $X^{(k)} = \operatorname{retr}^{-1}_{p^{(k)}}(r^{(k)})$
  5. Compute a stepsize $s_k$ and
  6. set $p^{(k+1)} = \operatorname{retr}_{p^{(k)}}(s_kX^{(k)})$.

until the stopping_criterion is fulfilled.

See [ACOO20] for more details on the modified variant, where steps 4-6 are slightly changed, since here the classical proximal point method for DC functions is obtained for $s_k = 1$, and one can hence employ usual line search methods.
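Steps 1-6 can be traced in a hedged Euclidean Python sketch (a toy setup of ours, not the Manopt.jl API), where $\operatorname{retr}_p(X) = p + X$, $g(x) = x^2/2$ so that $\operatorname{prox}_{λg}(q) = q/(1+λ)$, and $h(x) = ax$, making $f = g - h$ minimal at $x = a$:

```python
# Difference of convex proximal point on R with constant stepsize s_k = 1.
def dcpp(a, p, lam=0.5, s=1.0, iters=100):
    for _ in range(iters):
        X = a                        # 1. gradient of h at p
        q = p + lam * X              # 2. retract along λ_k X
        r = q / (1.0 + lam)          # 3. prox of λ_k g at q
        X = r - p                    # 4. inverse retraction of r at p
        p = p + s * X                # 5./6. step with stepsize s_k
    return p

p_star = dcpp(a=3.0, p=0.0)          # ≈ 3.0, the minimizer of x^2/2 - 3x
```

Each sweep is the fixed-point map $p ↦ (p + λa)/(1+λ)$, a contraction toward $a$, so the iterates converge linearly to the minimizer.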

Keyword arguments

  • λ=k -> 1/2: a function returning the sequence of prox parameters $λ_k$
  • cost=nothing: provide the cost f, for debug reasons / analysis
  • evaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.
  • gradient=nothing: specify $\operatorname{grad} f$, for debug / analysis or enhancing the stopping_criterion
  • prox_g=nothing: specify a proximal map for the sub problem or both of the following
  • g=nothing: specify the function g.
  • grad_g=nothing: specify the gradient of g. If both g and grad_g are specified, a subsolver is automatically set up.
  • inverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction $\operatorname{retr}^{-1}$ to use, see the section on retractions and their inverses
  • retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions
  • stepsize=ConstantLength(): a functor inheriting from Stepsize to determine a step size
  • stopping_criterion=StopAfterIteration(200)|StopWhenChangeLess(1e-8): a functor indicating that the stopping criterion is fulfilled. A StopWhenGradientNormLess(1e-8) is added with |, when a gradient is provided.
  • sub_cost=ProximalDCCost(g, copy(M, p), λ(1)): cost to be used within the default sub_problem that is initialized as soon as g is provided. This is used to define the sub_objective= keyword and has hence no effect, if you set sub_objective directly.
  • sub_grad=ProximalDCGrad(grad_g, copy(M, p), λ(1); evaluation=evaluation): gradient to be used within the default sub_problem, that is initialized as soon as grad_g is provided. This is used to define the sub_objective= keyword and has hence no effect, if you set sub_objective directly.
  • sub_hess: (a finite difference approximation using sub_grad by default): specify a Hessian of the sub_cost, which the default solver (see sub_state=) needs.
  • sub_kwargs=(;): a named tuple of keyword arguments that are passed to decorate_objective! of the sub solver's objective, the decorate_state! of the sub solver's state, and the sub state constructor itself.
  • sub_objective: a gradient or Hessian objective based on sub_cost=, sub_grad=, and sub_hess (if provided), the objective used within sub_problem. This is used to define the sub_problem= keyword and has hence no effect, if you set sub_problem directly.
  • sub_problem=DefaultManoptProblem(M, sub_objective): specify a problem for a solver or a closed form solution function, which can be allocating or in-place.
  • sub_state=(GradientDescentState or TrustRegionsState if sub_hessian is provided): a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.
  • sub_stopping_criterion=StopAfterIteration(300)|StopWhenGradientNormLess(1e-8): a functor indicating that the stopping criterion is fulfilled. This is used to define the sub_state= keyword and has hence no effect, if you set sub_state directly.

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source
Manopt.difference_of_convex_proximal_point!Function
difference_of_convex_proximal_point(M, grad_h, p=rand(M); kwargs...)
 difference_of_convex_proximal_point(M, mdcpo, p=rand(M); kwargs...)
 difference_of_convex_proximal_point!(M, grad_h, p; kwargs...)
 difference_of_convex_proximal_point!(M, mdcpo, p; kwargs...)

Compute the difference of convex proximal point algorithm [SO15] to minimize

\[ \operatorname{arg\,min}_{p∈\mathcal M} g(p) - h(p)\]

where you have to provide the subgradient $∂h$ of $h$ and either

  • the proximal map $\operatorname{prox}_{λg}$ of g as a function prox_g(M, λ, p) or prox_g(M, q, λ, p)
  • the functions g and grad_g to compute the proximal map using a sub solver
  • your own sub-solver, specified by sub_problem= and sub_state=

This algorithm performs the following steps given a start point p= $p^{(0)}$. Then repeat for $k=0,1,…$

  1. $X^{(k)} ∈ \operatorname{grad} h(p^{(k)})$
  2. $q^{(k)} = \operatorname{retr}_{p^{(k)}}(λ_kX^{(k)})$
  3. $r^{(k)} = \operatorname{prox}_{λ_kg}(q^{(k)})$
  4. $X^{(k)} = \operatorname{retr}^{-1}_{p^{(k)}}(r^{(k)})$
  5. Compute a stepsize $s_k$ and
  6. set $p^{(k+1)} = \operatorname{retr}_{p^{(k)}}(s_kX^{(k)})$.

until the stopping_criterion is fulfilled.

See [ACOO20] for more details on the modified variant, where steps 4-6 are slightly changed, since here the classical proximal point method for DC functions is obtained for $s_k = 1$, and one can hence employ usual line search methods.

Keyword arguments

  • λ=k -> 1/2: a function returning the sequence of prox parameters $λ_k$
  • cost=nothing: provide the cost f, for debug reasons / analysis
  • evaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.
  • gradient=nothing: specify $\operatorname{grad} f$, for debug / analysis or enhancing the stopping_criterion
  • prox_g=nothing: specify a proximal map for the sub problem or both of the following
  • g=nothing: specify the function g.
  • grad_g=nothing: specify the gradient of g. If both g and grad_g are specified, a sub solver is automatically set up.
  • inverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction $\operatorname{retr}^{-1}$ to use, see the section on retractions and their inverses
  • retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions
  • stepsize=ConstantLength(): a functor inheriting from Stepsize to determine a step size
  • stopping_criterion=StopAfterIteration(200)|StopWhenChangeLess(1e-8): a functor indicating that the stopping criterion is fulfilled. A StopWhenGradientNormLess(1e-8) is added with |, when a gradient is provided.
  • sub_cost=ProximalDCCost(g, copy(M, p), λ(1)): cost to be used within the default sub_problem that is initialized as soon as g is provided. This is used to define the sub_objective= keyword and has hence no effect, if you set sub_objective directly.
  • sub_grad=ProximalDCGrad(grad_g, copy(M, p), λ(1); evaluation=evaluation): gradient to be used within the default sub_problem, that is initialized as soon as grad_g is provided. This is used to define the sub_objective= keyword and has hence no effect, if you set sub_objective directly.
  • sub_hess: (a finite difference approximation using sub_grad by default): specify a Hessian of the sub_cost, which the default solver, see sub_state=, needs.
  • sub_kwargs=(;): a named tuple of keyword arguments that are passed to decorate_objective! of the sub solver's objective, the decorate_state! of the sub solver's state, and the sub state constructor itself.
  • sub_objective: a gradient or Hessian objective based on sub_cost=, sub_grad=, and sub_hess if provided; the objective used within sub_problem. This is used to define the sub_problem= keyword and has hence no effect, if you set sub_problem directly.
  • sub_problem=DefaultManoptProblem(M, sub_objective): specify a problem for a solver or a closed form solution function, which can be allocating or in-place.
  • sub_state=(GradientDescentState or TrustRegionsState if sub_hessian is provided): a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.
  • sub_stopping_criterion=StopAfterIteration(300)|StopWhenGradientNormLess(1e-8): a functor indicating that the stopping criterion is fulfilled. This is used to define the sub_state= keyword and has hence no effect, if you set sub_state directly.

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source

Solver states

Manopt.DifferenceOfConvexStateType
DifferenceOfConvexState{Pr,St,P,T,SC<:StoppingCriterion} <:
            AbstractManoptSolverState

A struct to store the current state of the difference_of_convex_algorithm. It comes in two forms, depending on the realisation of the subproblem.

Fields

  • p::P: a point on the manifold $\mathcal M$ storing the current iterate
  • X::T: a tangent vector at the point $p$ on the manifold $\mathcal M$ storing a subgradient at the current iterate
  • sub_problem::Union{AbstractManoptProblem, F}: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.
  • sub_state::Union{AbstractManoptSolverState, F}: a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.
  • stop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled

The sub task consists of a method to solve

\[ \operatorname{arg\,min}_{q∈\mathcal M}\ g(q) - ⟨X, \log_p q⟩.\]

Besides a problem and a state, one can also provide a function and an AbstractEvaluationType, respectively, to indicate a closed form solution for the sub task.

Constructors

DifferenceOfConvexState(M, sub_problem, sub_state; kwargs...)
DifferenceOfConvexState(M, sub_solver; evaluation=InplaceEvaluation(), kwargs...)

Generate the state either using a solver from Manopt, given by an AbstractManoptProblem sub_problem and an AbstractManoptSolverState sub_state, or using a closed form solution sub_solver for the sub-problem. In the latter case, the function is expected to be of the form (M, p, X) -> q or (M, q, p, X) -> q, where, by default, its AbstractEvaluationType evaluation is in-place of q. The current iterate p and the subgradient X of h are passed to that function.
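
For illustration, a closed form solution might be passed as follows; the function closed_form_solver here is a hypothetical placeholder that merely sketches the required (M, p, X) -> q shape, not an actual minimizer of a sub problem:

```julia
using Manopt, Manifolds

M = Sphere(2)

# hypothetical closed form solution of the sub task, allocating form
closed_form_solver(M, p, X) = exp(M, p, 0.5 * X)

st = DifferenceOfConvexState(
    M, closed_form_solver;
    evaluation=AllocatingEvaluation(),
)
```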

Further keyword arguments

  • p=rand(M): a point on the manifold $\mathcal M$ to specify the initial value
  • stopping_criterion=StopAfterIteration(200): a functor indicating that the stopping criterion is fulfilled
  • X=zero_vector(M, p): a tangent vector at the point $p$ on the manifold $\mathcal M$ to specify the representation of a tangent vector
source
Manopt.DifferenceOfConvexProximalStateType
DifferenceOfConvexProximalState{P, T, Pr, St, S<:Stepsize, SC<:StoppingCriterion, RTR<:AbstractRetractionMethod, ITR<:AbstractInverseRetractionMethod}
     <: AbstractSubProblemSolverState

A struct to store the current state of the difference_of_convex_proximal_point algorithm. It comes in two forms, depending on the realisation of the subproblem.

Fields

  • inverse_retraction_method::AbstractInverseRetractionMethod: an inverse retraction $\operatorname{retr}^{-1}$ to use, see the section on retractions and their inverses
  • retraction_method::AbstractRetractionMethod: a retraction $\operatorname{retr}$ to use, see the section on retractions
  • p::P: a point on the manifold $\mathcal M$ storing the current iterate
  • q::P: a point on the manifold $\mathcal M$ storing the gradient step
  • r::P: a point on the manifold $\mathcal M$ storing the result of the proximal map
  • stepsize::Stepsize: a functor inheriting from Stepsize to determine a step size
  • stop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled
  • X, Y: the current gradient and descent direction, respectively; their common type is set by the keyword X
  • sub_problem::Union{AbstractManoptProblem, F}: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.
  • sub_state::Union{AbstractManoptSolverState, F}: a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.

Constructor

DifferenceOfConvexProximalState(M::AbstractManifold, sub_problem, sub_state; kwargs...)

Construct a difference of convex proximal point state.

DifferenceOfConvexProximalState(M::AbstractManifold, sub_problem;
    evaluation=AllocatingEvaluation(), kwargs...
)

Construct a difference of convex proximal point state, where sub_problem is a closed form solution with evaluation as its evaluation type.

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • sub_problem: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.
  • sub_state: a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.

Keyword arguments

source

The difference of convex objective

Manopt.ManifoldDifferenceOfConvexObjectiveType
ManifoldDifferenceOfConvexObjective{E} <: AbstractManifoldCostObjective{E}

Specify an objective for a difference_of_convex_algorithm.

The objective $f: \mathcal M → ℝ$ is given as

\[ f(p) = g(p) - h(p)\]

where both $g$ and $h$ are convex, lower semicontinuous and proper. Furthermore the subdifferential $∂h$ of $h$ is required.

Fields

  • cost: an implementation of $f(p) = g(p)-h(p)$ as a function f(M,p).
  • ∂h!!: a deterministic version of $∂h: \mathcal M → T\mathcal M$, in the sense that calling ∂h(M, p) returns a subgradient of $h$ at p and if there is more than one, it returns a deterministic choice.

Note that the subdifferential might be given in two possible signatures

source

as well as for the corresponding sub problem

Manopt.LinearizedDCCostType
LinearizedDCCost

A functor (M,q) → ℝ to represent the inner problem of a ManifoldDifferenceOfConvexObjective. This is a cost function of the form

\[ F_{p_k,X_k}(p) = g(p) - ⟨X_k, \log_{p_k}p⟩\]

for a point p_k and a tangent vector X_k at p_k (for example outer iterates) that are stored within this functor as well.

Fields

  • g a function
  • pk a point on a manifold
  • Xk a tangent vector at pk

Both interim values can be set using set_parameter!(::LinearizedDCCost, ::Val{:p}, p) and set_parameter!(::LinearizedDCCost, ::Val{:X}, X), respectively.

Constructor

LinearizedDCCost(g, p, X)
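
A small illustrative use of this functor; the function g, the anchor pk, and the tangent vector Xk are assumptions of the sketch:

```julia
using Manopt, Manifolds

M = Sphere(2)
pk = [1.0, 0.0, 0.0]
Xk = [0.0, 0.5, 0.0]            # a tangent vector at pk
g(M, q) = distance(M, q, pk)^2  # an illustrative convex part

lc = LinearizedDCCost(g, pk, Xk)
q = [0.0, 0.0, 1.0]
value = lc(M, q)                # g(q) - ⟨Xk, log_{pk} q⟩
# update the anchor for the next outer iterate
set_parameter!(lc, Val(:p), q)
```
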
source
Manopt.LinearizedDCGradType
LinearizedDCGrad

A functor (M, X, p) (in-place) or (M, p) (allocating) to represent the gradient of the inner problem of a ManifoldDifferenceOfConvexObjective. This is the gradient of a cost function of the form

\[ F_{p_k,X_k}(p) = g(p) - ⟨X_k, \log_{p_k}p⟩\]

its gradient is given by using $F=F_1(F_2(p))$, where $F_1(X) = ⟨X_k,X⟩$ and $F_2(p) = \log_{p_k}p$ and the chain rule as well as the adjoint differential of the logarithmic map with respect to its argument for $D^*F_2(p)$

\[ \operatorname{grad} F(q) = \operatorname{grad} g(q) - DF_2^*(q)[X]\]

for a point pk and a tangent vector Xk at pk (the outer iterates) that are stored within this functor as well

Fields

  • grad_g!! the gradient of $g$ (see also LinearizedDCCost)
  • pk a point on a manifold
  • Xk a tangent vector at pk

Both interim values can be set using set_parameter!(::LinearizedDCGrad, ::Val{:p}, p) and set_parameter!(::LinearizedDCGrad, ::Val{:X}, X), respectively.

Constructor

LinearizedDCGrad(grad_g, p, X; evaluation=AllocatingEvaluation())

Where you specify whether grad_g is AllocatingEvaluation or InplaceEvaluation, while this function still provides both signatures.

source
Manopt.ManifoldDifferenceOfConvexProximalObjectiveType
ManifoldDifferenceOfConvexProximalObjective{E} <: Problem

Specify an objective for the difference_of_convex_proximal_point algorithm. The problem is of the form

\[ \operatorname*{argmin}_{p∈\mathcal M} g(p) - h(p)\]

where both $g$ and $h$ are convex, lower semicontinuous and proper.

Fields

  • cost: implementation of $f(p) = g(p)-h(p)$
  • gradient: the gradient of the cost
  • grad_h!!: a function $\operatorname{grad}h: \mathcal M → T\mathcal M$,

Note that both the gradients might be given in two possible signatures as allocating or in-place.

Constructor

ManifoldDifferenceOfConvexProximalObjective(gradh; cost=nothing, gradient=nothing)

Note that neither cost nor gradient is required for the algorithm; they are only used for eventual debug output or stopping criteria.
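
As a sketch of constructing such an objective, where grad_h and the optional cost are illustrative assumptions:

```julia
using Manopt, Manifolds

M = Sphere(2)
q2 = [0.0, 1.0, 0.0]
grad_h(M, p) = -log(M, p, q2)    # illustrative gradient of h
f(M, p) = distance(M, p, q2)^2   # optional cost, for debug only

mdcpo = ManifoldDifferenceOfConvexProximalObjective(grad_h; cost=f)
```

The resulting mdcpo can then be passed to difference_of_convex_proximal_point together with a sub solver or proximal map for g.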

source

as well as for the corresponding sub problems

Manopt.ProximalDCCostType
ProximalDCCost

A functor (M, p) → ℝ to represent the inner cost function of a ManifoldDifferenceOfConvexProximalObjective. This is the cost function of the proximal map of g.

\[ F_{p_k}(p) = \frac{1}{2λ}d_{\mathcal M}(p_k,p)^2 + g(p)\]

for a point pk and a proximal parameter $λ$.

Fields

  • g - a function
  • pk - a point on a manifold
  • λ - the prox parameter

Both interim values can be set using set_parameter!(::ProximalDCCost, ::Val{:p}, p) and set_parameter!(::ProximalDCCost, ::Val{:λ}, λ), respectively.

Constructor

ProximalDCCost(g, p, λ)
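
An illustrative evaluation of this functor, where g, pk, and λ are assumptions of the sketch:

```julia
using Manopt, Manifolds

M = Sphere(2)
pk = [1.0, 0.0, 0.0]
λ = 0.5
g(M, p) = distance(M, p, [0.0, 0.0, 1.0])^2  # illustrative convex part

pc = ProximalDCCost(g, pk, λ)
p = [0.0, 1.0, 0.0]
value = pc(M, p)   # 1/(2λ) d(pk, p)^2 + g(p)
# update the prox center and parameter between outer iterations
set_parameter!(pc, Val(:p), p)
set_parameter!(pc, Val(:λ), 0.25)
```
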
source
Manopt.ProximalDCGradType
ProximalDCGrad

A functor (M, X, p) (in-place) or (M, p) (allocating) to represent the gradient of the inner cost function of a ManifoldDifferenceOfConvexProximalObjective. This is the gradient function of the proximal map cost function of g. Based on

\[ F_{p_k}(p) = \frac{1}{2λ}d_{\mathcal M}(p_k,p)^2 + g(p)\]

it reads

\[ \operatorname{grad} F_{p_k}(p) = \operatorname{grad} g(p) - \frac{1}{λ}\log_p p_k\]

for a point pk and a proximal parameter λ.

Fields

  • grad_g - a gradient function
  • pk - a point on a manifold
  • λ - the prox parameter

Both interim values can be set using set_parameter!(::ProximalDCGrad, ::Val{:p}, p) and set_parameter!(::ProximalDCGrad, ::Val{:λ}, λ), respectively.

Constructor

ProximalDCGrad(grad_g, pk, λ; evaluation=AllocatingEvaluation())

Where you specify whether grad_g is AllocatingEvaluation or InplaceEvaluation, while this function still always provides both signatures.
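
An illustrative allocating evaluation of this gradient functor; grad_g, pk, and the prox parameter are assumptions of the sketch:

```julia
using Manopt, Manifolds

M = Sphere(2)
pk = [1.0, 0.0, 0.0]
grad_g(M, p) = -2 * log(M, p, [0.0, 0.0, 1.0])  # illustrative grad g

pg = ProximalDCGrad(grad_g, pk, 0.5)
p = [0.0, 1.0, 0.0]
X = pg(M, p)   # grad g(p) - (1/λ) log_p pk
```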

source

Helper functions

Manopt.get_subtrahend_gradientFunction
X = get_subtrahend_gradient(amp, q)
get_subtrahend_gradient!(amp, X, q)

Evaluate the (sub)gradient of the subtrahend h from within a ManifoldDifferenceOfConvexObjective amp at the point q (in place of X).

The evaluation is done in place of X for the !-variant. The T=AllocatingEvaluation problem might still allocate memory within. When the non-mutating variant is called with a T=InplaceEvaluation, memory for the result is allocated.

source
X = get_subtrahend_gradient(M::AbstractManifold, dcpo::ManifoldDifferenceOfConvexProximalObjective, p)
get_subtrahend_gradient!(M::AbstractManifold, X, dcpo::ManifoldDifferenceOfConvexProximalObjective, p)

Evaluate the gradient of the subtrahend $h$ from within a ManifoldDifferenceOfConvexProximalObjective P at the point p (in place of X).

source

Technical details

The difference_of_convex_algorithm and difference_of_convex_proximal_point solvers require the following functions of a manifold to be available:

  • A retract!(M, q, p, X); it is recommended to set the default_retraction_method to a favourite retraction. If this default is set, a retraction_method= or retraction_method_dual= (for $\mathcal N$) does not have to be specified.
  • An inverse_retract!(M, X, p, q); it is recommended to set the default_inverse_retraction_method to a favourite retraction. If this default is set, an inverse_retraction_method= or inverse_retraction_method_dual= (for $\mathcal N$) does not have to be specified.

By default, one of the stopping criteria is StopWhenChangeLess, which either requires

  • A retract!(M, q, p, X); it is recommended to set the default_retraction_method to a favourite retraction. If this default is set, a retraction_method= or retraction_method_dual= (for $\mathcal N$) does not have to be specified.
  • An inverse_retract!(M, X, p, q); it is recommended to set the default_inverse_retraction_method to a favourite retraction. If this default is set, an inverse_retraction_method= or inverse_retraction_method_dual= (for $\mathcal N$) does not have to be specified, or the distance(M, p, q) for said default inverse retraction.
  • A copyto!(M, q, p) and copy(M, p) for points.
  • By default the tangent vector storing the gradient is initialized calling zero_vector(M, p).
  • everything the subsolver requires, which by default is trust_regions, or, if you do not provide a Hessian, gradient_descent.

Literature

[ACOO20]
Y. T. Almeida, J. X. Cruz Neto, P. R. Oliveira and J. C. Oliveira Souza. A modified proximal point method for DC functions on Hadamard manifolds. Computational Optimization and Applications 76, 649–673 (2020).
[BFSS23]
R. Bergmann, O. P. Ferreira, E. M. Santos and J. C. Souza. The difference of convex algorithm on Hadamard manifolds, arXiv preprint (2023).
[SO15]
J. C. Souza and P. R. Oliveira. A proximal point algorithm for DC functions on Hadamard manifolds. Journal of Global Optimization 63, 797–810 (2015).
+ evaluation=AllocatingEvaluation(), kwargs...

)

construct an difference of convex proximal point state, where sub_problem is a closed form solution with evaluation as type of evaluation.

Input

Keyword arguments

source

The difference of convex objective

Manopt.ManifoldDifferenceOfConvexObjectiveType
ManifoldDifferenceOfConvexObjective{E} <: AbstractManifoldCostObjective{E}

Specify an objective for a difference_of_convex_algorithm.

The objective $f: \mathcal M → ℝ$ is given as

\[ f(p) = g(p) - h(p)\]

where both $g$ and $h$ are convex, lower semicontinuous and proper. Furthermore the subdifferential $∂h$ of $h$ is required.

Fields

  • cost: an implementation of $f(p) = g(p)-h(p)$ as a function f(M,p).
  • ∂h!!: a deterministic version of $∂h: \mathcal M → T\mathcal M$, in the sense that calling ∂h(M, p) returns a subgradient of $h$ at p and if there is more than one, it returns a deterministic choice.

Note that the subdifferential might be given in two possible signatures

source

as well as for the corresponding sub problem

Manopt.LinearizedDCCostType
LinearizedDCCost

A functor (M,q) → ℝ to represent the inner problem of a ManifoldDifferenceOfConvexObjective. This is a cost function of the form

\[ F_{p_k,X_k}(p) = g(p) - ⟨X_k, \log_{p_k}p⟩\]

for a point p_k and a tangent vector X_k at p_k (for example outer iterates) that are stored within this functor as well.

Fields

  • g a function
  • pk a point on a manifold
  • Xk a tangent vector at pk

Both interim values can be set using set_parameter!(::LinearizedDCCost, ::Val{:p}, p) and set_parameter!(::LinearizedDCCost, ::Val{:X}, X), respectively.

Constructor

LinearizedDCCost(g, p, X)
source
Manopt.LinearizedDCGradType
LinearizedDCGrad

A functor (M,X,p) → ℝ to represent the gradient of the inner problem of a ManifoldDifferenceOfConvexObjective. This is a gradient function of the form

\[ F_{p_k,X_k}(p) = g(p) - ⟨X_k, \log_{p_k}p⟩\]

its gradient is given by using $F=F_1(F_2(p))$, where $F_1(X) = ⟨X_k,X⟩$ and $F_2(p) = \log_{p_k}p$ and the chain rule as well as the adjoint differential of the logarithmic map with respect to its argument for $D^*F_2(p)$

\[ \operatorname{grad} F(q) = \operatorname{grad} f(q) - DF_2^*(q)[X]\]

for a point pk and a tangent vector Xk at pk (the outer iterates) that are stored within this functor as well

Fields

  • grad_g!! the gradient of $g$ (see also LinearizedDCCost)
  • pk a point on a manifold
  • Xk a tangent vector at pk

Both interim values can be set using set_parameter!(::LinearizedDCGrad, ::Val{:p}, p) and set_parameter!(::LinearizedDCGrad, ::Val{:X}, X), respectively.

Constructor

LinearizedDCGrad(grad_g, p, X; evaluation=AllocatingEvaluation())

Where you specify whether grad_g is AllocatingEvaluation or InplaceEvaluation, while this function still provides both signatures.

source
Manopt.ManifoldDifferenceOfConvexProximalObjectiveType
ManifoldDifferenceOfConvexProximalObjective{E} <: Problem

Specify an objective difference_of_convex_proximal_point algorithm. The problem is of the form

\[ \operatorname*{argmin}_{p∈\mathcal M} g(p) - h(p)\]

where both $g$ and $h$ are convex, lower semicontinuous and proper.

Fields

  • cost: implementation of $f(p) = g(p)-h(p)$
  • gradient: the gradient of the cost
  • grad_h!!: a function $\operatorname{grad}h: \mathcal M → T\mathcal M$,

Note that both the gradients might be given in two possible signatures as allocating or in-place.

Constructor

ManifoldDifferenceOfConvexProximalObjective(gradh; cost=nothing, gradient=nothing)

an note that neither cost nor gradient are required for the algorithm, just for eventual debug or stopping criteria.

source

as well as for the corresponding sub problems

Manopt.ProximalDCCostType
ProximalDCCost

A functor (M, p) → ℝ to represent the inner cost function of a ManifoldDifferenceOfConvexProximalObjective. This is the cost function of the proximal map of g.

\[ F_{p_k}(p) = \frac{1}{2λ}d_{\mathcal M}(p_k,p)^2 + g(p)\]

for a point pk and a proximal parameter $λ$.

Fields

  • g - a function
  • pk - a point on a manifold
  • λ - the prox parameter

Both interim values can be set using set_parameter!(::ProximalDCCost, ::Val{:p}, p) and set_parameter!(::ProximalDCCost, ::Val{:λ}, λ), respectively.

Constructor

ProximalDCCost(g, p, λ)
source
Manopt.ProximalDCGradType
ProximalDCGrad

A functor (M,X,p) → ℝ to represent the gradient of the inner cost function of a ManifoldDifferenceOfConvexProximalObjective. This is the gradient function of the proximal map cost function of g. Based on

\[ F_{p_k}(p) = \frac{1}{2λ}d_{\mathcal M}(p_k,p)^2 + g(p)\]

it reads

\[ \operatorname{grad} F_{p_k}(p) = \operatorname{grad} g(p) - \frac{1}{λ}\log_p p_k\]

for a point pk and a proximal parameter λ.

Fields

  • grad_g - a gradient function
  • pk - a point on a manifold
  • λ - the prox parameter

Both interim values can be set using set_parameter!(::ProximalDCGrad, ::Val{:p}, p) and set_parameter!(::ProximalDCGrad, ::Val{:λ}, λ), respectively.

Constructor

ProximalDCGrad(grad_g, pk, λ; evaluation=AllocatingEvaluation())

Where you specify whether grad_g is AllocatingEvaluation or InplaceEvaluation, while this function still always provides both signatures.

source
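A Euclidean sanity check of this gradient (taking $\mathcal M = ℝ$, where $\log_p q = q - p$), compared against a central finite difference of the matching cost; the names are illustrative, not the Manopt.jl API.

```python
# Euclidean sketch of grad F_{p_k}(p) = grad g(p) - (1/λ) log_p p_k,
# with log_p q = q - p in ℝ.
def proximal_dc_grad(grad_g, p_k, lam):
    return lambda p: grad_g(p) - (p_k - p) / lam

grad_g = lambda p: 2.0 * p        # gradient of g(p) = p^2
gF = proximal_dc_grad(grad_g, p_k=1.0, lam=0.5)

# the matching cost F(p) = (1 - p)^2 + p^2 for a finite-difference check
F = lambda p: (1.0 - p) ** 2 / (2.0 * 0.5) + p ** 2
h = 1e-6
fd = (F(0.3 + h) - F(0.3 - h)) / (2.0 * h)   # ≈ grad F at p = 0.3
```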

Helper functions

Manopt.get_subtrahend_gradientFunction
X = get_subtrahend_gradient(amp, q)
+get_subtrahend_gradient!(amp, X, q)

Evaluate the (sub)gradient of the subtrahend h from within a ManifoldDifferenceOfConvexObjective amp at the point q (in place of X).

The evaluation is done in place of X for the !-variant. An AllocatingEvaluation objective might still allocate memory internally; when the non-mutating variant is called with an InplaceEvaluation, memory for the result is allocated.

source
X = get_subtrahend_gradient(M::AbstractManifold, dcpo::ManifoldDifferenceOfConvexProximalObjective, p)
+get_subtrahend_gradient!(M::AbstractManifold, X, dcpo::ManifoldDifferenceOfConvexProximalObjective, p)

Evaluate the gradient of the subtrahend $h$ from within a ManifoldDifferenceOfConvexProximalObjective dcpo at the point p (in place of X).

source

Technical details

The difference_of_convex_algorithm and difference_of_convex_proximal_point solvers require the following functions of a manifold to be available

By default, one of the stopping criteria is StopWhenChangeLess, which either requires

Literature

[ACOO20]
Y. T. Almeida, J. X. Cruz Neto, P. R. Oliveira and J. C. Oliveira Souza. A modified proximal point method for DC functions on Hadamard manifolds. Computational Optimization and Applications 76, 649–673 (2020).
[BFSS23]
R. Bergmann, O. P. Ferreira, E. M. Santos and J. C. Souza. The difference of convex algorithm on Hadamard manifolds, arXiv preprint (2023).
[SO15]
J. C. Souza and P. R. Oliveira. A proximal point algorithm for DC functions on Hadamard manifolds. Journal of Global Optimization 63, 797–810 (2015).
Manopt.exact_penalty_methodFunction
exact_penalty_method(M, f, grad_f, p=rand(M); kwargs...)
 exact_penalty_method(M, cmo::ConstrainedManifoldObjective, p=rand(M); kwargs...)
 exact_penalty_method!(M, f, grad_f, p; kwargs...)
 exact_penalty_method!(M, cmo::ConstrainedManifoldObjective, p; kwargs...)

perform the exact penalty method (EPM) [LB19]. The aim of the EPM is to find a solution of the constrained optimisation task

\[\begin{aligned} \min_{p ∈ \mathcal M} & f(p)\\ \text{subject to } & g_i(p) ≤ 0 \quad \text{ for } i=1,…,n,\\ & h_j(p) = 0 \quad \text{ for } j=1,…,m, \end{aligned}\]

where M is a Riemannian manifold, and $f$, $\{g_i\}_{i=1}^{n}$ and $\{h_j\}_{j=1}^{m}$ are twice continuously differentiable functions from M to ℝ. For that a weighted $L_1$-penalty term for the violation of the constraints is added to the objective

\[f(x) + ρ\biggl( \sum_{i=1}^{n} \max\bigl\{0, g_i(x)\bigr\} + \sum_{j=1}^{m} \vert h_j(x)\vert\biggr),\]

where $ρ>0$ is the penalty parameter.
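A scalar sketch of this $L_1$ penalty on a toy problem (illustrative names, not the Manopt.jl API); for $ρ$ larger than the Lagrange multiplier, the unconstrained minimiser of the penalised function coincides with the constrained minimiser.

```python
# f(x) + ρ( Σ_i max{0, g_i(x)} + Σ_j |h_j(x)| )
def exact_penalty(f, gs, hs, rho):
    return lambda x: f(x) + rho * (
        sum(max(0.0, g(x)) for g in gs) + sum(abs(h(x)) for h in hs)
    )

# toy problem: minimise x^2 subject to g(x) = 1 - x ≤ 0, i.e. x ≥ 1
f = lambda x: x ** 2
g = lambda x: 1.0 - x
P = exact_penalty(f, [g], [], rho=4.0)
# the multiplier at x = 1 is 2; since ρ = 4 > 2, P is minimised at x = 1
```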

Since this is non-smooth, a SmoothingTechnique with parameter u is applied, see the ExactPenaltyCost.

In every step $k$ of the exact penalty method, the smoothed objective is then minimized over all $p ∈\mathcal M$. Then, the accuracy tolerance $ϵ$ and the smoothing parameter $u$ are updated by setting

\[ϵ^{(k)}=\max\{ϵ_{\min}, θ_ϵ ϵ^{(k-1)}\},\]

where $ϵ_{\min}$ is the lowest value $ϵ$ is allowed to become and $θ_ϵ ∈ (0,1)$ is a constant scaling factor, and

\[u^{(k)} = \max \{u_{\min}, \theta_u u^{(k-1)} \},\]

where $u_{\min}$ is the lowest value $u$ is allowed to become and $θ_u ∈ (0,1)$ is a constant scaling factor.

Finally, the penalty parameter $ρ$ is updated as

\[ρ^{(k)} = \begin{cases} ρ^{(k-1)}/θ_ρ, & \text{if } \displaystyle \max_{j ∈ \mathcal{E},i ∈ \mathcal{I}} \Bigl\{ \vert h_j(x^{(k)}) \vert, g_i(x^{(k)})\Bigr\} \geq u^{(k-1)},\\ ρ^{(k-1)}, & \text{else,} \end{cases}\]

where $θ_ρ ∈ (0,1)$ is a constant scaling factor.
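The three update rules above can be sketched as one outer-iteration update (an illustrative transcription, not the Manopt.jl API):

```python
def epm_update(eps, u, rho, theta_eps, theta_u, theta_rho, eps_min, u_min, violation):
    eps_new = max(eps_min, theta_eps * eps)   # tighten the accuracy tolerance
    u_new = max(u_min, theta_u * u)           # sharpen the smoothing parameter
    # increase ρ (divide by θ_ρ ∈ (0,1)) only while the largest constraint
    # violation still exceeds the previous threshold u^{(k-1)}
    rho_new = rho / theta_rho if violation >= u else rho
    return eps_new, u_new, rho_new

e1, u1, r1 = epm_update(1e-3, 1e-1, 1.0, 0.9, 0.9, 0.5, 1e-6, 1e-6, violation=0.5)
e2, u2, r2 = epm_update(1e-3, 1e-1, 1.0, 0.9, 0.9, 0.5, 1e-6, 1e-6, violation=0.01)
```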

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • f: a cost function $f: \mathcal M→ ℝ$ implemented as (M, p) -> v
  • grad_f: the (Riemannian) gradient $\operatorname{grad}f: \mathcal M → T_{p}\mathcal M$ of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place
  • p: a point on the manifold $\mathcal M$

Keyword arguments

if not called with the ConstrainedManifoldObjective cmo

  • g=nothing: the inequality constraints
  • h=nothing: the equality constraints
  • grad_g=nothing: the gradient of the inequality constraints
  • grad_h=nothing: the gradient of the equality constraints

Note that one of the pairs (g, grad_g) or (h, grad_h) has to be provided. Otherwise the problem is not constrained and a better solver would be for example quasi_Newton.

Further keyword arguments

  • ϵ=1e-3: the accuracy tolerance
  • ϵ_exponent=1/100: exponent of the ϵ update factor;
  • ϵ_min=1e-6: the lower bound for the accuracy tolerance
  • u=1e-1: the smoothing parameter and threshold for violation of the constraints
  • u_exponent=1/100: exponent of the u update factor;
  • u_min=1e-6: the lower bound for the smoothing parameter and threshold for violation of the constraints
  • ρ=1.0: the penalty parameter
  • equality_constraints=nothing: the number $n$ of equality constraints. If not provided, a call to the gradient of h is performed to estimate these.
  • gradient_range=nothing: specify how both gradients of the constraints are represented
  • gradient_equality_range=gradient_range: specify how gradients of the equality constraints are represented, see VectorGradientFunction.
  • gradient_inequality_range=gradient_range: specify how gradients of the inequality constraints are represented, see VectorGradientFunction.
  • inequality_constraints=nothing: the number $m$ of inequality constraints. If not provided, a call to the gradient of g is performed to estimate these.
  • min_stepsize=1e-10: the minimal step size
  • smoothing=LogarithmicSumOfExponentials: a SmoothingTechnique to use
  • sub_cost=ExactPenaltyCost(problem, ρ, u; smoothing=smoothing): cost to use in the sub solver. This is used to define the sub_problem= keyword and hence has no effect if you set sub_problem directly.
  • sub_grad=ExactPenaltyGrad(problem, ρ, u; smoothing=smoothing): gradient to use in the sub solver. This is used to define the sub_problem= keyword and hence has no effect if you set sub_problem directly.
  • sub_kwargs=(;): a named tuple of keyword arguments that are passed to decorate_objective! of the sub solver's objective, the decorate_state! of the sub solver's state, and the sub state constructor itself.
  • sub_stopping_criterion=StopAfterIteration(200)|StopWhenGradientNormLess(ϵ)|StopWhenStepsizeLess(1e-10): a stopping criterion for the sub solver. This is used to define the sub_state= keyword and hence has no effect if you set sub_state directly.
  • sub_problem=DefaultManoptProblem(M, ManifoldGradientObjective(sub_cost, sub_grad; evaluation=evaluation)): specify a problem for a solver or a closed form solution function, which can be allocating or in-place.
  • sub_state=QuasiNewtonState: a state to specify the sub solver to use, where a QuasiNewtonLimitedMemoryDirectionUpdate with InverseBFGS is used. For a closed form solution, this indicates the type of function.
  • stopping_criterion=StopAfterIteration(300)|(StopWhenSmallerOrEqual(:ϵ, ϵ_min)&StopWhenChangeLess(1e-10)): a functor indicating that the stopping criterion is fulfilled

For the ranges of the constraints' gradient, other power manifold tangent space representations, mainly the ArrayPowerRepresentation can be used if the gradients can be computed more efficiently in that representation.

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source
Manopt.exact_penalty_method!Function
exact_penalty_method(M, f, grad_f, p=rand(M); kwargs...)
 exact_penalty_method(M, cmo::ConstrainedManifoldObjective, p=rand(M); kwargs...)
 exact_penalty_method!(M, f, grad_f, p; kwargs...)
 exact_penalty_method!(M, cmo::ConstrainedManifoldObjective, p; kwargs...)

perform the exact penalty method (EPM) [LB19]. The aim of the EPM is to find a solution of the constrained optimisation task

\[\begin{aligned} \min_{p ∈ \mathcal M} & f(p)\\ \text{subject to } & g_i(p) ≤ 0 \quad \text{ for } i=1,…,n,\\ & h_j(p) = 0 \quad \text{ for } j=1,…,m, \end{aligned}\]

where M is a Riemannian manifold, and $f$, $\{g_i\}_{i=1}^{n}$ and $\{h_j\}_{j=1}^{m}$ are twice continuously differentiable functions from M to ℝ. For that a weighted $L_1$-penalty term for the violation of the constraints is added to the objective

\[f(x) + ρ\biggl( \sum_{i=1}^{n} \max\bigl\{0, g_i(x)\bigr\} + \sum_{j=1}^{m} \vert h_j(x)\vert\biggr),\]

where $ρ>0$ is the penalty parameter.

Since this is non-smooth, a SmoothingTechnique with parameter u is applied, see the ExactPenaltyCost.

In every step $k$ of the exact penalty method, the smoothed objective is then minimized over all $p ∈\mathcal M$. Then, the accuracy tolerance $ϵ$ and the smoothing parameter $u$ are updated by setting

\[ϵ^{(k)}=\max\{ϵ_{\min}, θ_ϵ ϵ^{(k-1)}\},\]

where $ϵ_{\min}$ is the lowest value $ϵ$ is allowed to become and $θ_ϵ ∈ (0,1)$ is a constant scaling factor, and

\[u^{(k)} = \max \{u_{\min}, \theta_u u^{(k-1)} \},\]

where $u_{\min}$ is the lowest value $u$ is allowed to become and $θ_u ∈ (0,1)$ is a constant scaling factor.

Finally, the penalty parameter $ρ$ is updated as

\[ρ^{(k)} = \begin{cases} ρ^{(k-1)}/θ_ρ, & \text{if } \displaystyle \max_{j ∈ \mathcal{E},i ∈ \mathcal{I}} \Bigl\{ \vert h_j(x^{(k)}) \vert, g_i(x^{(k)})\Bigr\} \geq u^{(k-1)},\\ ρ^{(k-1)}, & \text{else,} \end{cases}\]

where $θ_ρ ∈ (0,1)$ is a constant scaling factor.

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • f: a cost function $f: \mathcal M→ ℝ$ implemented as (M, p) -> v
  • grad_f: the (Riemannian) gradient $\operatorname{grad}f: \mathcal M → T_{p}\mathcal M$ of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place
  • p: a point on the manifold $\mathcal M$

Keyword arguments

if not called with the ConstrainedManifoldObjective cmo

  • g=nothing: the inequality constraints
  • h=nothing: the equality constraints
  • grad_g=nothing: the gradient of the inequality constraints
  • grad_h=nothing: the gradient of the equality constraints

Note that one of the pairs (g, grad_g) or (h, grad_h) has to be provided. Otherwise the problem is not constrained and a better solver would be for example quasi_Newton.

Further keyword arguments

  • ϵ=1e-3: the accuracy tolerance
  • ϵ_exponent=1/100: exponent of the ϵ update factor;
  • ϵ_min=1e-6: the lower bound for the accuracy tolerance
  • u=1e-1: the smoothing parameter and threshold for violation of the constraints
  • u_exponent=1/100: exponent of the u update factor;
  • u_min=1e-6: the lower bound for the smoothing parameter and threshold for violation of the constraints
  • ρ=1.0: the penalty parameter
  • equality_constraints=nothing: the number $n$ of equality constraints. If not provided, a call to the gradient of h is performed to estimate these.
  • gradient_range=nothing: specify how both gradients of the constraints are represented
  • gradient_equality_range=gradient_range: specify how gradients of the equality constraints are represented, see VectorGradientFunction.
  • gradient_inequality_range=gradient_range: specify how gradients of the inequality constraints are represented, see VectorGradientFunction.
  • inequality_constraints=nothing: the number $m$ of inequality constraints. If not provided, a call to the gradient of g is performed to estimate these.
  • min_stepsize=1e-10: the minimal step size
  • smoothing=LogarithmicSumOfExponentials: a SmoothingTechnique to use
  • sub_cost=ExactPenaltyCost(problem, ρ, u; smoothing=smoothing): cost to use in the sub solver. This is used to define the sub_problem= keyword and hence has no effect if you set sub_problem directly.
  • sub_grad=ExactPenaltyGrad(problem, ρ, u; smoothing=smoothing): gradient to use in the sub solver. This is used to define the sub_problem= keyword and hence has no effect if you set sub_problem directly.
  • sub_kwargs=(;): a named tuple of keyword arguments that are passed to decorate_objective! of the sub solver's objective, the decorate_state! of the sub solver's state, and the sub state constructor itself.
  • sub_stopping_criterion=StopAfterIteration(200)|StopWhenGradientNormLess(ϵ)|StopWhenStepsizeLess(1e-10): a stopping criterion for the sub solver. This is used to define the sub_state= keyword and hence has no effect if you set sub_state directly.
  • sub_problem=DefaultManoptProblem(M, ManifoldGradientObjective(sub_cost, sub_grad; evaluation=evaluation)): specify a problem for a solver or a closed form solution function, which can be allocating or in-place.
  • sub_state=QuasiNewtonState: a state to specify the sub solver to use, where a QuasiNewtonLimitedMemoryDirectionUpdate with InverseBFGS is used. For a closed form solution, this indicates the type of function.
  • stopping_criterion=StopAfterIteration(300)|(StopWhenSmallerOrEqual(:ϵ, ϵ_min)&StopWhenChangeLess(1e-10)): a functor indicating that the stopping criterion is fulfilled

For the ranges of the constraints' gradient, other power manifold tangent space representations, mainly the ArrayPowerRepresentation can be used if the gradients can be computed more efficiently in that representation.

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source

State

Manopt.ExactPenaltyMethodStateType
ExactPenaltyMethodState{P,T} <: AbstractManoptSolverState

Describes the exact penalty method, with

Fields

  • ϵ: the accuracy tolerance
  • ϵ_min: the lower bound for the accuracy tolerance
  • p::P: a point on the manifold $\mathcal M$ storing the current iterate
  • ρ: the penalty parameter
  • sub_problem::Union{AbstractManoptProblem, F}: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.
  • sub_state::Union{AbstractManoptSolverState, F}: a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.
  • stop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled
  • u: the smoothing parameter and threshold for violation of the constraints
  • u_min: the lower bound for the smoothing parameter and threshold for violation of the constraints
  • θ_ϵ: the scaling factor of the tolerance parameter
  • θ_ρ: the scaling factor of the penalty parameter
  • θ_u: the scaling factor of the smoothing parameter

Constructor

ExactPenaltyMethodState(M::AbstractManifold, sub_problem, sub_state; kwargs...)

construct the exact penalty state.

ExactPenaltyMethodState(M::AbstractManifold, sub_problem;
    evaluation=AllocatingEvaluation(), kwargs...
)

construct the exact penalty state, where sub_problem is a closed form solution with evaluation as type of evaluation.

Keyword arguments

  • ϵ=1e-3
  • ϵ_min=1e-6
  • ϵ_exponent=1 / 100: a shortcut for the scaling factor $θ_ϵ$
  • θ_ϵ=(ϵ_min / ϵ)^(ϵ_exponent)
  • u=1e-1
  • u_min=1e-6
  • u_exponent=1 / 100: a shortcut for the scaling factor $θ_u$.
  • θ_u=(u_min / u)^(u_exponent)
  • p=rand(M): a point on the manifold $\mathcal M$ to specify the initial value
  • ρ=1.0
  • θ_ρ=0.3
  • stopping_criterion=StopAfterIteration(300)|(StopWhenSmallerOrEqual(:ϵ, ϵ_min)&StopWhenChangeLess(1e-10)): a functor indicating that the stopping criterion is fulfilled

See also

exact_penalty_method

source
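The shortcut θ_ϵ=(ϵ_min / ϵ)^(ϵ_exponent) has a convenient consequence that a quick numerical check makes visible: with the update ϵ ← max{ϵ_min, θ_ϵ ϵ}, the tolerance reaches ϵ_min after exactly 1/ϵ_exponent iterations (here a plain Python illustration of the default values above).

```python
eps, eps_min, eps_exponent = 1e-3, 1e-6, 1 / 100
theta_eps = (eps_min / eps) ** eps_exponent   # ≈ 0.933 for these defaults
k, e = 0, eps
while e > eps_min * (1.0 + 1e-9):   # small tolerance guards float rounding
    e = max(eps_min, theta_eps * e)
    k += 1
# k now equals 1/eps_exponent = 100
```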

Helping functions

Manopt.ExactPenaltyCostType
ExactPenaltyCost{S, Pr, R}

Represent the cost of the exact penalty method based on a ConstrainedManifoldObjective P and a parameter $ρ$ given by

\[f(p) + ρ\Bigl( \sum_{i=1}^m \max\{0,g_i(p)\} + \sum_{j=1}^n \lvert h_j(p)\rvert \Bigr),\]

where an additional parameter $u$ is used as well as a smoothing technique, for example LogarithmicSumOfExponentials or LinearQuadraticHuber to obtain a smooth cost function. This struct is also a functor (M,p) -> v of the cost $v$.

Fields

  • ρ, u: as described in the mathematical formula above
  • co: the original cost

Constructor

ExactPenaltyCost(co::ConstrainedManifoldObjective, ρ, u; smoothing=LinearQuadraticHuber())
source
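A scalar sketch of the smoothed cost (illustrative, not the Manopt.jl API): with the LogarithmicSumOfExponentials technique, each $\max\{0, g_i(p)\}$ is replaced by $u\log(1 + \mathrm{e}^{g_i(p)/u})$, an upper bound that converges as $u → 0$.

```python
import math

def smoothed_penalty(f, gs, rho, u):
    # f(p) + ρ Σ_i u·log(1 + e^{g_i(p)/u})  ≈  f(p) + ρ Σ_i max{0, g_i(p)}
    return lambda p: f(p) + rho * sum(
        u * math.log(1.0 + math.exp(g(p) / u)) for g in gs
    )

f = lambda x: x ** 2
g = lambda x: 1.0 - x
exact = lambda x: f(x) + 4.0 * max(0.0, g(x))   # the non-smooth penalty
P_coarse = smoothed_penalty(f, [g], rho=4.0, u=1e-1)
P_fine = smoothed_penalty(f, [g], rho=4.0, u=1e-3)
```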
Manopt.ExactPenaltyGradType
ExactPenaltyGrad{S, CO, R}

Represent the gradient of the ExactPenaltyCost based on a ConstrainedManifoldObjective co, a parameter $ρ$, and a smoothing technique, which uses an additional parameter $u$.

This struct is also a functor in both formats

  • (M, p) -> X to compute the gradient in allocating fashion.
  • (M, X, p) -> X to compute the gradient in-place.

Fields

  • ρ, u as stated before
  • co the nonsmooth objective

Constructor

ExactPenaltyGradient(co::ConstrainedManifoldObjective, ρ, u; smoothing=LinearQuadraticHuber())
source
Manopt.LinearQuadraticHuberType
LinearQuadraticHuber <: SmoothingTechnique

Specify a smoothing based on $\max\{0,x\} ≈ \mathcal P(x,u)$ for some $u$, where

\[\mathcal P(x, u) = \begin{cases} 0 & \text{ if } x \leq 0,\\ \frac{x^2}{2u} & \text{ if } 0 \leq x \leq u,\\ x-\frac{u}{2} & \text{ if } x \geq u. \end{cases}\]

source
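The piecewise definition transcribes directly into code; $\mathcal P(·, u)$ is continuous and satisfies $\max\{0,x\} - u/2 ≤ \mathcal P(x,u) ≤ \max\{0,x\}$ (illustrative function name, not the Manopt.jl API).

```python
def huber_plus(x, u):
    # P(x, u): 0 for x ≤ 0, x^2/(2u) on [0, u], x - u/2 for x ≥ u
    if x <= 0.0:
        return 0.0
    if x <= u:
        return x * x / (2.0 * u)
    return x - u / 2.0
```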
Manopt.LogarithmicSumOfExponentialsType
LogarithmicSumOfExponentials <: SmoothingTechnique

Specify a smoothing based on $\max\{a,b\} ≈ u \log(\mathrm{e}^{\frac{a}{u}}+\mathrm{e}^{\frac{b}{u}})$ for some $u$.

source
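This smoothing always upper-bounds $\max\{a,b\}$, with a gap of at most $u\log 2$; a small sketch (with the usual shift by the maximum for numerical stability) makes both properties checkable:

```python
import math

def smooth_max(a, b, u):
    # u·log(e^{a/u} + e^{b/u}), evaluated stably by factoring out max{a, b}
    m = max(a, b)
    return m + u * math.log(math.exp((a - m) / u) + math.exp((b - m) / u))
```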

Technical details

The exact_penalty_method solver requires the following functions of a manifold to be available

The stopping criteria involves StopWhenChangeLess and StopWhenGradientNormLess which require

  • An inverse_retract!(M, X, p, q); it is recommended to set the default_inverse_retraction_method to a favourite retraction. If this default is set, an inverse_retraction_method= does not have to be specified; alternatively, the distance(M, p, q) for said default inverse retraction.
  • the norm as well, to stop when the norm of the gradient is small; if you implemented inner, the norm is provided already.

Literature

[LB19]
C. Liu and N. Boumal. Simple algorithms for optimization on Riemannian manifolds with constraints. Applied Mathematics & Optimization (2019), arXiv:1901.10000.
Manopt.gradient_descentFunction
gradient_descent(M, f, grad_f, p=rand(M); kwargs...)
 gradient_descent(M, gradient_objective, p=rand(M); kwargs...)
 gradient_descent!(M, f, grad_f, p; kwargs...)
 gradient_descent!(M, gradient_objective, p; kwargs...)

perform the gradient descent algorithm

\[p_{k+1} = \operatorname{retr}_{p_k}\bigl( -s_k\operatorname{grad}f(p_k) \bigr), \qquad k=0,1,…\]

where $s_k > 0$ denotes a step size.
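In the Euclidean special case the retraction is plain vector addition, so the iteration reduces to classical gradient descent; a minimal sketch with a constant step size (illustrative, not the Manopt.jl API):

```python
# Euclidean sketch with retr_p(X) = p + X, for f(p) = (p - 2)^2 (minimiser p = 2)
grad_f = lambda p: 2.0 * (p - 2.0)
p, s = 0.0, 0.25
for _ in range(60):
    p = p - s * grad_f(p)     # p_{k+1} = retr_{p_k}( -s grad f(p_k) )
```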

The algorithm can be performed in-place of p.

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • f: a cost function $f: \mathcal M→ ℝ$ implemented as (M, p) -> v
  • grad_f: the (Riemannian) gradient $\operatorname{grad}f: \mathcal M → T_{p}\mathcal M$ of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place
  • p: a point on the manifold $\mathcal M$

Alternatively to f and grad_f you can provide the corresponding AbstractManifoldGradientObjective gradient_objective directly.

Keyword arguments

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.

If you provide the ManifoldGradientObjective directly, the evaluation= keyword is ignored. The decorations are still applied to the objective.

If you activate tutorial mode (cf. is_tutorial_mode), this solver provides additional debug warnings.

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source
Manopt.gradient_descent!Function
gradient_descent(M, f, grad_f, p=rand(M); kwargs...)
 gradient_descent(M, gradient_objective, p=rand(M); kwargs...)
 gradient_descent!(M, f, grad_f, p; kwargs...)
 gradient_descent!(M, gradient_objective, p; kwargs...)

perform the gradient descent algorithm

\[p_{k+1} = \operatorname{retr}_{p_k}\bigl( -s_k\operatorname{grad}f(p_k) \bigr), \qquad k=0,1,…\]

where $s_k > 0$ denotes a step size.

The algorithm can be performed in-place of p.

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • f: a cost function $f: \mathcal M→ ℝ$ implemented as (M, p) -> v
  • grad_f: the (Riemannian) gradient $\operatorname{grad}f: \mathcal M → T_{p}\mathcal M$ of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place
  • p: a point on the manifold $\mathcal M$

Alternatively to f and grad_f you can provide the corresponding AbstractManifoldGradientObjective gradient_objective directly.

Keyword arguments

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.

If you provide the ManifoldGradientObjective directly, the evaluation= keyword is ignored. The decorations are still applied to the objective.

If you activate tutorial mode (cf. is_tutorial_mode), this solver provides additional debug warnings.

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source

State

Manopt.GradientDescentStateType
GradientDescentState{P,T} <: AbstractGradientSolverState

Describes the state of a gradient based descent algorithm.

Fields

  • p::P: a point on the manifold $\mathcal M$ storing the current iterate
  • X::T: a tangent vector at the point $p$ on the manifold $\mathcal M$ storing the gradient at the current iterate
  • stop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled
  • stepsize::Stepsize: a functor inheriting from Stepsize to determine a step size
  • direction::DirectionUpdateRule : a processor to handle the obtained gradient and compute a direction to “walk into”.
  • retraction_method::AbstractRetractionMethod: a retraction $\operatorname{retr}$ to use, see the section on retractions

Constructor

GradientDescentState(M::AbstractManifold; kwargs...)

Initialize the gradient descent solver state, where

Input

Keyword arguments

See also

gradient_descent

source

Direction update rules

A field of the options is the direction, a DirectionUpdateRule, which by default is the IdentityUpdateRule that just evaluates the gradient, but which can be enhanced, for example, to

Manopt.AverageGradientFunction
AverageGradient(; kwargs...)
AverageGradient(M::AbstractManifold; kwargs...)

Add an average of gradients to a gradient processor. A set of previous directions (from the inner processor) and the last iterate are stored; the average is taken after vector transporting them to the current iterate's tangent space.

Input

  • M (optional)

Keyword arguments

  • p=rand(M): a point on the manifold $\mathcal M$ to specify the initial value
  • direction=IdentityUpdateRule: preprocess the actual gradient before averaging
  • gradients=[zero_vector(M, p) for _ in 1:n]: how to initialise the internal storage
  • n=10: number of gradient evaluations to take the mean over
  • X=zero_vector(M, p): a tangent vector at the point $p$ on the manifold $\mathcal M$
  • vector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport $\mathcal T_{⋅←⋅}$ to use, see the section on vector transports
Info

This function generates a ManifoldDefaultsFactory for AverageGradientRule. For default values that depend on the manifold, this factory postpones construction until the manifold (for example from a corresponding AbstractManoptSolverState) is available.

source
Manopt.DirectionUpdateRuleType
DirectionUpdateRule

A general functor that handles direction update rules. Its fields are usually only a StoreStateAction, by default initialized to the fields required for the specific coefficient, but it can also be replaced by a (common, global) individual one that provides these values.

source
Manopt.IdentityUpdateRuleType
IdentityUpdateRule <: DirectionUpdateRule

The default gradient direction update is the identity; it simply evaluates the gradient.

You can also use Gradient() to create the corresponding factory, though this only delays this parameter-free instantiation to later.

source
Manopt.MomentumGradientFunction
MomentumGradient()

Append a momentum to a gradient processor: the last direction and last iterate are stored, and the new direction is composed as $η_i = m η_{i-1}' - s d_i$, where $s d_i$ is the current (inner) direction and $η_{i-1}'$ is the vector-transported last direction multiplied by the momentum $m$.

Input

  • M (optional)

Keyword arguments

Info

This function generates a ManifoldDefaultsFactory for MomentumGradientRule. For default values that depend on the manifold, this factory postpones construction until the manifold (for example from a corresponding AbstractManoptSolverState) is available.

source
Manopt.NesterovFunction
Nesterov(; kwargs...)
Nesterov(M::AbstractManifold; kwargs...)

Assume $f$ is $L$-Lipschitz and $μ$-strongly convex. Given

  • a step size $h_k<\frac{1}{L}$ (from the GradientDescentState)
  • a shrinkage parameter $β_k$
  • and a current iterate $p_k$
  • as well as the interim values $γ_k$ and $v_k$ from the previous iterate.

This computes a Nesterov-type update using the following steps, see [ZS18]:

  1. Compute the positive root $α_k∈(0,1)$ of $α^2 = h_k\bigl((1-α_k)γ_k+α_k μ\bigr)$.
  2. Set $\barγ_{k+1} = (1-α_k)γ_k + α_kμ$
  3. $y_k = \operatorname{retr}_{p_k}\Bigl(\frac{α_kγ_k}{γ_k + α_kμ}\operatorname{retr}^{-1}_{p_k}v_k \Bigr)$
  4. $x_{k+1} = \operatorname{retr}_{y_k}(-h_k \operatorname{grad}f(y_k))$
  5. $v_{k+1} = \operatorname{retr}_{y_k}\Bigl(\frac{(1-α_k)γ_k}{\barγ_k}\operatorname{retr}_{y_k}^{-1}(v_k) - \frac{α_k}{\barγ_{k+1}}\operatorname{grad}f(y_k) \Bigr)$
  6. $γ_{k+1} = \frac{1}{1+β_k}\barγ_{k+1}$

Then the direction from $p_k$ to $p_{k+1}$, that is $d = \operatorname{retr}^{-1}_{p_k}p_{k+1}$, is returned.

Input

  • M (optional)

Keyword arguments

Info

This function generates a ManifoldDefaultsFactory for NesterovRule. For default values that depend on the manifold, this factory postpones construction until the manifold (for example from a corresponding AbstractManoptSolverState) is available.

source

which internally use the ManifoldDefaultsFactory and produce the internal elements

Manopt.AverageGradientRuleType
AverageGradientRule <: DirectionUpdateRule

Add an average of gradients to a gradient processor. A set of previous directions (from the inner processor) and the last iterate are stored. The average is taken after vector transporting them to the current iterate's tangent space.

Fields

Constructors

AverageGradientRule(
    M::AbstractManifold;
    p::P=rand(M),
    n::Int=10,
    gradients = fill(zero_vector(p.M, o.x), n),
    last_iterate = deepcopy(x0),
    vector_transport_method = default_vector_transport_method(M, typeof(p))
)

Add average to a gradient problem, where

source
Manopt.MomentumGradientRuleType
MomentumGradientRule <: DirectionUpdateRule

Store the necessary information to compute the MomentumGradient direction update.

Fields

  • p_old::P: a point on the manifold $\mathcal M$
  • momentum::Real: factor for the momentum
  • direction: internal DirectionUpdateRule to determine directions to add the momentum to.
  • vector_transport_method::AbstractVectorTransportMethodP: a vector transport $\mathcal T_{⋅←⋅}$ to use, see the section on vector transports
  • X_old::T: a tangent vector at the point $p$ on the manifold $\mathcal M$

Constructors

MomentumGradientRule(M::AbstractManifold; kwargs...)

Initialize a momentum gradient rule, where p and X are memory for interim values.

Keyword arguments

See also

MomentumGradient

source
Manopt.NesterovRuleType
NesterovRule <: DirectionUpdateRule

Compute a Nesterov-inspired direction update rule. See Nesterov for details.

Fields

Constructor

NesterovRule(M::AbstractManifold; kwargs...)

Keyword arguments

See also

Nesterov

source
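Any of these update rules can be plugged into gradient_descent via the direction= keyword. A minimal sketch, with a hypothetical cost on the sphere (assuming Manifolds.jl is loaded):

```julia
using Manopt, Manifolds

M = Sphere(2)
f(M, p) = p[3]                                   # hypothetical cost: the height
grad_f(M, p) = project(M, p, [0.0, 0.0, 1.0])    # its Riemannian gradient
p0 = [1.0, 0.0, 0.0]

# same solver, three different direction processors
q1 = gradient_descent(M, f, grad_f, p0; direction=MomentumGradient())
q2 = gradient_descent(M, f, grad_f, p0; direction=AverageGradient(M; n=5))
q3 = gradient_descent(M, f, grad_f, p0; direction=Nesterov())
```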

Debug actions

Manopt.DebugGradientType
DebugGradient <: DebugAction

debug for the gradient evaluated at the current iterate

Constructors

DebugGradient(; long=false, prefix= , format= "$prefix%s", io=stdout)

display the short (false) or long (true) default text for the gradient, or set the prefix manually. Alternatively the complete format can be set.

source
Manopt.DebugGradientNormType
DebugGradientNorm <: DebugAction

debug for gradient evaluated at the current iterate.

Constructors

DebugGradientNorm([long=false,p=print])

display the short (false) or long (true) default text for the gradient norm.

DebugGradientNorm(prefix[, p=print])

display a prefix in front of the gradient norm.

source
Manopt.DebugStepsizeType
DebugStepsize <: DebugAction

debug for the current step size.

Constructors

DebugStepsize(;long=false,prefix="step size:", format="$prefix%s", io=stdout)

display a prefix in front of the step size.

source
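These debug actions can be combined in the debug= keyword of the solver call, mixing symbols, explicit actions, and an integer to print only every k-th iteration. A sketch with a hypothetical cost:

```julia
using Manopt, Manifolds

M = Sphere(2)
f(M, p) = p[3]                                   # hypothetical cost
grad_f(M, p) = project(M, p, [0.0, 0.0, 1.0])
p0 = [1.0, 0.0, 0.0]

# print iteration number, gradient norm, and step size every 10 iterations
q = gradient_descent(M, f, grad_f, p0;
    debug=[:Iteration, DebugGradientNorm(), DebugStepsize(), "\n", 10, :Stop],
)
```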

Record actions

Manopt.RecordGradientType
RecordGradient <: RecordAction

record the gradient evaluated at the current iterate

Constructors

RecordGradient(ξ)

initialize the RecordAction to the corresponding type of the tangent vector.

source
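A recording sketch: pass RecordGradient (together with symbols) to the record= keyword and request the state back to access the records afterwards. The cost here is hypothetical.

```julia
using Manopt, Manifolds

M = Sphere(2)
f(M, p) = p[3]                                   # hypothetical cost
grad_f(M, p) = project(M, p, [0.0, 0.0, 1.0])
p0 = [1.0, 0.0, 0.0]

# record iteration number, cost, and gradient; return the state to read them
s = gradient_descent(M, f, grad_f, p0;
    record=[:Iteration, :Cost, RecordGradient(zero_vector(M, p0))],
    return_state=true,
)
rec = get_record(s)
```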

Technical details

The gradient_descent solver requires the following functions of a manifold to be available:

  • A retract!(M, q, p, X); it is recommended to set the default_retraction_method to a favourite retraction. If this default is set, a retraction_method= does not have to be specified.
  • By default gradient descent uses ArmijoLinesearch, which requires max_stepsize(M) to be set and an implementation of inner(M, p, X).
  • By default the stopping criterion uses the norm as well, to stop when the norm of the gradient is small; if you implemented inner, the norm is provided already.
  • By default the tangent vector storing the gradient is initialized by calling zero_vector(M, p).

Literature

[Lue72]
D. G. Luenberger. The gradient projection method along geodesics. Management Science 18, 620–631 (1972).
[ZS18]
H. Zhang and S. Sra. Towards Riemannian accelerated gradient methods, arXiv Preprint, 1806.02812 (2018).
List of Solvers · Manopt.jl

Available solvers in Manopt.jl

Optimisation problems can be classified with respect to several criteria. The following list of algorithms is grouped with respect to the “information” available about an optimisation problem

\[\operatorname*{arg\,min}_{p∈\mathcal M} f(p)\]

Within each group, short notes on the advantages of the individual solvers, and on the required properties of the cost $f$, are provided. In that list, a 🏅 indicates a state-of-the-art solver that usually performs best in its group, and a 🫏 a maybe not so fast, maybe not so state-of-the-art method that nevertheless gets the job done most reliably.

Derivative free

For derivative-free solvers, only function evaluations of $f$ are used.

  • Nelder-Mead, a simplex-based variant using $d+1$ points, where $d$ is the dimension of the manifold.
  • Particle Swarm 🫏 use the evolution of a set of points, called swarm, to explore the domain of the cost and find a minimizer.
  • CMA-ES uses a stochastic evolutionary strategy to perform minimization robust to local minima of the objective.
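Both Nelder-Mead and particle swarm can be called with just the manifold and the cost. A sketch with a hypothetical cost, assuming Manifolds.jl is loaded:

```julia
using Manopt, Manifolds

M = Sphere(2)
f(M, p) = p[3]        # hypothetical cost; no gradient needed

q1 = NelderMead(M, f)
q2 = particle_swarm(M, f)
```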

First order

Gradient

  • Gradient Descent uses the gradient from $f$ to determine a descent direction. Here, the direction can also be changed to be Averaged, Momentum-based, based on Nesterovs rule.
  • Conjugate Gradient Descent uses information from the previous descent direction to improve the current (gradient-based) one including several such update rules.
  • The Quasi-Newton Method 🏅 uses gradient evaluations to approximate the Hessian, which is then used in a Newton-like scheme, where both a limited memory and a full Hessian approximation are available with several different update rules.
  • Steihaug-Toint Truncated Conjugate-Gradient Method a solver for a constrained problem defined on a tangent space.

Subgradient

The following methods require the Riemannian subgradient $∂f$ to be available. While the subgradient might be set-valued, the function should provide one of the subgradients.

  • The Subgradient Method takes the negative subgradient as a step direction and can be combined with a step size.
  • The Convex Bundle Method (CBM) uses a collection of subgradients at previous iterates and iterate candidates to solve a local approximation of f in every iteration by solving a quadratic problem in the tangent space.
  • The Proximal Bundle Method works similarly to CBM, but solves a proximal map-based problem in every iteration.

Second order

Splitting based

For splitting methods, the algorithms are based on splitting the cost into different parts, usually in a sum of two or more summands. This is usually very well tailored for non-smooth objectives.

Smooth

The following methods require that the splitting, for example into several summands, is smooth in the sense that for every summand of the cost, the gradient should still exist everywhere

  • Levenberg-Marquardt minimizes the square norm of $f: \mathcal M→ℝ^d$ provided the gradients of the component functions, or in other words the Jacobian of $f$.
  • Stochastic Gradient Descent is based on a splitting of $f$ into a sum of several components $f_i$ whose gradients are provided. Steps are performed according to gradients of randomly selected components.
  • The Alternating Gradient Descent alternates gradient descent steps on the components of the product manifold. All these components should be smooth, since their gradients are required to exist, and (local) convexity is assumed.

Nonsmooth

If the gradient does not exist everywhere, that is if the splitting yields summands that are nonsmooth, usually methods based on proximal maps are used.

  • The Chambolle-Pock algorithm uses a splitting $f(p) = F(p) + G(Λ(p))$, where $G$ is defined on a manifold $\mathcal N$ and the proximal map of its Fenchel dual is required. Both these functions can be non-smooth.
  • The Cyclic Proximal Point 🫏 uses proximal maps of the functions from splitting $f$ into summands $f_i$.
  • Difference of Convex Algorithm (DCA) uses a splitting of the (non-convex) function $f = g - h$ into a difference of two functions; for each of these it is required to have access to the gradient of $g$ and the subgradient of $h$ to state a sub problem in every iteration to be solved.
  • Difference of Convex Proximal Point uses a splitting of the (non-convex) function $f = g - h$ into a difference of two functions; provided the proximal map of $g$ and the subgradient of $h$, the next iterate is computed. Compared to DCA, the corresponding sub problem is here written in a form that yields the proximal map.
  • Douglas—Rachford uses a splitting $f(p) = F(p) + G(p)$ and their proximal maps to compute a minimizer of $f$, which can be non-smooth.
  • Primal-dual Riemannian semismooth Newton Algorithm extends Chambolle-Pock and requires the differentials of the proximal maps additionally.
  • The Proximal Point uses the proximal map of $f$ iteratively.

Constrained

Constrained problems of the form

\[\begin{align*} \operatorname*{arg\,min}_{p∈\mathcal M}& f(p)\\ \text{such that } & g(p) \leq 0\\ &h(p) = 0 \end{align*}\]

For these you can use

  • The Augmented Lagrangian Method (ALM), where both g and grad_g as well as h and grad_h are keyword arguments, and one of these pairs is mandatory.
  • The Exact Penalty Method (EPM) uses a penalty term instead of augmentation, but has the same interface as ALM.
  • The Interior Point Newton Method (IPM) rephrases the KKT system of a constrained problem into a Newton iteration that is performed in every iteration.
  • Frank-Wolfe algorithm, where besides the gradient of $f$ either a closed form solution or a (maybe even automatically generated) sub problem solver for $\operatorname*{arg\,min}_{q ∈ C} ⟨\operatorname{grad} f(p_k), \log_{p_k}q⟩$ is required, where $p_k$ is a fixed point on the manifold (changed in every iteration).
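An ALM sketch matching the keyword interface above (g and grad_g as keyword arguments): the cost and the single inequality constraint below are hypothetical, chosen only to illustrate the call.

```julia
using Manopt, Manifolds

M = Sphere(2)
f(M, p) = p[1]                                   # hypothetical cost
grad_f(M, p) = project(M, p, [1.0, 0.0, 0.0])

# one inequality constraint g(p) ≤ 0: keep the third coordinate below 0.5
g(M, p) = [p[3] - 0.5]
grad_g(M, p) = [project(M, p, [0.0, 0.0, 1.0])]
p0 = [0.0, 1.0, 0.0]

q = augmented_Lagrangian_method(M, f, grad_f, p0; g=g, grad_g=grad_g)
```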

On the tangent space

Alphabetical list of algorithms

Solver | Function | State
Adaptive Regularisation with Cubics | adaptive_regularization_with_cubics | AdaptiveRegularizationState
Augmented Lagrangian Method | augmented_Lagrangian_method | AugmentedLagrangianMethodState
Chambolle-Pock | ChambollePock | ChambollePockState
Conjugate Gradient Descent | conjugate_gradient_descent | ConjugateGradientDescentState
Conjugate Residual | conjugate_residual | ConjugateResidualState
Convex Bundle Method | convex_bundle_method | ConvexBundleMethodState
Cyclic Proximal Point | cyclic_proximal_point | CyclicProximalPointState
Difference of Convex Algorithm | difference_of_convex_algorithm | DifferenceOfConvexState
Difference of Convex Proximal Point | difference_of_convex_proximal_point | DifferenceOfConvexProximalState
Douglas—Rachford | DouglasRachford | DouglasRachfordState
Exact Penalty Method | exact_penalty_method | ExactPenaltyMethodState
Frank-Wolfe algorithm | Frank_Wolfe_method | FrankWolfeState
Gradient Descent | gradient_descent | GradientDescentState
Interior Point Newton | interior_point_Newton |
Levenberg-Marquardt | LevenbergMarquardt | LevenbergMarquardtState
Nelder-Mead | NelderMead | NelderMeadState
Particle Swarm | particle_swarm | ParticleSwarmState
Primal-dual Riemannian semismooth Newton Algorithm | primal_dual_semismooth_Newton | PrimalDualSemismoothNewtonState
Proximal Bundle Method | proximal_bundle_method | ProximalBundleMethodState
Proximal Point | proximal_point | ProximalPointState
Quasi-Newton Method | quasi_Newton | QuasiNewtonState
Steihaug-Toint Truncated Conjugate-Gradient Method | truncated_conjugate_gradient_descent | TruncatedConjugateGradientState
Subgradient Method | subgradient_method | SubGradientMethodState
Stochastic Gradient Descent | stochastic_gradient_descent | StochasticGradientDescentState
Riemannian Trust-Regions | trust_regions | TrustRegionsState

Note that the solvers (their AbstractManoptSolverState, to be precise) can also be decorated to enhance your algorithm by general additional properties, see debug output and recording values. This is done using the debug= and record= keywords in the function calls. Similarly, a cache= keyword is available in any of the function calls, that wraps the AbstractManoptProblem in a cache for certain parts of the objective.
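For example, one solver call can carry debugging, recording, and caching decorators at once. The cache tuple below follows the documented (:LRU, [...], size) convention; it is assumed that LRUCache.jl is loaded for the :LRU variant, and the cost is hypothetical.

```julia
using Manopt, Manifolds, LRUCache

M = Sphere(2)
f(M, p) = p[3]                                   # hypothetical cost
grad_f(M, p) = project(M, p, [0.0, 0.0, 1.0])
p0 = [1.0, 0.0, 0.0]

s = gradient_descent(M, f, grad_f, p0;
    debug=[:Iteration, :Cost, "\n", 25],        # print every 25th iteration
    record=[:Iteration, :Cost],                 # record iteration and cost
    cache=(:LRU, [:Cost, :Gradient], 25),       # wrap the objective in an LRU cache
    return_state=true,
)
```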

Technical details

The main function a solver calls is

which is a framework that you should, in general, not change or redefine. It uses the following methods, which also need to be implemented for your own algorithm, if you want to provide one.

Manopt.initialize_solver!Function
initialize_solver!(amp::AbstractManoptProblem, ams::AbstractManoptSolverState)

Initialize the solver to the optimization AbstractManoptProblem amp by initializing the necessary values in the AbstractManoptSolverState ams.

source
initialize_solver!(amp::AbstractManoptProblem, dss::DebugSolverState)

Extend the initialization of the solver by a hook to run the DebugAction that was added to the :Start entry of the debug lists. All others are called (with iteration number 0) to trigger possible resets.

source
initialize_solver!(ams::AbstractManoptProblem, rss::RecordSolverState)

Extend the initialization of the solver by a hook to run records that were added to the :Start entry.

source
Manopt.step_solver!Function
step_solver!(amp::AbstractManoptProblem, ams::AbstractManoptSolverState, k)

Do one iteration step (the kth) for an AbstractManoptProblem amp by modifying the values in the AbstractManoptSolverState ams.

source
step_solver!(amp::AbstractManoptProblem, dss::DebugSolverState, k)

Extend the kth step of the solver by a hook to run debug prints that were added to the :BeforeIteration and :Iteration entries of the debug lists.

source
step_solver!(amp::AbstractManoptProblem, rss::RecordSolverState, k)

Extend the kth step of the solver by a hook to run records that were added to the :Iteration entry.

source
Manopt.get_solver_resultFunction
get_solver_result(ams::AbstractManoptSolverState)
+\end{align*}\]

For these you can use

  • The Augmented Lagrangian Method (ALM), where both g and grad_g as well as h and grad_h are keyword arguments, and one of these pairs is mandatory.
  • The Exact Penalty Method (EPM) uses a penalty term instead of augmentation, but has the same interface as ALM.
  • The Interior Point Newton Method (IPM) rephrases the KKT system of a constrained problem into an Newton iteration being performed in every iteration.
  • Frank-Wolfe algorithm, where besides the gradient of $f$ either a closed form solution or a (maybe even automatically generated) sub problem solver for $\operatorname*{arg\,min}_{q ∈ C} ⟨\operatorname{grad} f(p_k), \log_{p_k}q⟩$ is required, where $p_k$ is a fixed point on the manifold (changed in every iteration).

On the tangent space

Alphabetical list List of algorithms

SolverFunctionState
Adaptive Regularisation with Cubicsadaptive_regularization_with_cubicsAdaptiveRegularizationState
Augmented Lagrangian Methodaugmented_Lagrangian_methodAugmentedLagrangianMethodState
Chambolle-PockChambollePockChambollePockState
Conjugate Gradient Descentconjugate_gradient_descentConjugateGradientDescentState
Conjugate Residualconjugate_residualConjugateResidualState
Convex Bundle Methodconvex_bundle_methodConvexBundleMethodState
Cyclic Proximal Pointcyclic_proximal_pointCyclicProximalPointState
Difference of Convex Algorithmdifference_of_convex_algorithmDifferenceOfConvexState
Difference of Convex Proximal Pointdifference_of_convex_proximal_pointDifferenceOfConvexProximalState
Douglas—RachfordDouglasRachfordDouglasRachfordState
Exact Penalty Methodexact_penalty_methodExactPenaltyMethodState
Frank-Wolfe algorithmFrank_Wolfe_methodFrankWolfeState
Gradient Descentgradient_descentGradientDescentState
Interior Point Newtoninterior_point_Newton
Levenberg-MarquardtLevenbergMarquardtLevenbergMarquardtState
Nelder-MeadNelderMeadNelderMeadState
Particle Swarmparticle_swarmParticleSwarmState
Primal-dual Riemannian semismooth Newton Algorithmprimal_dual_semismooth_NewtonPrimalDualSemismoothNewtonState
Proximal Bundle Methodproximal_bundle_methodProximalBundleMethodState
Proximal Pointproximal_pointProximalPointState
Quasi-Newton Methodquasi_NewtonQuasiNewtonState
Steihaug-Toint Truncated Conjugate-Gradient Methodtruncated_conjugate_gradient_descentTruncatedConjugateGradientState
Subgradient Methodsubgradient_methodSubGradientMethodState
Stochastic Gradient Descentstochastic_gradient_descentStochasticGradientDescentState
Riemannian Trust-Regionstrust_regionsTrustRegionsState

Note that the solvers (their AbstractManoptSolverState, to be precise) can also be decorated to enhance your algorithm by general additional properties, see debug output and recording values. This is done using the debug= and record= keywords in the function calls. Similarly, a cache= keyword is available in any of the function calls, that wraps the AbstractManoptProblem in a cache for certain parts of the objective.

Technical details

The main function a solver calls is

which is a framework that you in general should not change or redefine. It uses the following methods, which also need to be implemented on your own algorithm, if you want to provide one.

Manopt.initialize_solver!Function
initialize_solver!(ams::AbstractManoptProblem, amp::AbstractManoptSolverState)

Initialize the solver to the optimization AbstractManoptProblem amp by initializing the necessary values in the AbstractManoptSolverState amp.

source
initialize_solver!(amp::AbstractManoptProblem, dss::DebugSolverState)

Extend the initialization of the solver by a hook to run the DebugAction that was added to the :Start entry of the debug lists. All others are triggered (with iteration number 0) to trigger possible resets

source
initialize_solver!(ams::AbstractManoptProblem, rss::RecordSolverState)

Extend the initialization of the solver by a hook to run records that were added to the :Start entry.

source
Manopt.step_solver!Function
step_solver!(amp::AbstractManoptProblem, ams::AbstractManoptSolverState, k)

Do one iteration step (the ith) for an AbstractManoptProblemp by modifying the values in the AbstractManoptSolverState ams.

source
step_solver!(amp::AbstractManoptProblem, dss::DebugSolverState, k)

Extend the ith step of the solver by a hook to run debug prints, that were added to the :BeforeIteration and :Iteration entries of the debug lists.

source
step_solver!(amp::AbstractManoptProblem, rss::RecordSolverState, k)

Extend the kth step of the solver by a hook to run records that were added to the :Iteration entry.

source
Manopt.get_solver_resultFunction
get_solver_result(ams::AbstractManoptSolverState)
 get_solver_result(tos::Tuple{AbstractManifoldObjective,AbstractManoptSolverState})
get_solver_result(o::AbstractManifoldObjective, s::AbstractManoptSolverState)

Return the final result after all iterations that is stored within the AbstractManoptSolverState ams, which was modified during the iterations.

If the objective is passed as well, it is by default ignored, and the solver result for the state is returned.

source
Manopt.get_solver_returnFunction
get_solver_return(s::AbstractManoptSolverState)
 get_solver_return(o::AbstractManifoldObjective, s::AbstractManoptSolverState)

determine the result value of a call to a solver. By default this returns the same as get_solver_result.

get_solver_return(s::ReturnSolverState)
get_solver_return(o::AbstractManifoldObjective, s::ReturnSolverState)

return the internally stored state of the ReturnSolverState instead of the minimizer. This means that when the state is decorated like this, the user still has to call get_solver_result on the internal state separately.

get_solver_return(o::ReturnManifoldObjective, s::AbstractManoptSolverState)

return both the objective and the state as a tuple.

source

API for solvers

This is a short overview of the different types of high-level functions that are usually available for a solver. Assume the solver is called new_solver and requires a cost f and some first-order information df, as well as a starting point p on M. Together, f and df form the objective, called obj.

Then there are basically two different variants to call a solver.

The easy-to-access call

new_solver(M, f, df, p=rand(M); kwargs...)
 new_solver!(M, f, df, p; kwargs...)

The start point is optional here. Keyword arguments include the type of evaluation, decorators like debug= or record=, as well as algorithm-specific ones. If you provide an immutable point p, or rand(M) returns an immutable point as for example on Circle(), this method should turn the point into a mutable one.
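As a concrete instance of this calling scheme, gradient_descent in Manopt follows exactly the pattern above. The sketch assumes Manopt.jl and Manifolds.jl are installed; the cost and gradient are made up for illustration.

```julia
using Manopt, Manifolds  # assumes both packages are installed

M = Sphere(2)
f(M, p) = p[1]^2                                  # cost (M, p) -> v
grad_f(M, p) = project(M, p, [2p[1], 0.0, 0.0])   # Riemannian gradient: project the Euclidean one

p0 = [0.5, 0.5, sqrt(0.5)]
q = gradient_descent(M, f, grad_f, p0; debug=[:Iteration, :Cost, "\n"])
# the in-place variant overwrites p0:
# gradient_descent!(M, f, grad_f, p0)
```

The debug= keyword here is one of the decorators mentioned above; omitting it runs the solver silently.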

The second variant works in place of p, so there p is mandatory.

This first interface would set up the objective and pass all keywords on to the objective-based call.

Objective based calls to solvers

new_solver(M, obj, p=rand(M); kwargs...)
new_solver!(M, obj, p; kwargs...)

Here the objective would be created beforehand, for example to compare different solvers on the same objective; for the first variant the start point is optional. Keyword arguments include decorators like debug= or record=, as well as algorithm-specific ones.

This variant would generate the problem and the state and verify the validity of all provided keyword arguments that affect the state. Then it would call the iteration process.

Manual calls

If you generate the corresponding problem and state as the previous step does, you can also use the third (lowest-level) variant and just call

solve!(problem, state)
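For gradient descent, this lowest-level variant amounts to building the objective, problem, and state by hand. The keyword-based GradientDescentState constructor shown below follows recent Manopt versions, and the concrete cost and values are illustrative; it assumes Manopt.jl and Manifolds.jl are installed.

```julia
using Manopt, Manifolds  # assumes both packages are installed

M = Euclidean(2)
f(M, p) = sum(abs2, p)
grad_f(M, p) = 2 .* p

# objective -> problem -> state, then iterate
obj = ManifoldGradientObjective(f, grad_f)
problem = DefaultManoptProblem(M, obj)
state = GradientDescentState(M; p=[1.0, 1.0], stopping_criterion=StopAfterIteration(100))

solve!(problem, state)
q = get_solver_result(state)
```

This is the level at which the high-level calls operate internally, so anything they set up via keywords can instead be configured directly on the state here.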

Closed-form subsolvers

If a subsolver solution is available in closed form, ClosedFormSubSolverState is used to indicate that.

Manopt.ClosedFormSubSolverStateType
ClosedFormSubSolverState{E<:AbstractEvaluationType} <: AbstractManoptSolverState

Subsolver state indicating that a closed-form solution is available with AbstractEvaluationType E.

Constructor

ClosedFormSubSolverState(; evaluation=AllocatingEvaluation())
source
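For illustration, constructing such a state only fixes the evaluation type; it carries no further data. The snippet assumes Manopt.jl is installed and uses the qualified names in case they are not exported.

```julia
using Manopt

# Flags that the sub problem is given as a function returning its solution
# directly, here with the allocating call convention.
sub_state = Manopt.ClosedFormSubSolverState(; evaluation=Manopt.AllocatingEvaluation())
```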
In order to solve the constrained problem

\[\begin{aligned}
\min_{p ∈ \mathcal M} & f(p)\\
\text{subject to}\quad & g_i(p) ≤ 0 \quad \text{ for } i=1,…,m,\\
\quad & h_j(p) = 0 \quad \text{ for } j=1,…,n,
\end{aligned}\]

This algorithm iteratively solves a linear system based on extending the KKT system by a slack variable s.

\[\operatorname{J} F(p, μ, λ, s)[X, Y, Z, W] = -F(p, μ, λ, s), \text{ where } X ∈ T_{p}\mathcal M,\ Y, W ∈ ℝ^m,\ Z ∈ ℝ^n,\]

see CondensedKKTVectorFieldJacobian and CondensedKKTVectorField, respectively, for the reduced form this system is usually solved in. From the resulting X and Z of the reduced form, the remaining two, $Y$ and $W$, are then computed.

From the gradient $(X,Y,Z,W)$ at the current iterate $(p, μ, λ, s)$, a line search is performed using the KKTVectorFieldNormSq norm of the KKT vector field (squared) and its gradient KKTVectorFieldNormSqGradient together with the InteriorPointCentralityCondition.

Note that since the vector field $F$ includes the gradients of the constraint functions $g, h$, its gradient or Jacobian requires the Hessians of the constraints.

For that search direction a line search is performed, which additionally ensures that the constraints remain fulfilled.

Input

  • M: a Riemannian manifold $\mathcal M$
  • f: a cost function $f: \mathcal M→ ℝ$ implemented as (M, p) -> v
  • grad_f: the (Riemannian) gradient $\operatorname{grad} f: \mathcal M → T_{p}\mathcal M$ of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place
  • Hess_f: the (Riemannian) Hessian $\operatorname{Hess} f: T_{p}\mathcal M → T_{p}\mathcal M$ of f as a function (M, p, X) -> Y or a function (M, Y, p, X) -> Y computing Y in-place
  • p: a point on the manifold $\mathcal M$

or a ConstrainedManifoldObjective cmo containing f, grad_f, Hess_f, and the constraints

Keyword arguments

The keyword arguments related to the constraints (the first eleven) are ignored if you pass a ConstrainedManifoldObjective cmo

  • centrality_condition=missing; an additional condition when to accept a step size. This can be used to ensure that the resulting iterate is still an interior point if you provide a check (N,q) -> true/false, where N is the manifold of the step_problem.
  • equality_constraints=nothing: the number $n$ of equality constraints.
  • evaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.
  • g=nothing: the inequality constraints
  • grad_g=nothing: the gradient of the inequality constraints
  • grad_h=nothing: the gradient of the equality constraints
  • gradient_range=nothing: specify how gradients are represented, where nothing is equivalent to NestedPowerRepresentation
  • gradient_equality_range=gradient_range: specify how the gradients of the equality constraints are represented
  • gradient_inequality_range=gradient_range: specify how the gradients of the inequality constraints are represented
  • h=nothing: the equality constraints
  • Hess_g=nothing: the Hessian of the inequality constraints
  • Hess_h=nothing: the Hessian of the equality constraints
  • inequality_constraints=nothing: the number $m$ of inequality constraints.
  • λ=ones(length(h(M, p))): the Lagrange multiplier with respect to the equality constraints $h$
  • μ=ones(length(g(M, p))): the Lagrange multiplier with respect to the inequality constraints $g$
  • retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions
  • ρ=μ's / length(μ): store the orthogonality μ's/m to compute the barrier parameter β in the sub problem.
  • s=copy(μ): initial value for the slack variables
  • σ=calculate_σ(M, cmo, p, μ, λ, s): scaling factor for the barrier parameter β in the sub problem, which is updated during the iterations
  • step_objective: a ManifoldGradientObjective of the norm of the KKT vector field KKTVectorFieldNormSq and its gradient KKTVectorFieldNormSqGradient
  • step_problem: the manifold $\mathcal M × ℝ^m × ℝ^n × ℝ^m$ together with the step_objective as the problem the linesearch stepsize= employs for determining a step size
  • step_state: the StepsizeState with point and search direction
  • stepsize=ArmijoLinesearch(): a functor inheriting from Stepsize to determine a step size, with the centrality_condition keyword as an additional criterion to accept a step, if provided
  • stopping_criterion=StopAfterIteration(200)|StopWhenKKTResidualLess(1e-8): a functor indicating that the stopping criterion is fulfilled; by default depending on the residual of the KKT vector field or a maximal number of iterations, whichever is hit first.
  • sub_kwargs=(;): keyword arguments to decorate the sub options, for example debug, that automatically respects the main solvers debug options (like sub-sampling) as well
  • sub_objective: The SymmetricLinearSystemObjective modelling the system of equations to use in the sub solver, includes the CondensedKKTVectorFieldJacobian $\mathcal A(X)$ and the CondensedKKTVectorField $b$ in $\mathcal A(X) + b = 0$ we aim to solve. This is used to define the sub_problem= keyword and has hence no effect, if you set sub_problem directly.
  • sub_stopping_criterion=StopAfterIteration(manifold_dimension(M))|StopWhenRelativeResidualLess(c,1e-8), where $c = \lVert b \rVert_{}$ from the system to solve. This is used to define the sub_state= keyword and has hence no effect, if you set sub_state directly.
  • sub_problem=DefaultManoptProblem(M, sub_objective): specify a problem for a solver or a closed form solution function, which can be allocating or in-place.
  • sub_state=ConjugateResidualState: a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.
  • vector_space=Rn: a function that, given an integer, returns the manifold to be used for the vector space components $ℝ^m,ℝ^n$
  • X=zero_vector(M,p): the initial gradient with respect to p
  • Y=zero(μ): the initial gradient with respect to μ
  • Z=zero(λ): the initial gradient with respect to λ
  • W=zero(s): the initial gradient with respect to s

As well as internal keywords used to set up these given keywords like _step_M, _step_p, _sub_M, _sub_p, and _sub_X, that should not be changed.

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective, respectively.

Note

The default centrality_condition=missing disables the centrality check during the line search, but you can pass InteriorPointCentralityCondition(cmo, γ), where γ is a constant, to activate this check.

Output

The obtained approximate constrained minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source
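A minimal, hypothetical call on the sphere may look as follows. The cost, its derivatives, and the single inequality constraint are made up for illustration (constraints given in the vector-valued convention so that length(g(M, p)) works for the μ default); only the documented positional and keyword arguments are used, and Manopt.jl and Manifolds.jl are assumed to be installed.

```julia
using Manopt, Manifolds  # assumes both packages are installed

# Minimise the height p[3] on the sphere, subject to g(p) ≤ 0, i.e. p[1] ≥ 0.
M = Sphere(2)
f(M, p) = p[3]
grad_f(M, p) = project(M, p, [0.0, 0.0, 1.0])
Hess_f(M, p, X) = -p[3] .* X                 # Hessian of a linear cost on the sphere
g(M, p) = [-p[1]]                            # one inequality constraint, as a vector
grad_g(M, p) = [project(M, p, [-1.0, 0.0, 0.0])]
Hess_g(M, p, X) = [p[1] .* X]

p0 = [1 / sqrt(2), 0.0, -1 / sqrt(2)]        # strictly feasible start: g(M, p0) < 0
q = interior_point_Newton(M, f, grad_f, Hess_f, p0; g=g, grad_g=grad_g, Hess_g=Hess_g)
```

Note that Hess_g is required here: as stated above, the Jacobian of the KKT vector field involves the Hessians of the constraints.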
Manopt.interior_point_Newton!Function
interior_point_Newton(M, f, grad_f, Hess_f, p=rand(M); kwargs...)
 interior_point_Newton(M, cmo::ConstrainedManifoldObjective, p=rand(M); kwargs...)
 interior_point_Newton!(M, f, grad_f, Hess_f, p; kwargs...)
 interior_point_Newton!(M, cmo::ConstrainedManifoldObjective, p; kwargs...)

perform the interior point Newton method following [LY24].

In order to solve the constrained problem

\[\begin{aligned}
\min_{p ∈ \mathcal M} & f(p)\\
\text{subject to}\quad & g_i(p) ≤ 0 \quad \text{ for } i=1,…,m,\\
\quad & h_j(p) = 0 \quad \text{ for } j=1,…,n,
\end{aligned}\]

This algorithm iteratively solves a linear system based on extending the KKT system by a slack variable s.

\[\operatorname{J} F(p, μ, λ, s)[X, Y, Z, W] = -F(p, μ, λ, s), \text{ where } X ∈ T_{p}\mathcal M,\ Y, W ∈ ℝ^m,\ Z ∈ ℝ^n,\]

see CondensedKKTVectorFieldJacobian and CondensedKKTVectorField, respectively, for the reduced form this system is usually solved in. From the resulting X and Z of the reduced form, the remaining two, $Y$ and $W$, are then computed.

From the gradient $(X,Y,Z,W)$ at the current iterate $(p, μ, λ, s)$, a line search is performed using the KKTVectorFieldNormSq norm of the KKT vector field (squared) and its gradient KKTVectorFieldNormSqGradient together with the InteriorPointCentralityCondition.

Note that since the vector field $F$ includes the gradients of the constraint functions $g, h$, its gradient or Jacobian requires the Hessians of the constraints.

For that search direction a line search is performed, which additionally ensures that the constraints remain fulfilled.

Input

  • M: a Riemannian manifold $\mathcal M$
  • f: a cost function $f: \mathcal M→ ℝ$ implemented as (M, p) -> v
  • grad_f: the (Riemannian) gradient $\operatorname{grad} f: \mathcal M → T_{p}\mathcal M$ of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place
  • Hess_f: the (Riemannian) Hessian $\operatorname{Hess} f: T_{p}\mathcal M → T_{p}\mathcal M$ of f as a function (M, p, X) -> Y or a function (M, Y, p, X) -> Y computing Y in-place
  • p: a point on the manifold $\mathcal M$

or a ConstrainedManifoldObjective cmo containing f, grad_f, Hess_f, and the constraints

Keyword arguments

The keyword arguments related to the constraints (the first eleven) are ignored if you pass a ConstrainedManifoldObjective cmo

  • centrality_condition=missing; an additional condition when to accept a step size. This can be used to ensure that the resulting iterate is still an interior point if you provide a check (N,q) -> true/false, where N is the manifold of the step_problem.
  • equality_constraints=nothing: the number $n$ of equality constraints.
  • evaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.
  • g=nothing: the inequality constraints
  • grad_g=nothing: the gradient of the inequality constraints
  • grad_h=nothing: the gradient of the equality constraints
  • gradient_range=nothing: specify how gradients are represented, where nothing is equivalent to NestedPowerRepresentation
  • gradient_equality_range=gradient_range: specify how the gradients of the equality constraints are represented
  • gradient_inequality_range=gradient_range: specify how the gradients of the inequality constraints are represented
  • h=nothing: the equality constraints
  • Hess_g=nothing: the Hessian of the inequality constraints
  • Hess_h=nothing: the Hessian of the equality constraints
  • inequality_constraints=nothing: the number $m$ of inequality constraints.
  • λ=ones(length(h(M, p))): the Lagrange multiplier with respect to the equality constraints $h$
  • μ=ones(length(g(M, p))): the Lagrange multiplier with respect to the inequality constraints $g$
  • retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions
  • ρ=μ's / length(μ): store the orthogonality μ's/m to compute the barrier parameter β in the sub problem.
  • s=copy(μ): initial value for the slack variables
  • σ=calculate_σ(M, cmo, p, μ, λ, s): scaling factor for the barrier parameter β in the sub problem, which is updated during the iterations
  • step_objective: a ManifoldGradientObjective of the norm of the KKT vector field KKTVectorFieldNormSq and its gradient KKTVectorFieldNormSqGradient
  • step_problem: the manifold $\mathcal M × ℝ^m × ℝ^n × ℝ^m$ together with the step_objective as the problem the linesearch stepsize= employs for determining a step size
  • step_state: the StepsizeState with point and search direction
  • stepsize=ArmijoLinesearch(): a functor inheriting from Stepsize to determine a step size, with the centrality_condition keyword as an additional criterion to accept a step, if provided
  • stopping_criterion=StopAfterIteration(200)|StopWhenKKTResidualLess(1e-8): a functor indicating that the stopping criterion is fulfilled; by default depending on the residual of the KKT vector field or a maximal number of iterations, whichever is hit first.
  • sub_kwargs=(;): keyword arguments to decorate the sub options, for example debug, that automatically respects the main solvers debug options (like sub-sampling) as well
  • sub_objective: The SymmetricLinearSystemObjective modelling the system of equations to use in the sub solver, includes the CondensedKKTVectorFieldJacobian $\mathcal A(X)$ and the CondensedKKTVectorField $b$ in $\mathcal A(X) + b = 0$ we aim to solve. This is used to define the sub_problem= keyword and has hence no effect, if you set sub_problem directly.
  • sub_stopping_criterion=StopAfterIteration(manifold_dimension(M))|StopWhenRelativeResidualLess(c,1e-8), where $c = \lVert b \rVert_{}$ from the system to solve. This is used to define the sub_state= keyword and has hence no effect, if you set sub_state directly.
  • sub_problem=DefaultManoptProblem(M, sub_objective): specify a problem for a solver or a closed form solution function, which can be allocating or in-place.
  • sub_state=ConjugateResidualState: a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.
  • vector_space=Rn: a function that, given an integer, returns the manifold to be used for the vector space components $ℝ^m,ℝ^n$
  • X=zero_vector(M,p): the initial gradient with respect to p
  • Y=zero(μ): the initial gradient with respect to μ
  • Z=zero(λ): the initial gradient with respect to λ
  • W=zero(s): the initial gradient with respect to s

As well as internal keywords used to set up these given keywords like _step_M, _step_p, _sub_M, _sub_p, and _sub_X, that should not be changed.

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective, respectively.

Note

The default centrality_condition=missing disables the centrality check during the line search, but you can pass InteriorPointCentralityCondition(cmo, γ), where γ is a constant, to activate this check.

Output

The obtained approximate constrained minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source

State

Manopt.InteriorPointNewtonStateType
InteriorPointNewtonState{P,T} <: AbstractHessianSolverState

Fields

  • λ: the Lagrange multiplier with respect to the equality constraints
  • μ: the Lagrange multiplier with respect to the inequality constraints
  • p::P: a point on the manifold $\mathcal M$ storing the current iterate
  • s: the current slack variable
  • sub_problem::Union{AbstractManoptProblem, F}: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.
  • sub_state::Union{AbstractManoptSolverState, F}: a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.
  • X: the current gradient with respect to p
  • Y: the current gradient with respect to μ
  • Z: the current gradient with respect to λ
  • W: the current gradient with respect to s
  • ρ: store the orthogonality μ's/m to compute the barrier parameter β in the sub problem
  • σ: scaling factor for the barrier parameter β in the sub problem
  • stop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled
  • retraction_method::AbstractRetractionMethod: a retraction $\operatorname{retr}$ to use, see the section on retractions
  • stepsize::Stepsize: a functor inheriting from Stepsize to determine a step size
  • step_problem: an AbstractManoptProblem storing the manifold and objective for the line search
  • step_state: storing iterate and search direction in a state for the line search, see StepsizeState

Constructor

InteriorPointNewtonState(
+X ∈ T_{p}\mathcal M, Y,W ∈ ℝ^m, Z ∈ ℝ^n,\]

see CondensedKKTVectorFieldJacobian and CondensedKKTVectorField, respectively, for the reduced form, this is usually solved in. From the resulting X and Z in the reeuced form, the other two, $Y$, $W$, are then computed.

From the gradient $(X,Y,Z,W)$ at the current iterate $(p, μ, λ, s)$, a line search is performed using the KKTVectorFieldNormSq norm of the KKT vector field (squared) and its gradient KKTVectorFieldNormSqGradient together with the InteriorPointCentralityCondition.

Note that since the vector field $F$ includes the gradients of the constraint functions $g, h$, its gradient or Jacobian requires the Hessians of the constraints.

For that seach direction a line search is performed, that additionally ensures that the constraints are further fulfilled.

Input

  • M: a Riemannian manifold $\mathcal M$
  • f: a cost function $f: \mathcal M→ ℝ$ implemented as (M, p) -> v
  • grad_f: the (Riemannian) gradient $\operatorname{grad}f$: \mathcal M → T_{p}\mathcal M of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place
  • Hess_f: the (Riemannian) Hessian $\operatorname{Hess}f$: T{p}\mathcal M → T{p}\mathcal M of f as a function (M, p, X) -> Y or a function (M, Y, p, X) -> Y computing Y in-place
  • p: a point on the manifold $\mathcal M$

or a ConstrainedManifoldObjective cmo containing f, grad_f, Hess_f, and the constraints

Keyword arguments

The keyword arguments related to the constraints (the first eleven) are ignored if you pass a ConstrainedManifoldObjective cmo

  • centrality_condition=missing; an additional condition when to accept a step size. This can be used to ensure that the resulting iterate is still an interior point if you provide a check (N,q) -> true/false, where N is the manifold of the step_problem.
  • equality_constraints=nothing: the number $n$ of equality constraints.
  • evaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.
  • g=nothing: the inequality constraints
  • grad_g=nothing: the gradient of the inequality constraints
  • grad_h=nothing: the gradient of the equality constraints
  • gradient_range=nothing: specify how gradients are represented, where nothing is equivalent to NestedPowerRepresentation
  • gradient_equality_range=gradient_range: specify how the gradients of the equality constraints are represented
  • gradient_inequality_range=gradient_range: specify how the gradients of the inequality constraints are represented
  • h=nothing: the equality constraints
  • Hess_g=nothing: the Hessian of the inequality constraints
  • Hess_h=nothing: the Hessian of the equality constraints
  • inequality_constraints=nothing: the number $m$ of inequality constraints.
  • λ=ones(length(h(M, p))): the Lagrange multiplier with respect to the equality constraints $h$
  • μ=ones(length(g(M, p))): the Lagrange multiplier with respect to the inequality constraints $g$
  • retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions
  • ρ=μ's / length(μ): store the orthogonality μ's/m to compute the barrier parameter β in the sub problem.
  • s=copy(μ): initial value for the slack variables
  • σ=calculate_σ(M, cmo, p, μ, λ, s): scaling factor for the barrier parameter β in the sub problem, which is updated during the iterations
  • step_objective: a ManifoldGradientObjective of the norm of the KKT vector field KKTVectorFieldNormSq and its gradient KKTVectorFieldNormSqGradient
  • step_problem: the manifold $\mathcal M × ℝ^m × ℝ^n × ℝ^m$ together with the step_objective as the problem the linesearch stepsize= employs for determining a step size
  • step_state: the StepsizeState with point and search direction
  • stepsize=ArmijoLinesearch(): a functor inheriting from Stepsize to determine a step size, with the centrality_condition keyword as an additional criterion to accept a step, if this is provided
  • stopping_criterion=StopAfterIteration(200)|StopWhenKKTResidualLess(1e-8): a functor indicating that the stopping criterion is fulfilled; by default this depends on the residual of the KKT vector field or a maximal number of iterations, whichever hits first.
  • sub_kwargs=(;): keyword arguments to decorate the sub options, for example debug, that automatically respects the main solvers debug options (like sub-sampling) as well
  • sub_objective: The SymmetricLinearSystemObjective modelling the system of equations to use in the sub solver, includes the CondensedKKTVectorFieldJacobian $\mathcal A(X)$ and the CondensedKKTVectorField $b$ in $\mathcal A(X) + b = 0$ we aim to solve. This is used to define the sub_problem= keyword and has hence no effect, if you set sub_problem directly.
  • sub_stopping_criterion=StopAfterIteration(manifold_dimension(M))|StopWhenRelativeResidualLess(c,1e-8), where $c = \lVert b \rVert$ from the system to solve. This is used to define the sub_state= keyword and hence has no effect if you set sub_state directly.
  • sub_problem=DefaultManoptProblem(M, sub_objective): specify a problem for a solver or a closed form solution function, which can be allocating or in-place.
  • sub_state=ConjugateResidualState: a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.
  • vector_space=Rn: a function that, given an integer, returns the manifold to be used for the vector space components $ℝ^m,ℝ^n$
  • X=zero_vector(M,p): the initial gradient with respect to p.
  • Y=zero(μ): the initial gradient with respect to μ
  • Z=zero(λ): the initial gradient with respect to λ
  • W=zero(s): the initial gradient with respect to s

There are also internal keyword arguments used to set up the keywords above, like _step_M, _step_p, _sub_M, _sub_p, and _sub_X, that should not be changed.

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective, respectively.

Note

The centrality_condition=missing default disables the centrality check during the line search, but you can pass InteriorPointCentralityCondition(cmo, γ), where γ is a constant, to activate this check.

Output

The obtained approximate constrained minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source
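The interplay of the ρ and σ keywords above can be illustrated with a small base-Julia sketch; the formula β = σρ for the barrier parameter is an assumption read off the keyword descriptions, not Manopt's literal code.

```julia
# Hypothetical sketch of the barrier parameter from the keyword defaults above:
# ρ = μ's / m is the duality (orthogonality) measure, σ scales it to β = σρ.
μ = [0.4, 0.1, 0.1]        # Lagrange multipliers of the inequality constraints
s = [1.0, 2.0, 4.0]        # slack variables
ρ = (μ' * s) / length(μ)   # orthogonality measure μ's/m
σ = 0.5                    # scaling factor, here at its upper bound 1/2
β = σ * ρ
```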

State

Manopt.InteriorPointNewtonStateType
InteriorPointNewtonState{P,T} <: AbstractHessianSolverState

Fields

  • λ: the Lagrange multiplier with respect to the equality constraints
  • μ: the Lagrange multiplier with respect to the inequality constraints
  • p::P: a point on the manifold $\mathcal M$ storing the current iterate
  • s: the current slack variable
  • sub_problem::Union{AbstractManoptProblem, F}: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.
  • sub_state::Union{AbstractManoptSolverState, F}: a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.
  • X: the current gradient with respect to p
  • Y: the current gradient with respect to μ
  • Z: the current gradient with respect to λ
  • W: the current gradient with respect to s
  • ρ: store the orthogonality μ's/m to compute the barrier parameter β in the sub problem
  • σ: scaling factor for the barrier parameter β in the sub problem
  • stop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled
  • retraction_method::AbstractRetractionMethod: a retraction $\operatorname{retr}$ to use, see the section on retractions
  • stepsize::Stepsize: a functor inheriting from Stepsize to determine a step size
  • step_problem: an AbstractManoptProblem storing the manifold and objective for the line search
  • step_state: storing iterate and search direction in a state for the line search, see StepsizeState

Constructor

InteriorPointNewtonState(
     M::AbstractManifold,
     cmo::ConstrainedManifoldObjective,
     sub_problem::Pr,
     sub_state::St;
     kwargs...
)

Initialize the state, where both the AbstractManifold and the ConstrainedManifoldObjective are used to fill in reasonable defaults for the keywords.

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • cmo: a ConstrainedManifoldObjective
  • sub_problem: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.
  • sub_state: a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.

Keyword arguments

Let m and n denote the number of inequality and equality constraints, respectively

and internally _step_M and _step_p for the manifold and point in the stepsize.

source

Subproblem functions

Manopt.CondensedKKTVectorFieldType
CondensedKKTVectorField{O<:ConstrainedManifoldObjective,T,R} <: AbstractConstrainedSlackFunctor{T,R}

Given the constrained optimization problem

\[\begin{aligned}
\min_{p ∈\mathcal{M}} &f(p)\\
\text{subject to } &g_i(p)\leq 0 \quad \text{ for } i= 1, …, m,\\
\quad &h_j(p)=0 \quad \text{ for } j=1,…,n,
\end{aligned}\]

the CondensedKKTVectorField reads

\[\begin{pmatrix}
\operatorname{grad}_p \mathcal L(p, μ, λ) + \displaystyle\sum_{i=1}^m \frac{1}{s_i}\bigl( μ_i(g_i(p)+s_i) + β - μ_is_i \bigr)\operatorname{grad} g_i(p)\\
h(p)
\end{pmatrix}\]

Fields

  • cmo the ConstrainedManifoldObjective
  • μ::T the vector in $ℝ^m$ of coefficients for the inequality constraints
  • s::T the vector in $ℝ^m$ of slack variables
  • β::R the barrier parameter $β∈ℝ$

Constructor

CondensedKKTVectorField(cmo, μ, s, β)
source
Manopt.CondensedKKTVectorFieldJacobianType
CondensedKKTVectorFieldJacobian{O<:ConstrainedManifoldObjective,T,R}  <: AbstractConstrainedSlackFunctor{T,R}

Given the constrained optimization problem

\[\begin{aligned}
\min_{p ∈\mathcal{M}} &f(p)\\
\text{subject to } &g_i(p)\leq 0 \quad \text{ for } i= 1, …, m,\\
\quad &h_j(p)=0 \quad \text{ for } j=1,…,n,
\end{aligned}\]

the Jacobian of the CondensedKKTVectorField, applied to a direction $[X, Y]$, reads

\[\begin{pmatrix}
\operatorname{Hess}_p \mathcal L(p, μ, λ)[X]
+ \displaystyle\sum_{i=1}^m \frac{μ_i}{s_i}⟨\operatorname{grad} g_i(p), X⟩\operatorname{grad} g_i(p)
+ \displaystyle\sum_{j=1}^n Y_j \operatorname{grad} h_j(p) \\
\Bigl( ⟨\operatorname{grad} h_j(p), X⟩ \Bigr)_{j=1}^n
\end{pmatrix}\]

Fields

  • cmo the ConstrainedManifoldObjective
  • μ::V the vector in $ℝ^m$ of coefficients for the inequality constraints
  • s::V the vector in $ℝ^m$ of slack variables
  • β::R the barrier parameter $β∈ℝ$

Constructor

CondensedKKTVectorFieldJacobian(cmo, μ, s, β)
source
Manopt.KKTVectorFieldType
KKTVectorField{O<:ConstrainedManifoldObjective}

Implement the vector field $F$ of the KKT conditions, including a slack variable for the inequality constraints.

Given the LagrangianCost

\[\mathcal L(p; μ, λ) = f(p) + \sum_{i=1}^m μ_ig_i(p) + \sum_{j=1}^n λ_jh_j(p)\]

the LagrangianGradient

\[\operatorname{grad}\mathcal L(p, μ, λ) = \operatorname{grad}f(p) + \sum_{j=1}^n λ_j \operatorname{grad} h_j(p) + \sum_{i=1}^m μ_i \operatorname{grad} g_i(p),\]

and introducing the slack variables $s=-g(p) ∈ ℝ^m$ the vector field is given by

\[F(p, μ, λ, s) = \begin{pmatrix}
\operatorname{grad}_p \mathcal L(p, μ, λ)\\
g(p) + s\\
h(p)\\
μ ⊙ s
\end{pmatrix}, \text{ where } p \in \mathcal M, μ, s \in ℝ^m\text{ and } λ \in ℝ^n,\]

where $⊙$ denotes the Hadamard (or elementwise) product

Fields

While the point p is arbitrary and usually not needed, it serves as internal memory in the computations. Furthermore, both fields together also clarify the product manifold structure to use.

Constructor

KKTVectorField(cmo::ConstrainedManifoldObjective)

Example

Define F = KKTVectorField(cmo) for some ConstrainedManifoldObjective cmo and let N be the product manifold of $\mathcal M×ℝ^m×ℝ^n×ℝ^m$. Then, you can call this cost as F(N, q) or as the in-place variant F(N, Y, q), where q is a point on N and Y is a tangent vector at q for the result.

source
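The structure of $F$ can be mimicked in a plain Euclidean sketch; this toy problem (quadratic cost, one linear inequality, one linear equality) is an assumption for illustration, not Manopt's implementation.

```julia
# Euclidean sketch of the KKT vector field: f(p) = ‖p‖² on ℝ² with one
# inequality g(p) = p₁ - 1 ≤ 0 and one equality h(p) = p₁ + p₂ - 1 = 0.
grad_f(p) = 2p
g(p) = [p[1] - 1];          grad_g(p) = [[1.0, 0.0]]
h(p) = [p[1] + p[2] - 1];   grad_h(p) = [[1.0, 1.0]]

function F(p, μ, λ, s)
    # grad_p L = grad f + Σᵢ μᵢ grad gᵢ + Σⱼ λⱼ grad hⱼ
    gradL = grad_f(p) +
        sum(μ[i] * grad_g(p)[i] for i in eachindex(μ)) +
        sum(λ[j] * grad_h(p)[j] for j in eachindex(λ))
    return (gradL, g(p) + s, h(p), μ .* s)  # μ .* s is the Hadamard product μ ⊙ s
end

X, gs, hp, μs = F([0.5, 0.5], [0.1], [0.2], [0.5])
```

At a KKT point with exact complementarity all four components vanish simultaneously.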
Manopt.KKTVectorFieldJacobianType
KKTVectorFieldJacobian{O<:ConstrainedManifoldObjective}

Implement the Jacobian of the vector field $F$ of the KKT conditions, including a slack variable for the inequality constraints, see KKTVectorField and KKTVectorFieldAdjointJacobian.

\[\operatorname{J} F(p, μ, λ, s)[X, Y, Z, W] = \begin{pmatrix}
\operatorname{Hess}_p \mathcal L(p, μ, λ)[X] + \displaystyle\sum_{i=1}^m Y_i \operatorname{grad} g_i(p) + \displaystyle\sum_{j=1}^n Z_j \operatorname{grad} h_j(p)\\
\Bigl( ⟨\operatorname{grad} g_i(p), X⟩ + W_i\Bigr)_{i=1}^m\\
\Bigl( ⟨\operatorname{grad} h_j(p), X⟩ \Bigr)_{j=1}^n\\
μ ⊙ W + s ⊙ Y
\end{pmatrix},\]

where $⊙$ denotes the Hadamard (or elementwise) product

See also the LagrangianHessian $\operatorname{Hess}_p \mathcal L(p, μ, λ)[X]$.

Fields

Constructor

KKTVectorFieldJacobian(cmo::ConstrainedManifoldObjective)

Generate the Jacobian of the KKT vector field related to some ConstrainedManifoldObjective cmo.

Example

Define JF = KKTVectorFieldJacobian(cmo) for some ConstrainedManifoldObjective cmo and let N be the product manifold of $\mathcal M×ℝ^m×ℝ^n×ℝ^m$. Then, you can call this cost as JF(N, q, Y) or as the in-place variant JF(N, Z, q, Y), where q is a point on N and Y and Z are a tangent vector at q.

source
Manopt.KKTVectorFieldAdjointJacobianType
KKTVectorFieldAdjointJacobian{O<:ConstrainedManifoldObjective}

Implement the adjoint of the Jacobian of the vector field $F$ of the KKT conditions, including a slack variable for the inequality constraints, see KKTVectorField and KKTVectorFieldJacobian.

\[\operatorname{J}^* F(p, μ, λ, s)[X, Y, Z, W] = \begin{pmatrix}
\operatorname{Hess}_p \mathcal L(p, μ, λ)[X] + \displaystyle\sum_{i=1}^m Y_i \operatorname{grad} g_i(p) + \displaystyle\sum_{j=1}^n Z_j \operatorname{grad} h_j(p)\\
\Bigl( ⟨\operatorname{grad} g_i(p), X⟩ + s_iW_i\Bigr)_{i=1}^m\\
\Bigl( ⟨\operatorname{grad} h_j(p), X⟩ \Bigr)_{j=1}^n\\
μ ⊙ W + Y
\end{pmatrix},\]

where $⊙$ denotes the Hadamard (or elementwise) product

See also the LagrangianHessian $\operatorname{Hess}_p \mathcal L(p, μ, λ)[X]$.

Fields

Constructor

KKTVectorFieldAdjointJacobian(cmo::ConstrainedManifoldObjective)

Generate the Adjoint Jacobian of the KKT vector field related to some ConstrainedManifoldObjective cmo.

Example

Define AdJF = KKTVectorFieldAdjointJacobian(cmo) for some ConstrainedManifoldObjective cmo and let N be the product manifold of $\mathcal M×ℝ^m×ℝ^n×ℝ^m$. Then, you can call this cost as AdJF(N, q, Y) or as the in-place variant AdJF(N, Z, q, Y), where q is a point on N and Y and Z are a tangent vector at q.

source
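That the Jacobian and its adjoint fit together can be checked numerically on a Euclidean toy problem; the quadratic cost and linear constraints below (so $\operatorname{Hess}_p \mathcal L = 2I$) are an assumption for illustration, not Manopt's code.

```julia
# Check ⟨JF[V], U⟩ = ⟨V, J*F[U]⟩ for f(p) = ‖p‖² on ℝ² with
# g(p) = p₁ - 1 and h(p) = p₁ + p₂ - 1.
using LinearAlgebra
gg = [1.0, 0.0]   # grad g(p), constant here
gh = [1.0, 1.0]   # grad h(p), constant here
μ, s = [0.3], [0.7]

JF(X, Y, Z, W)    = (2X + Y[1]*gg + Z[1]*gh, [dot(gg, X) + W[1]],      [dot(gh, X)], μ .* W + s .* Y)
JFadj(X, Y, Z, W) = (2X + Y[1]*gg + Z[1]*gh, [dot(gg, X) + s[1]*W[1]], [dot(gh, X)], μ .* W + Y)

pair(u, v) = sum(dot.(u, v))   # inner product on the product space

V = ([1.0, -2.0], [0.5], [-1.0], [2.0])
U = ([0.25, 0.5], [-3.0], [0.75], [1.5])
lhs = pair(JF(V...), U)
rhs = pair(V, JFadj(U...))
```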
Manopt.KKTVectorFieldNormSqType
KKTVectorFieldNormSq{O<:ConstrainedManifoldObjective}

Implement the squared norm of the vector field $F$ of the KKT conditions, including a slack variable for the inequality constraints, see KKTVectorField, to which this functor applies the norm. In [LY24] this is called the merit function.

Fields

Constructor

KKTVectorFieldNormSq(cmo::ConstrainedManifoldObjective)

Example

Define f = KKTVectorFieldNormSq(cmo) for some ConstrainedManifoldObjective cmo and let N be the product manifold of $\mathcal M×ℝ^m×ℝ^n×ℝ^m$. Then, you can call this cost as f(N, q), where q is a point on N.

source
Manopt.KKTVectorFieldNormSqGradientType
KKTVectorFieldNormSqGradient{O<:ConstrainedManifoldObjective}

Compute the gradient of the KKTVectorFieldNormSq $φ(p,μ,λ,s) = \lVert F(p,μ,λ,s)\rVert^2$, that is of the norm squared of the KKTVectorField $F$.

This is given in [LY24] as the gradient of their merit function, which we can write with the adjoint $J^*$ of the Jacobian

\[\operatorname{grad} φ = 2\operatorname{J}^* F(p, μ, λ, s)[F(p, μ, λ, s)],\]

and hence is computed with KKTVectorFieldAdjointJacobian and KKTVectorField.

For completeness, the gradient reads, using the LagrangianGradient $L = \operatorname{grad}_p \mathcal L(p,μ,λ) ∈ T_p\mathcal M$, for a shorthand of the first component of $F$, as

\[\operatorname{grad} φ
= 2 \begin{pmatrix}
\operatorname{Hess}_p \mathcal L(p,μ,λ)[L] + \displaystyle\sum_{i=1}^m (g_i(p) + s_i)\operatorname{grad} g_i(p) + \displaystyle\sum_{j=1}^n h_j(p)\operatorname{grad} h_j(p)\\
\Bigl( ⟨\operatorname{grad} g_i(p), L⟩ \Bigr)_{i=1}^m + μ ⊙ s ⊙ s\\
\Bigl( ⟨\operatorname{grad} h_j(p), L⟩ \Bigr)_{j=1}^n\\
g(p) + s + μ ⊙ μ ⊙ s
\end{pmatrix},\]

where $⊙$ denotes the Hadamard (or elementwise) product.

Fields

Constructor

KKTVectorFieldNormSqGradient(cmo::ConstrainedManifoldObjective)

Example

Define grad_f = KKTVectorFieldNormSqGradient(cmo) for some ConstrainedManifoldObjective cmo and let N be the product manifold of $\mathcal M×ℝ^m×ℝ^n×ℝ^m$. Then, you can call this gradient as grad_f(N, q) or as the in-place variant grad_f(N, Y, q), where q is a point on N and Y is a tangent vector at q for the result.

source
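The identity $\operatorname{grad} φ = 2\operatorname{J}^* F[F]$ can be verified with finite differences on a Euclidean toy problem; the quadratic cost and linear constraints below (so that $\operatorname{Hess}_p \mathcal L = 2I$) are an assumption for illustration, not Manopt's code.

```julia
# Finite-difference check of grad φ = 2 J*F(q)[F(q)] for f(p) = ‖p‖² on ℝ²
# with g(p) = p₁ - 1 and h(p) = p₁ + p₂ - 1.
using LinearAlgebra
gg = [1.0, 0.0]; gh = [1.0, 1.0]    # grad g and grad h, constant here

F(q) = begin                         # q = [p₁, p₂, μ, λ, s]
    p = q[1:2]; μ = q[3]; λ = q[4]; s = q[5]
    vcat(2p + μ * gg + λ * gh,       # grad_p L(p, μ, λ)
         (p[1] - 1) + s,             # g(p) + s
         p[1] + p[2] - 1,            # h(p)
         μ * s)                      # μ ⊙ s
end
φ(q) = sum(abs2, F(q))

gradφ(q) = begin                     # 2 J*F(q)[F(q)], with Hess_p L = 2I
    μ = q[3]; s = q[5]
    v = F(q); X = v[1:2]; Y = v[3]; Z = v[4]; W = v[5]
    2 * vcat(2X + Y * gg + Z * gh, dot(gg, X) + s * W, dot(gh, X), μ * W + Y)
end

q = [0.5, 0.25, 0.3, 0.2, 0.7]
e(i) = [j == i ? 1.0 : 0.0 for j in 1:5]
ε = 1e-6
fd = [(φ(q + ε * e(i)) - φ(q - ε * e(i))) / (2ε) for i in 1:5]
```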

Helpers

Manopt.InteriorPointCentralityConditionType
InteriorPointCentralityCondition{CO,R}

A functor to check the centrality condition.

In order to obtain a step in the linesearch performed within interior_point_Newton, Section 6 of [LY24] proposes the following additional conditions, inspired by the Euclidean case described in Section 6 of [ETTZ96]:

For a given ConstrainedManifoldObjective, consider the KKTVectorField $F$; that is, we are at a point $q = (p, λ, μ, s)$ on $\mathcal M × ℝ^m × ℝ^n × ℝ^m$ and have a search direction $V = (X, Y, Z, W)$.

Then, let

\[τ_1 = \frac{m⋅\min\{ μ ⊙ s\}}{μ^{\mathrm{T}}s} \quad\text{ and }\quad τ_2 = \frac{μ^{\mathrm{T}}s}{\lVert F(q) \rVert}\]

where $⊙$ denotes the Hadamard (or elementwise) product.

For a new candidate $q(α) = \bigl(p(α), λ(α), μ(α), s(α)\bigr) := (\operatorname{retr}_p(αX), λ+αY, μ+αZ, s+αW)$, we then define two functions

\[c_1(α) = \min\{ μ(α) ⊙ s(α) \} - \frac{γτ_1 μ(α)^{\mathrm{T}}s(α)}{m}
\quad\text{ and }\quad
c_2(α) = μ(α)^{\mathrm{T}}s(α) - γτ_2 \lVert F(q(α)) \rVert.\]

While the paper now states that the (Armijo) linesearch starts at a point $\tilde α$, it is easier to include the condition that $c_1(α) ≥ 0$ and $c_2(α) ≥ 0$ into the linesearch as well.

The functor InteriorPointCentralityCondition(cmo, γ, μ, s, normKKT)(N,qα) defined here evaluates this condition and returns true if both $c_1$ and $c_2$ are nonnegative.

Fields

Constructor

InteriorPointCentralityCondition(cmo, γ)
InteriorPointCentralityCondition(cmo, γ, τ1, τ2)

Initialise the centrality conditions. The parameters τ1, τ2 are initialised to zero if not provided.

Note

Besides get_parameter for all three constants, and set_parameter! for $γ$, call set_parameter!(ipcc, :τ, N, q) to update both $τ_1$ and $τ_2$ according to the formulae above.

source
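The quantities $τ_1$, $τ_2$ and the checks $c_1, c_2 ≥ 0$ can be sketched with plain vectors; treating $\lVert F \rVert$ as a given number and the default $γ = 0.9$ below are assumptions for illustration, not Manopt's code.

```julia
# τ₁, τ₂ from the current (μ, s, ‖F(q)‖), then the c₁, c₂ ≥ 0 check for a
# candidate (μα, sα, ‖F(q(α))‖).
function τ(μ, s, normF)
    m = length(μ)
    return m * minimum(μ .* s) / (μ' * s), (μ' * s) / normF
end
function centrality_ok(μα, sα, normFα, τ1, τ2; γ=0.9)  # γ: hypothetical default
    m = length(μα)
    c1 = minimum(μα .* sα) - γ * τ1 * (μα' * sα) / m
    c2 = (μα' * sα) - γ * τ2 * normFα
    return c1 ≥ 0 && c2 ≥ 0
end

τ1, τ2 = τ([1.0, 1.0], [1.0, 1.0], 2.0)   # perfectly centered data: τ₁ = τ₂ = 1
ok = centrality_ok([0.9, 1.1], [1.1, 0.9], 1.0, τ1, τ2)
```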
Manopt.calculate_σFunction
calculate_σ(M, cmo, p, μ, λ, s; kwargs...)

Compute the new $σ$ factor for the barrier parameter in interior_point_Newton as

\[\min\{\frac{1}{2}, \lVert F(p; μ, λ, s)\rVert^{\frac{1}{2}} \},\]

where $F$ is the KKT vector field, hence the KKTVectorFieldNormSq is used.

Keyword arguments

  • vector_space=Rn: a function that, given an integer, returns the manifold to be used for the vector space components $ℝ^m,ℝ^n$
  • N: the manifold $\mathcal M × ℝ^m × ℝ^n × ℝ^m$ the vector field lives on (generated using vector_space)
  • q: provide memory on N for interim evaluation of the vector field
source
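The update itself is a one-liner; the sketch below assumes the norm $\lVert F \rVert$ has already been evaluated (Manopt computes it on the product manifold).

```julia
# σ = min(1/2, ‖F‖^{1/2}): drive the barrier parameter down as the KKT
# residual shrinks, capped at 1/2 far from a solution.
σ_from_norm(normF) = min(0.5, sqrt(normF))

σ_small = σ_from_norm(0.04)   # small residual: σ = ‖F‖^{1/2}
σ_large = σ_from_norm(9.0)    # large residual: capped at 1/2
```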

Additional stopping criteria

Manopt.StopWhenKKTResidualLessType
StopWhenKKTResidualLess <: StoppingCriterion

Stop when the KKT residual

\[r^2 = \lVert \operatorname{grad}_p \mathcal L(p, μ, λ) \rVert^2
+ \sum_{i=1}^m \bigl( [μ_i]_{-}^2 + [g_i(p)]_+^2 + \lvert μ_ig_i(p)\rvert^2 \bigr)
+ \sum_{j=1}^n \lvert h_j(p)\rvert^2\]

is less than a given threshold $ε$, that is $r < ε$. We use $[v]_+ = \max\{0,v\}$ and $[v]_- = \min\{0,v\}$ for the positive and negative part of $v$, respectively

Fields

  • ε: a threshold
  • residual: store the last residual if the stopping criterion is hit.
  • at_iteration: store the iteration at which the stopping criterion was (last) fulfilled
source
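The residual formula above can be sketched with plain vectors; the helper below is an assumption for illustration, not Manopt's implementation.

```julia
# KKT residual r with [v]₊ = max(0, v) and [v]₋ = min(0, v): gradient of the
# Lagrangian, multiplier sign, feasibility, complementarity, and equality parts.
pos(v) = max(0.0, v)
neg(v) = min(0.0, v)
function kkt_residual(gradL, gp, μ, hp)
    r2 = sum(abs2, gradL) +
         sum(neg(μ[i])^2 + pos(gp[i])^2 + (μ[i] * gp[i])^2 for i in eachindex(μ)) +
         sum(abs2, hp)
    return sqrt(r2)
end

r_kkt  = kkt_residual([0.0, 0.0], [-0.5], [0.0], [0.0])  # KKT point: r = 0
r_grad = kkt_residual([3.0, 4.0], [0.0], [0.0], [0.0])   # only grad_p L ≠ 0: r = 5
```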

References

[ETTZ96]
A. S. El-Bakry, R. A. Tapia, T. Tsuchiya and Y. Zhang. On the formulation and theory of the Newton interior-point method for nonlinear programming. Journal of Optimization Theory and Applications 89, 507–541 (1996).
[LY24]
Z. Lai and A. Yoshise. Riemannian Interior Point Methods for Constrained Optimization on Manifolds. Journal of Optimization Theory and Applications 201, 433–469 (2024), arXiv:2203.09762.
dev/solvers/particle_swarm/index.html

\[p_k^{(i+1)} = \begin{cases} x_k^{(i+1)}, & \text{if } F(x_k^{(i+1)})<F(p_{k}^{(i)}),\\ p_{k}^{(i)}, & \text{else,} \end{cases}\]

and the global best position

\[g^{(i+1)} = \begin{cases} p_k^{(i+1)}, & \text{if } F(p_k^{(i+1)})<F(g_{k}^{(i)}),\\ g_{k}^{(i)}, & \text{else,} \end{cases}\]

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • f: a cost function $f: \mathcal M→ ℝ$ implemented as (M, p) -> v
  • swarm = [rand(M) for _ in 1:swarm_size]: an initial swarm of points.

Instead of a cost function f you can also provide an AbstractManifoldCostObjective mco.

Keyword Arguments

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively. If you provide the objective directly, these decorations can still be specified

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source
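The two best-position updates above can be sketched in plain Julia; the Euclidean positions in ℝ² and the cost below are an assumption for illustration, not Manopt's implementation.

```julia
# Personal best p_k and global best g updates for a swarm of candidate positions.
F(x) = sum(abs2, x)

function update_bests!(pbest, gbest, positions)
    for k in eachindex(positions)
        if F(positions[k]) < F(pbest[k])   # personal best update
            pbest[k] = positions[k]
        end
        if F(pbest[k]) < F(gbest[])        # global best update
            gbest[] = pbest[k]
        end
    end
    return gbest[]
end

pbest = [[1.0, 1.0], [2.0, 0.0]]
gbest = Ref([1.0, 1.0])
g = update_bests!(pbest, gbest, [[0.5, 0.0], [3.0, 3.0]])
```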
Manopt.particle_swarm!Function
particle_swarm(M, f; kwargs...)
 particle_swarm(M, f, swarm; kwargs...)
 particle_swarm(M, mco::AbstractManifoldCostObjective; kwargs...)
 particle_swarm(M, mco::AbstractManifoldCostObjective, swarm; kwargs...)
\[p_k^{(i+1)} = \begin{cases} x_k^{(i+1)}, & \text{if } F(x_k^{(i+1)})<F(p_{k}^{(i)}),\\ p_{k}^{(i)}, & \text{else,} \end{cases}\]

and the global best position

\[g^{(i+1)} = \begin{cases} p_k^{(i+1)}, & \text{if } F(p_k^{(i+1)})<F(g_{k}^{(i)}),\\ g_{k}^{(i)}, & \text{else,} \end{cases}\]

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • f: a cost function $f: \mathcal M→ ℝ$ implemented as (M, p) -> v
  • swarm = [rand(M) for _ in 1:swarm_size]: an initial swarm of points.

Instead of a cost function f you can also provide an AbstractManifoldCostObjective mco.

Keyword Arguments

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively. If you provide the objective directly, these decorations can still be specified

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source

State

Manopt.ParticleSwarmStateType
ParticleSwarmState{P,T} <: AbstractManoptSolverState

Describes a particle swarm optimizing algorithm, with

Fields

  • cognitive_weight: a cognitive weight factor
  • inertia: the inertia of the particles
  • inverse_retraction_method::AbstractInverseRetractionMethod: an inverse retraction $\operatorname{retr}^{-1}$ to use, see the section on retractions and their inverses
  • retraction_method::AbstractRetractionMethod: a retraction $\operatorname{retr}$ to use, see the section on retractions
  • social_weight: a social weight factor
  • stop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled
  • vector_transport_method::AbstractVectorTransportMethodP: a vector transport $\mathcal T_{⋅←⋅}$ to use, see the section on vector transports
  • velocity: a set of tangent vectors (of type AbstractVector{T}) representing the velocities of the particles

Internal and temporary fields

  • cognitive_vector: temporary storage for a tangent vector related to cognitive_weight
  • p::P: a point on the manifold $\mathcal M$ storing the best point visited by all particles
  • positional_best: storing the best position $p_i$ every single swarm participant visited
  • q::P: a point on the manifold $\mathcal M$ serving as temporary storage for interims results; avoids allocations
  • social_vec: temporary storage for a tangent vector related to social_weight
  • swarm: a set of points (of type AbstractVector{P}) on a manifold $\{a_i\}_{i=1}^{N}$

Constructor

ParticleSwarmState(M, initial_swarm, velocity; kwargs...)

construct a particle swarm solver state for the manifold M, starting with the initial population initial_swarm with velocities velocity. The P used in the following defaults is the type of one point from the swarm.

Keyword arguments

See also

particle_swarm

source

Stopping criteria

Manopt.StopWhenSwarmVelocityLessType
StopWhenSwarmVelocityLess <: StoppingCriterion

Stopping criterion for particle_swarm, when the velocity of the swarm is less than a threshold.

Fields

  • threshold: the threshold
  • at_iteration: store the iteration the stopping criterion was (last) fulfilled
  • reason: store the reason why the stopping criterion was fulfilled, see get_reason
  • velocity_norms: interim vector to store the norms of the velocities before computing their overall norm

Constructor

StopWhenSwarmVelocityLess(tolerance::Float64)

initialize the stopping criterion to a certain tolerance.

source

Technical details

The particle_swarm solver requires the following functions of a manifold to be available

  • A retract!(M, q, p, X); it is recommended to set the default_retraction_method to a favourite retraction. If this default is set, a retraction_method= does not have to be specified.
  • An inverse_retract!(M, X, p, q); it is recommended to set the default_inverse_retraction_method to a favourite inverse retraction. If this default is set, an inverse_retraction_method= does not have to be specified.
  • A vector_transport_to!(M, Y, p, X, q); it is recommended to set the default_vector_transport_method to a favourite vector transport. If this default is set, a vector_transport_method= does not have to be specified.
  • By default the stopping criterion uses the norm as well, to stop when the norm of the gradient is small, but if you implemented inner, the norm is provided already.
  • Tangent vectors storing the social and cognitive vectors are initialized calling zero_vector(M,p).
  • A copyto!(M, q, p) and copy(M, p) for points.
  • The distance(M, p, q) when using the default stopping criterion, which uses StopWhenChangeLess.

Literature

[BIA10]
P. B. Borckmans, M. Ishteva and P.-A. Absil. A Modified Particle Swarm Optimization Algorithm for the Best Low Multilinear Rank Approximation of Higher-Order Tensors. In: 7th International Conference on Swarm INtelligence (Springer Berlin Heidelberg, 2010); pp. 13–23.
+\end{cases}\]

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • f: a cost function $f: \mathcal M→ ℝ$ implemented as (M, p) -> v
  • swarm = [rand(M) for _ in 1:swarm_size]: an initial swarm of points.

Instead of a cost function f you can also provide an AbstractManifoldCostObjective mco.

Keyword Arguments

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively. If you provide the objective directly, these decorations can still be specified.

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source
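To illustrate the scheme the solver implements, here is a minimal, self-contained sketch of the classical particle swarm update in the Euclidean case, where the retraction is vector addition, the inverse retraction is subtraction, and vector transport is the identity. The cost function, weights, swarm size, and iteration count below are illustrative choices, not Manopt.jl defaults:

```julia
using Random

# Euclidean particle swarm sketch; all parameter values are illustrative.
function pso(f, swarm, velocity; inertia=0.65, cognitive=1.4, social=1.4, iterations=200)
    best = copy.(swarm)                 # positional_best: best point per particle
    g = best[argmin(map(f, best))]      # p: best point visited by all particles
    for _ in 1:iterations
        for i in eachindex(swarm)
            r, s = rand(), rand()
            # inverse_retract(M, p, q) = q - p and identity transport in ℝⁿ
            velocity[i] = inertia .* velocity[i] .+
                          cognitive * r .* (best[i] .- swarm[i]) .+
                          social * s .* (g .- swarm[i])
            swarm[i] = swarm[i] .+ velocity[i]    # retract(M, p, X) = p + X
            f(swarm[i]) < f(best[i]) && (best[i] = copy(swarm[i]))
        end
        g = best[argmin(map(f, best))]
    end
    return g
end

Random.seed!(7)
f(x) = sum(abs2, x .- [1.0, 2.0])       # illustrative cost, minimum at (1, 2)
p_star = pso(f, [randn(2) for _ in 1:20], [zeros(2) for _ in 1:20])
```

On a sphere or another manifold, the subtraction, identity transport, and addition above are replaced by the inverse retraction, vector transport, and retraction named in the keyword arguments.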

State

Manopt.ParticleSwarmStateType
ParticleSwarmState{P,T} <: AbstractManoptSolverState

Describes a particle swarm optimization algorithm, with

Fields

  • cognitive_weight: a cognitive weight factor
  • inertia: the inertia of the particles
  • inverse_retraction_method::AbstractInverseRetractionMethod: an inverse retraction $\operatorname{retr}^{-1}$ to use, see the section on retractions and their inverses
  • retraction_method::AbstractRetractionMethod: a retraction $\operatorname{retr}$ to use, see the section on retractions
  • social_weight: a social weight factor
  • stop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled
  • vector_transport_method::AbstractVectorTransportMethod: a vector transport $\mathcal T_{⋅←⋅}$ to use, see the section on vector transports
  • velocity: a set of tangent vectors (of type AbstractVector{T}) representing the velocities of the particles

Internal and temporary fields

  • cognitive_vector: temporary storage for a tangent vector related to cognitive_weight
  • p::P: a point on the manifold $\mathcal M$ storing the best point visited by all particles
  • positional_best: storing the best position $p_i$ every single swarm participant visited
  • q::P: a point on the manifold $\mathcal M$ serving as temporary storage for interims results; avoids allocations
  • social_vec: temporary storage for a tangent vector related to social_weight
  • swarm: a set of points (of type AbstractVector{P}) on a manifold $\{a_i\}_{i=1}^{N}$

Constructor

ParticleSwarmState(M, initial_swarm, velocity; kwargs...)

construct a particle swarm solver state for the manifold M starting with the initial population initial_swarm with velocities. The type P used in the defaults below is the type of one point from the swarm.

Keyword arguments

See also

particle_swarm

source

Stopping criteria

Manopt.StopWhenSwarmVelocityLessType
StopWhenSwarmVelocityLess <: StoppingCriterion

Stopping criterion for particle_swarm, when the velocity of the swarm is less than a threshold.

Fields

  • threshold: the threshold for the norm of the swarm velocities
  • at_iteration: store the iteration the stopping criterion was (last) fulfilled
  • reason: store the reason why the stopping criterion was fulfilled, see get_reason
  • velocity_norms: interim vector to store the norms of the velocities before computing their overall norm

Constructor

StopWhenSwarmVelocityLess(tolerance::Float64)

initialize the stopping criterion to a certain tolerance.

source
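The check this criterion performs can be sketched in plain Julia: collect the norm of each particle's velocity into velocity_norms, then compare the norm of that vector to the threshold. The velocities and tolerance below are made-up values for illustration:

```julia
velocity = [[1e-4, 0.0], [0.0, 2e-4]]                    # illustrative swarm velocities
velocity_norms = [sqrt(sum(abs2, X)) for X in velocity]  # per-particle norms
stopped = sqrt(sum(abs2, velocity_norms)) < 1e-3         # overall norm vs. threshold
```

On a manifold, each per-particle norm would be computed with norm(M, p, X), which in turn falls back to the inner product inner.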

Technical details

The particle_swarm solver requires the following functions of a manifold to be available

  • A retract!(M, q, p, X); it is recommended to set the default_retraction_method to a favourite retraction. If this default is set, a retraction_method= does not have to be specified.
  • An inverse_retract!(M, X, p, q); it is recommended to set the default_inverse_retraction_method to a favourite inverse retraction. If this default is set, an inverse_retraction_method= does not have to be specified.
  • A vector_transport_to!(M, Y, p, X, q); it is recommended to set the default_vector_transport_method to a favourite vector transport. If this default is set, a vector_transport_method= does not have to be specified.
  • By default the stopping criterion uses the norm as well, to stop when the norm of the swarm velocities is small; if you implemented inner, the norm is provided already.
  • Tangent vectors storing the social and cognitive vectors are initialized calling zero_vector(M,p).
  • A copyto!(M, q, p) and copy(M, p) for points.
  • The distance(M, p, q) when using the default stopping criterion, which uses StopWhenChangeLess.

Literature

[BIA10]
P. B. Borckmans, M. Ishteva and P.-A. Absil. A Modified Particle Swarm Optimization Algorithm for the Best Low Multilinear Rank Approximation of Higher-Order Tensors. In: 7th International Conference on Swarm Intelligence (Springer Berlin Heidelberg, 2010); pp. 13–23.

Primal-dual Riemannian semismooth Newton algorithm

The primal-dual Riemannian semismooth Newton algorithm is a second-order method derived from the ChambollePock solver.

The aim is to solve an optimization problem on a manifold with a cost function of the form

\[F(p) + G(Λ(p)),\]

where $F:\mathcal M → \overline{ℝ}$, $G:\mathcal N → \overline{ℝ}$, and $Λ:\mathcal M →\mathcal N$. If the manifolds $\mathcal M$ or $\mathcal N$ are not Hadamard, it has to be considered locally only, that is on geodesically convex sets $\mathcal C \subset \mathcal M$ and $\mathcal D \subset\mathcal N$ such that $Λ(\mathcal C) \subset \mathcal D$.

The algorithm comes down to applying the Riemannian semismooth Newton method to the rewritten primal-dual optimality conditions. Define the vector field $X: \mathcal{M} \times \mathcal{T}_{n}^{*} \mathcal{N} \rightarrow \mathcal{T} \mathcal{M} \times \mathcal{T}_{n}^{*} \mathcal{N}$ as

\[X\left(p, \xi_{n}\right):=\left(\begin{array}{c} -\log_{p} \operatorname{prox}_{\sigma F}\left(\exp_{p}\left(\mathcal{P}_{p \leftarrow m}\left(-\sigma\left(D_{m} \Lambda\right)^{*}\left[\mathcal{P}_{\Lambda(m) \leftarrow n} \xi_{n}\right]\right)^{\sharp}\right)\right) \\ \xi_{n}-\operatorname{prox}_{\tau G_{n}^{*}}\left(\xi_{n}+\tau\left(\mathcal{P}_{n \leftarrow \Lambda(m)} D_{m} \Lambda\left[\log_{m} p\right]\right)^{\flat}\right) \end{array}\right)\]

and solve for $X(p,ξ_{n})=0$.

Given base points $m∈\mathcal C$, $n=Λ(m)∈\mathcal D$, initial primal and dual values $p^{(0)} ∈\mathcal C$, $ξ_{n}^{(0)} ∈ \mathcal T_{n}^{*}\mathcal N$, and primal and dual step sizes $\sigma$, $\tau$, the algorithm performs the steps $k=1,…,$ (until a StoppingCriterion is reached):

  1. Choose any element

    \[V^{(k)} ∈ ∂_C X(p^{(k)},ξ_n^{(k)})\]

    of the Clarke generalized covariant derivative
  2. Solve

    \[V^{(k)} [(d_p^{(k)}, d_n^{(k)})] = - X(p^{(k)},ξ_n^{(k)})\]

    in the vector space $\mathcal{T}_{p^{(k)}} \mathcal{M} \times \mathcal{T}_{n}^{*} \mathcal{N}$
  3. Update

    \[p^{(k+1)} := \exp_{p^{(k)}}(d_p^{(k)})\]

    and

    \[ξ_n^{(k+1)} := ξ_n^{(k)} + d_n^{(k)}\]

Furthermore, you can replace the exponential map, the logarithmic map, and the parallel transport with a retraction, an inverse retraction, and a vector transport, respectively.

Finally you can also update the base points $m$ and $n$ during the iterations. This introduces a few additional vector transports. The same holds for the case that $Λ(m^{(k)})\neq n^{(k)}$ at some point. All these cases are covered in the algorithm.
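Manifold-specific details aside, the core mechanism — apply a generalized Newton step to a nonsmooth residual, using an element of the Clarke derivative in place of the Jacobian — can be sketched in the scalar Euclidean case. The model problem, step size, starting point, and derivative selection below are illustrative and not part of Manopt.jl:

```julia
# Solve min_x |x| + (1/2)(x - b)^2 via the forward-backward residual
# X(x) = x - prox_{σ|⋅|}(x - σ(x - b)), a semismooth equation. One admissible
# element of the Clarke derivative of soft-thresholding is 1 where |y| > σ
# and 0 at the kink.
soft(y, σ) = sign(y) * max(abs(y) - σ, 0.0)

function semismooth_newton(b; σ=0.5, x=0.0, iterations=20)
    for _ in 1:iterations
        y = x - σ * (x - b)
        X = x - soft(y, σ)            # residual
        s = abs(y) > σ ? 1.0 : 0.0    # Clarke derivative element of soft(⋅, σ)
        V = 1.0 - s * (1.0 - σ)       # derivative element of the residual
        x -= X / V                    # generalized Newton step
        abs(X) < 1e-12 && break
    end
    return x
end

x_star = semismooth_newton(2.0)       # minimizer of |x| + (1/2)(x - 2)^2 is 1
```

In the solver above, the residual is the vector field $X(p, ξ_n)$ and the role of the Clarke derivative element is played by $V^{(k)}$.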

Manopt.primal_dual_semismooth_NewtonFunction
primal_dual_semismooth_Newton(M, N, cost, p, X, m, n, prox_F, diff_prox_F, prox_G_dual, diff_prox_dual_G, linearized_operator, adjoint_linearized_operator)

Perform the Primal-Dual Riemannian semismooth Newton algorithm.

Given a cost function $\mathcal E: \mathcal M → \overline{ℝ}$ of the form

\[\mathcal E(p) = F(p) + G( Λ(p) ),\]

where $F: \mathcal M → \overline{ℝ}$, $G: \mathcal N → \overline{ℝ}$, and $Λ: \mathcal M → \mathcal N$. The remaining input parameters are

  • p, X: primal and dual start points $p∈\mathcal M$ and $X ∈ T_n\mathcal N$
  • m, n: base points on $\mathcal M$ and $\mathcal N$, respectively.
  • linearized_forward_operator: the linearization $DΛ(⋅)[⋅]$ of the operator $Λ(⋅)$.
  • adjoint_linearized_operator: the adjoint $DΛ^*$ of the linearized operator $DΛ(m): T_{m}\mathcal M → T_{Λ(m)}\mathcal N$
  • prox_F, prox_G_dual: the proximal maps of $F$ and $G^\ast_n$
  • diff_prox_F, diff_prox_dual_G: the (Clarke Generalized) differentials of the proximal maps of $F$ and $G^\ast_n$

For more details on the algorithm, see [DL21].

Keyword arguments

  • dual_stepsize=1/sqrt(8): proximal parameter of the dual prox
  • evaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.
  • inverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction $\operatorname{retr}^{-1}$ to use, see the section on retractions and their inverses
  • Λ=missing: the exact operator, required if Λ(m)=n does not hold; missing indicates that the forward operator is exact.
  • primal_stepsize=1/sqrt(8): proximal parameter of the primal prox
  • reg_param=1e-5: regularisation parameter for the Newton matrix. Note that this changes the arguments the forward_operator is called with.
  • retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions
  • stopping_criterion=StopAfterIteration(50): a functor indicating that the stopping criterion is fulfilled
  • update_primal_base=missing: function to update m (identity by default/missing)
  • update_dual_base=missing: function to update n (identity by default/missing)
  • vector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport $\mathcal T_{⋅←⋅}$ to use, see the section on vector transports

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source
Manopt.primal_dual_semismooth_Newton!Function
primal_dual_semismooth_Newton!(M, N, cost, p, X, m, n, prox_F, diff_prox_F, prox_G_dual, diff_prox_dual_G, linearized_operator, adjoint_linearized_operator)

Perform the Primal-Dual Riemannian semismooth Newton algorithm.

Given a cost function $\mathcal E: \mathcal M → \overline{ℝ}$ of the form

\[\mathcal E(p) = F(p) + G( Λ(p) ),\]

where $F: \mathcal M → \overline{ℝ}$, $G: \mathcal N → \overline{ℝ}$, and $Λ: \mathcal M → \mathcal N$. The remaining input parameters are

  • p, X: primal and dual start points $p∈\mathcal M$ and $X ∈ T_n\mathcal N$
  • m, n: base points on $\mathcal M$ and $\mathcal N$, respectively.
  • linearized_forward_operator: the linearization $DΛ(⋅)[⋅]$ of the operator $Λ(⋅)$.
  • adjoint_linearized_operator: the adjoint $DΛ^*$ of the linearized operator $DΛ(m): T_{m}\mathcal M → T_{Λ(m)}\mathcal N$
  • prox_F, prox_G_dual: the proximal maps of $F$ and $G^\ast_n$
  • diff_prox_F, diff_prox_dual_G: the (Clarke Generalized) differentials of the proximal maps of $F$ and $G^\ast_n$

For more details on the algorithm, see [DL21].

Keyword arguments

  • dual_stepsize=1/sqrt(8): proximal parameter of the dual prox
  • evaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.
  • inverse_retraction_method=default_inverse_retraction_method(M, typeof(p)): an inverse retraction $\operatorname{retr}^{-1}$ to use, see the section on retractions and their inverses
  • Λ=missing: the exact operator, required if Λ(m)=n does not hold; missing indicates that the forward operator is exact.
  • primal_stepsize=1/sqrt(8): proximal parameter of the primal prox
  • reg_param=1e-5: regularisation parameter for the Newton matrix. Note that this changes the arguments the forward_operator is called with.
  • retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions
  • stopping_criterion=StopAfterIteration(50): a functor indicating that the stopping criterion is fulfilled
  • update_primal_base=missing: function to update m (identity by default/missing)
  • update_dual_base=missing: function to update n (identity by default/missing)
  • vector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport $\mathcal T_{⋅←⋅}$ to use, see the section on vector transports

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source

State

Manopt.PrimalDualSemismoothNewtonStateType
PrimalDualSemismoothNewtonState <: AbstractPrimalDualSolverState

Fields

  • m::P: a point on the manifold $\mathcal M$
  • n::Q: a point on the manifold $\mathcal N$
  • p::P: a point on the manifold $\mathcal M$ storing the current iterate
  • X::T: a tangent vector at the point $p$ on the manifold $\mathcal M$
  • primal_stepsize::Float64: proximal parameter of the primal prox
  • dual_stepsize::Float64: proximal parameter of the dual prox
  • reg_param::Float64: regularisation parameter for the Newton matrix
  • stop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled
  • update_primal_base: function to update the primal base
  • update_dual_base: function to update the dual base
  • inverse_retraction_method::AbstractInverseRetractionMethod: an inverse retraction $\operatorname{retr}^{-1}$ to use, see the section on retractions and their inverses
  • retraction_method::AbstractRetractionMethod: a retraction $\operatorname{retr}$ to use, see the section on retractions
  • vector_transport_method::AbstractVectorTransportMethod: a vector transport $\mathcal T_{⋅←⋅}$ to use, see the section on vector transports

where for the update functions an AbstractManoptProblem amp, an AbstractManoptSolverState ams, and the current iterate i are the arguments. If you set these to be different from the default identity, you have to provide p.Λ for the algorithm to work (which might be missing).

Constructor

PrimalDualSemismoothNewtonState(M::AbstractManifold; kwargs...)

Generate a state for the primal_dual_semismooth_Newton.

Keyword arguments

source

Technical details

The primal_dual_semismooth_Newton solver requires the following functions of a manifold to be available for both the manifold $\mathcal M$ and $\mathcal N$.

Literature

[DL21]
W. Diepeveen and J. Lellmann. An Inexact Semismooth Newton Method on Riemannian Manifolds with Application to Duality-Based Total Variation Denoising. SIAM Journal on Imaging Sciences 14, 1565–1600 (2021), arXiv:2102.10309.

Proximal bundle method

Manopt.proximal_bundle_methodFunction
proximal_bundle_method(M, f, ∂f, p=rand(M), kwargs...)
proximal_bundle_method!(M, f, ∂f, p, kwargs...)

perform a proximal bundle method $p^{(k+1)} = \operatorname{retr}_{p^{(k)}}(-d_k)$, where $\operatorname{retr}$ is a retraction and

\[d_k = \frac{1}{\mu_k} \sum_{j\in J_k} λ_j^k \mathrm{P}_{p_k←q_j}X_{q_j},\]

with $X_{q_j} ∈ ∂f(q_j)$, $p_k$ the last serious iterate, $\mu_k$ a proximal parameter, and the $λ_j^k$ as solutions to the quadratic subproblem provided by the sub solver, see for example the proximal_bundle_method_subsolver.

Even though the subdifferential might be set-valued, the argument ∂f should always return one element from the subdifferential, although not necessarily deterministically.

For more details see [HNP23].
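In the Euclidean case, where the parallel transports $\mathrm{P}_{p_k←q_j}$ are the identity, the aggregation of the bundle into the direction $d_k$ reduces to a convex combination scaled by $1/μ_k$. The coefficients and subgradients below are made-up values standing in for a sub solver's output:

```julia
μ = 2.0                            # proximal parameter μ_k
λ = [0.25, 0.75]                   # convex coefficients from the quadratic subproblem
X = [[1.0, 0.0], [0.0, 2.0]]       # (transported) subgradients X_{q_j}
d = (1 / μ) * sum(λ[j] * X[j] for j in eachindex(λ))
p_next(p) = p .- d                 # retr_p(-d) = p - d in ℝ²
```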

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • f: a cost function $f: \mathcal M→ ℝ$ implemented as (M, p) -> v
  • ∂f: the subdifferential $∂f: \mathcal M → T\mathcal M$ of f, implemented as (M, p) -> X, returning one (arbitrary) element of the subdifferential at p
  • p: a point on the manifold $\mathcal M$

Keyword arguments

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source
Manopt.proximal_bundle_method!Function
proximal_bundle_method(M, f, ∂f, p=rand(M), kwargs...)
proximal_bundle_method!(M, f, ∂f, p, kwargs...)

perform a proximal bundle method $p^{(k+1)} = \operatorname{retr}_{p^{(k)}}(-d_k)$, where $\operatorname{retr}$ is a retraction and

\[d_k = \frac{1}{\mu_k} \sum_{j\in J_k} λ_j^k \mathrm{P}_{p_k←q_j}X_{q_j},\]

with $X_{q_j} ∈ ∂f(q_j)$, $p_k$ the last serious iterate, $\mu_k$ a proximal parameter, and the $λ_j^k$ as solutions to the quadratic subproblem provided by the sub solver, see for example the proximal_bundle_method_subsolver.

Even though the subdifferential might be set-valued, the argument ∂f should always return one element from the subdifferential, although not necessarily deterministically.

For more details see [HNP23].

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • f: a cost function $f: \mathcal M→ ℝ$ implemented as (M, p) -> v
  • ∂f: the subdifferential $∂f: \mathcal M → T\mathcal M$ of f, implemented as (M, p) -> X, returning one (arbitrary) element of the subdifferential at p
  • p: a point on the manifold $\mathcal M$

Keyword arguments

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source

State

Manopt.ProximalBundleMethodStateType
ProximalBundleMethodState <: AbstractManoptSolverState

stores option values for a proximal_bundle_method solver.

Fields

  • α: curvature-dependent parameter used to update η
  • α₀: initialization value for α, used to update η
  • approx_errors: approximation of the linearization errors at the last serious step
  • bundle: bundle that collects each iterate with the computed subgradient at the iterate
  • bundle_size: the maximal size of the bundle
  • c: convex combination of the approximation errors
  • d: descent direction
  • δ: parameter for updating μ: if $δ < 0$ then $μ = \log(i + 1)$, else $μ += δ μ$
  • ε: stepsize-like parameter related to the injectivity radius of the manifold
  • η: curvature-dependent term for updating the approximation errors
  • inverse_retraction_method::AbstractInverseRetractionMethod: an inverse retraction $\operatorname{retr}^{-1}$ to use, see the section on retractions and their inverses
  • λ: convex coefficients that solve the subproblem
  • m: the parameter to test the decrease of the cost
  • μ: (initial) proximal parameter for the subproblem
  • ν: the stopping parameter given by $ν = - μ |d|^2 - c$
  • p::P: a point on the manifold $\mathcal M$ storing the current iterate
  • p_last_serious: last serious iterate
  • retraction_method::AbstractRetractionMethod: a retraction $\operatorname{retr}$ to use, see the section on retractions
  • stop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled
  • transported_subgradients: subgradients of the bundle that are transported to p_last_serious
  • vector_transport_method::AbstractVectorTransportMethod: a vector transport $\mathcal T_{⋅←⋅}$ to use, see the section on vector transports
  • X::T: a tangent vector at the point $p$ on the manifold $\mathcal M$ storing a subgradient at the current iterate
  • sub_problem::Union{AbstractManoptProblem, F}: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.
  • sub_state::Union{AbstractManoptSolverState, F}: a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.

Constructor

ProximalBundleMethodState(M::AbstractManifold, sub_problem, sub_state; kwargs...)
ProximalBundleMethodState(M::AbstractManifold, sub_problem=proximal_bundle_method_subsolver; evaluation=AllocatingEvaluation(), kwargs...)

Generate the state for the proximal_bundle_method on the manifold M

Keyword arguments

source

Helpers and internal functions

Manopt.proximal_bundle_method_subsolverFunction
λ = proximal_bundle_method_subsolver(M, p_last_serious, μ, approximation_errors, transported_subgradients)
+proximal_bundle_method!(M, f, ∂f, p, kwargs...)

perform a proximal bundle method $p^{(k+1)} = \operatorname{retr}_{p^{(k)}}(-d_k)$, where $\operatorname{retr}$ is a retraction and

\[d_k = \frac{1}{\mu_k} \sum_{j\in J_k} λ_j^k \mathrm{P}_{p_k←q_j}X_{q_j},\]

with $X_{q_j} ∈ ∂f(q_j)$, $p_k$ the last serious iterate, $\mu_k$ a proximal parameter, and the $λ_j^k$ as solutions to the quadratic subproblem provided by the sub solver, see for example the proximal_bundle_method_subsolver.

Though the subdifferential might be set-valued, the argument ∂f should always return one element from the subdifferential; the choice need not be deterministic.

For more details see [HNP23].

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • f: a cost function $f: \mathcal M→ ℝ$ implemented as (M, p) -> v
  • ∂f: the subgradient $∂f: \mathcal M → T\mathcal M$ of f, returning one element of the subdifferential, implemented as (M, p) -> X or (M, X, p) -> X computing X in-place
  • p: a point on the manifold $\mathcal M$

Keyword arguments

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source
Manopt.proximal_bundle_method!Function
proximal_bundle_method(M, f, ∂f, p=rand(M), kwargs...)
 proximal_bundle_method!(M, f, ∂f, p, kwargs...)

perform a proximal bundle method $p^{(k+1)} = \operatorname{retr}_{p^{(k)}}(-d_k)$, where $\operatorname{retr}$ is a retraction and

\[d_k = \frac{1}{\mu_k} \sum_{j\in J_k} λ_j^k \mathrm{P}_{p_k←q_j}X_{q_j},\]

with $X_{q_j} ∈ ∂f(q_j)$, $p_k$ the last serious iterate, $\mu_k$ a proximal parameter, and the $λ_j^k$ as solutions to the quadratic subproblem provided by the sub solver, see for example the proximal_bundle_method_subsolver.

Though the subdifferential might be set-valued, the argument ∂f should always return one element from the subdifferential; the choice need not be deterministic.

For more details see [HNP23].

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • f: a cost function $f: \mathcal M→ ℝ$ implemented as (M, p) -> v
  • ∂f: the subgradient $∂f: \mathcal M → T\mathcal M$ of f, returning one element of the subdifferential, implemented as (M, p) -> X or (M, X, p) -> X computing X in-place
  • p: a point on the manifold $\mathcal M$

Keyword arguments

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source
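As an illustration, the following sketch computes a Riemannian median on the sphere, a classical nonsmooth problem. It assumes Manifolds.jl is loaded for Sphere, distance, log, and zero_vector, and that RipQP.jl and QuadraticModels are loaded so the default subsolver is available; the data points, tolerance, and starting point are made up for illustration.

```julia
using Manopt, Manifolds
using RipQP, QuadraticModels # required for the default quadratic subsolver

# Riemannian median of three points on the 2-sphere: the cost is a sum of
# distances and hence nonsmooth at the data points.
M = Sphere(2)
data = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
f(M, p) = sum(distance(M, p, q) for q in data)
# return one element of the subdifferential; away from the data points the
# cost is smooth and this is the gradient
function ∂f(M, p)
    X = zero_vector(M, p)
    for q in data
        d = distance(M, p, q)
        d > 1e-12 && (X .-= log(M, p, q) ./ d)
    end
    return X
end
p0 = [1.0, 0.0, 0.0]
p_star = proximal_bundle_method(M, f, ∂f, p0)
```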

State

Manopt.ProximalBundleMethodStateType
ProximalBundleMethodState <: AbstractManoptSolverState

stores option values for a proximal_bundle_method solver.

Fields

  • α: curvature-dependent parameter used to update η
  • α₀: initialization value for α, used to update η
  • approx_errors: approximation of the linearization errors at the last serious step
  • bundle: bundle that collects each iterate with the computed subgradient at the iterate
  • bundle_size: the maximal size of the bundle
  • c: convex combination of the approximation errors
  • d: descent direction
  • δ: parameter for updating μ: if $δ < 0$ then $μ = \log(i + 1)$, else $μ += δ μ$
  • ε: stepsize-like parameter related to the injectivity radius of the manifold
  • η: curvature-dependent term for updating the approximation errors
  • inverse_retraction_method::AbstractInverseRetractionMethod: an inverse retraction $\operatorname{retr}^{-1}$ to use, see the section on retractions and their inverses
  • λ: convex coefficients that solve the subproblem
  • m: the parameter to test the decrease of the cost
  • μ: (initial) proximal parameter for the subproblem
  • ν: the stopping parameter given by $ν = - μ |d|^2 - c$
  • p::P: a point on the manifold $\mathcal M$ storing the current iterate
  • p_last_serious: last serious iterate
  • retraction_method::AbstractRetractionMethod: a retraction $\operatorname{retr}$ to use, see the section on retractions
  • stop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled
  • transported_subgradients: subgradients of the bundle that are transported to p_last_serious
  • vector_transport_method::AbstractVectorTransportMethod: a vector transport $\mathcal T_{⋅←⋅}$ to use, see the section on vector transports
  • X::T: a tangent vector at the point $p$ on the manifold $\mathcal M$ storing a subgradient at the current iterate
  • sub_problem::Union{AbstractManoptProblem, F}: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.
  • sub_state::Union{AbstractManoptSolverState, F}: a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.

Constructor

ProximalBundleMethodState(M::AbstractManifold, sub_problem, sub_state; kwargs...)
ProximalBundleMethodState(M::AbstractManifold, sub_problem=proximal_bundle_method_subsolver; evaluation=AllocatingEvaluation(), kwargs...)

Generate the state for the proximal_bundle_method on the manifold M

Keyword arguments

source

Helpers and internal functions

Manopt.proximal_bundle_method_subsolverFunction
λ = proximal_bundle_method_subsolver(M, p_last_serious, μ, approximation_errors, transported_subgradients)
 proximal_bundle_method_subsolver!(M, λ, p_last_serious, μ, approximation_errors, transported_subgradients)

solver for the subproblem of the proximal bundle method.

The subproblem for the proximal bundle method is

\[\begin{align*} \operatorname*{arg\,min}_{λ ∈ ℝ^{\lvert L_l\rvert}}\quad & \frac{1}{2 \mu_l} \Bigl\lVert \sum_{j ∈ L_l} λ_j \mathrm{P}_{p_k←q_j} X_{q_j} \Bigr\rVert^2 + \sum_{j ∈ L_l} λ_j c_j \\ \text{s.t.}\quad & \sum_{j ∈ L_l} λ_j = 1, \quad λ_j ≥ 0 \quad \text{for all } j ∈ L_l, \end{align*}\]

where $L_l = \{k\}$ if $q_k$ is a serious iterate, and $L_l = L_{l-1} \cup \{k\}$ otherwise. See [HNP23].

Tip

A default subsolver based on RipQP.jl and QuadraticModels is available if these two packages are loaded.

source

Literature

[HNP23]
N. Hoseini Monjezi, S. Nobakhtian and M. R. Pouryayevali. A proximal bundle algorithm for nonsmooth optimization on Riemannian manifolds. IMA Journal of Numerical Analysis 43, 293–325 (2023).

Proximal point method

Manopt.proximal_pointFunction
proximal_point(M, prox_f, p=rand(M); kwargs...)
 proximal_point(M, mpmo, p=rand(M); kwargs...)
 proximal_point!(M, prox_f, p; kwargs...)
 proximal_point!(M, mpmo, p; kwargs...)

Perform the proximal point algorithm from [FO02] which reads

\[p^{(k+1)} = \operatorname{prox}_{λ_kf}(p^{(k)})\]

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • prox_f: a proximal map (M,λ,p) -> q or (M, q, λ, p) -> q for the summands of $f$ (see evaluation)

Keyword arguments

  • evaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.
  • f=nothing: a cost function $f: \mathcal M→ℝ$ to minimize. For running the algorithm, $f$ is not required, but for example when recording the cost or using a stopping criterion that requires a cost function.
  • λ= k -> 1.0: a function returning the (square summable but not summable) sequence of $λ_i$
  • stopping_criterion=StopAfterIteration(200)|StopWhenChangeLess(1e-12): a functor indicating that the stopping criterion is fulfilled

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source
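A minimal sketch of the first call form, assuming Manifolds.jl is loaded for the sphere: for the cost $f(p) = \frac{1}{2}d(p,q)^2$ the proximal map has a closed form as a step along the shortest geodesic towards $q$ (valid within the injectivity radius); the point $q$ and starting point below are made up for illustration.

```julia
using Manopt, Manifolds

# Minimize f(p) = ½ d(p, q)² on the 2-sphere. Its proximal map is
# prox_{λf}(p) = γ(p, q; λ/(1+λ)), a point on the geodesic from p towards q.
M = Sphere(2)
q = [0.0, 0.0, 1.0]
prox_f(M, λ, p) = shortest_geodesic(M, p, q, λ / (1 + λ))
p0 = [1.0, 0.0, 0.0]
p_star = proximal_point(M, prox_f, p0)  # the iterates approach the minimizer q
```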
Manopt.proximal_point!Function
proximal_point(M, prox_f, p=rand(M); kwargs...)
 proximal_point(M, mpmo, p=rand(M); kwargs...)
 proximal_point!(M, prox_f, p; kwargs...)
 proximal_point!(M, mpmo, p; kwargs...)

Perform the proximal point algorithm from [FO02] which reads

\[p^{(k+1)} = \operatorname{prox}_{λ_kf}(p^{(k)})\]

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • prox_f: a proximal map (M,λ,p) -> q or (M, q, λ, p) -> q for the summands of $f$ (see evaluation)

Keyword arguments

  • evaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.
  • f=nothing: a cost function $f: \mathcal M→ℝ$ to minimize. For running the algorithm, $f$ is not required, but for example when recording the cost or using a stopping criterion that requires a cost function.
  • λ= k -> 1.0: a function returning the (square summable but not summable) sequence of $λ_i$
  • stopping_criterion=StopAfterIteration(200)|StopWhenChangeLess(1e-12): a functor indicating that the stopping criterion is fulfilled

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source

State

Manopt.ProximalPointStateType
ProximalPointState{P} <: AbstractGradientSolverState

Fields

  • p::P: a point on the manifold $\mathcal M$ storing the current iterate
  • stop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled
  • λ: a function for the values of $λ_k$ per iteration (cycle $k$)

Constructor

ProximalPointState(M::AbstractManifold; kwargs...)

Initialize the proximal point method solver state.

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$

Keyword arguments

  • λ=k -> 1.0: a function to compute the $λ_k$, $k ∈ \mathbb N$
  • p=rand(M): a point on the manifold $\mathcal M$ to specify the initial value
  • stopping_criterion=StopAfterIteration(100): a functor indicating that the stopping criterion is fulfilled

See also

proximal_point

source

Literature
[FO02]
O. Ferreira and P. R. Oliveira. Proximal point algorithm on Riemannian manifolds. Optimization. A Journal of Mathematical Programming and Operations Research 51, 257–270 (2002).

Riemannian quasi-Newton methods

Manopt.quasi_NewtonFunction
quasi_Newton(M, f, grad_f, p; kwargs...)
 quasi_Newton!(M, f, grad_f, p; kwargs...)

Perform a quasi-Newton iteration to solve

\[\operatorname{arg\,min}_{p ∈ \mathcal M} f(p)\]

with start point p. The iterations can be done in-place of p, where $p=p^{(0)}$. The $k$th iteration consists of

  1. Compute the search direction $η^{(k)} = -\mathcal B_k [\operatorname{grad}f (p^{(k)})]$ or solve $\mathcal H_k [η^{(k)}] = -\operatorname{grad}f (p^{(k)})$.
  2. Determine a suitable stepsize $α_k$ along the curve $γ(α) = R_{p^{(k)}}(α η^{(k)})$, usually by using WolfePowellLinesearch.
  3. Compute $p^{(k+1)} = R_{p^{(k)}}(α_k η^{(k)})$.
  4. Define $s_k = \mathcal T_{p^{(k)}, α_k η^{(k)}}(α_k η^{(k)})$ and $y_k = \operatorname{grad}f(p^{(k+1)}) - \mathcal T_{p^{(k)}, α_k η^{(k)}}(\operatorname{grad}f(p^{(k)}))$, where $\mathcal T$ denotes a vector transport.
  5. Compute the new approximate Hessian $H_{k+1}$ or its inverse $B_{k+1}$.

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • f: a cost function $f: \mathcal M→ ℝ$ implemented as (M, p) -> v
  • grad_f: the (Riemannian) gradient $\operatorname{grad}f: \mathcal M → T\mathcal M$ of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place
  • p: a point on the manifold $\mathcal M$

Keyword arguments

  • basis=DefaultOrthonormalBasis(): basis to use within each of the tangent spaces to represent the Hessian (inverse) for the cases where it is stored in full (matrix) form.
  • cautious_update=false: whether or not to use the QuasiNewtonCautiousDirectionUpdate which wraps the direction_update.
  • cautious_function=(x) -> x * 1e-4: a monotone increasing function for the cautious update that is zero at $x=0$ and strictly increasing at $0$
  • direction_update=InverseBFGS(): the AbstractQuasiNewtonUpdateRule to use.
  • evaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second. For example grad_f(M,p) allocates, but grad_f!(M, X, p) computes the result in-place of X.
  • initial_operator= initial_scale*Matrix{Float64}(I, n, n): initial matrix to use in case the Hessian (inverse) approximation is stored as a full matrix, that is n=manifold_dimension(M). This matrix is only allocated for the full matrix case. See also initial_scale.
  • initial_scale=1.0: scale parameter $s$ to use with $\frac{s⟨s_k,y_k⟩_{p_k}}{\lVert y_k\rVert_{p_k}}$ in the computation of the limited memory approach. See also initial_operator
  • memory_size=20: limited memory, number of $s_k, y_k$ to store. Set to a negative value to use a full memory (matrix) representation
  • nondescent_direction_behavior=:reinitialize_direction_update: specify how non-descent direction is handled. This can be
    • :step_towards_negative_gradient: the direction is replaced with negative gradient, a message is stored.
    • :ignore: the verification is not performed, so any computed direction is accepted. No message is stored.
    • :reinitialize_direction_update: discards operator state stored in direction update rules.
    • any other value performs the verification, keeps the direction but stores a message.
    A stored message can be displayed using DebugMessages.
  • project!=copyto!: for numerical stability it is possible to project onto the tangent space after every iteration. The function has to work in-place of Y, that is (M, Y, p, X) -> Y, where X and Y can be the same memory.
  • retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions
  • stepsize=WolfePowellLinesearch(retraction_method, vector_transport_method): a functor inheriting from Stepsize to determine a step size
  • stopping_criterion=StopAfterIteration(max(1000, memory_size))|StopWhenGradientNormLess(1e-6): a functor indicating that the stopping criterion is fulfilled
  • vector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport $\mathcal T_{⋅←⋅}$ to use, see the section on vector transports

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source
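A common illustration of this call, sketched here under the assumption that Manifolds.jl is loaded, is the Rayleigh quotient on the sphere; the matrix and starting point are made up for illustration.

```julia
using Manopt, Manifolds, LinearAlgebra

# Minimize f(p) = pᵀAp over the unit sphere; the minimizer is an
# eigenvector for the smallest eigenvalue of A.
A = Diagonal([3.0, 2.0, 1.0])
M = Sphere(2)
f(M, p) = p' * A * p
# Riemannian gradient: project the Euclidean gradient onto T_p M
grad_f(M, p) = project(M, p, 2 * A * p)
p0 = [1.0, 1.0, 1.0] / sqrt(3.0)
p_star = quasi_Newton(M, f, grad_f, p0)  # should approach ±[0, 0, 1]
```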
Manopt.quasi_Newton!Function
quasi_Newton(M, f, grad_f, p; kwargs...)
 quasi_Newton!(M, f, grad_f, p; kwargs...)

Perform a quasi-Newton iteration to solve

\[\operatorname{arg\,min}_{p ∈ \mathcal M} f(p)\]

with start point p. The iterations can be done in-place of p, where $p=p^{(0)}$. The $k$th iteration consists of

  1. Compute the search direction $η^{(k)} = -\mathcal B_k [\operatorname{grad}f (p^{(k)})]$ or solve $\mathcal H_k [η^{(k)}] = -\operatorname{grad}f (p^{(k)})$.
  2. Determine a suitable stepsize $α_k$ along the curve $γ(α) = R_{p^{(k)}}(α η^{(k)})$, usually by using WolfePowellLinesearch.
  3. Compute $p^{(k+1)} = R_{p^{(k)}}(α_k η^{(k)})$.
  4. Define $s_k = \mathcal T_{p^{(k)}, α_k η^{(k)}}(α_k η^{(k)})$ and $y_k = \operatorname{grad}f(p^{(k+1)}) - \mathcal T_{p^{(k)}, α_k η^{(k)}}(\operatorname{grad}f(p^{(k)}))$, where $\mathcal T$ denotes a vector transport.
  5. Compute the new approximate Hessian $H_{k+1}$ or its inverse $B_{k+1}$.

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • f: a cost function $f: \mathcal M→ ℝ$ implemented as (M, p) -> v
  • grad_f: the (Riemannian) gradient $\operatorname{grad}f: \mathcal M → T\mathcal M$ of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place
  • p: a point on the manifold $\mathcal M$

Keyword arguments

  • basis=DefaultOrthonormalBasis(): basis to use within each of the tangent spaces to represent the Hessian (inverse) for the cases where it is stored in full (matrix) form.
  • cautious_update=false: whether or not to use the QuasiNewtonCautiousDirectionUpdate which wraps the direction_update.
  • cautious_function=(x) -> x * 1e-4: a monotone increasing function for the cautious update that is zero at $x=0$ and strictly increasing at $0$
  • direction_update=InverseBFGS(): the AbstractQuasiNewtonUpdateRule to use.
  • evaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second. For example grad_f(M,p) allocates, but grad_f!(M, X, p) computes the result in-place of X.
  • initial_operator= initial_scale*Matrix{Float64}(I, n, n): initial matrix to use in case the Hessian (inverse) approximation is stored as a full matrix, that is n=manifold_dimension(M). This matrix is only allocated for the full matrix case. See also initial_scale.
  • initial_scale=1.0: scale parameter $s$ to use with $\frac{s⟨s_k,y_k⟩_{p_k}}{\lVert y_k\rVert_{p_k}}$ in the computation of the limited memory approach. See also initial_operator
  • memory_size=20: limited memory, number of $s_k, y_k$ to store. Set to a negative value to use a full memory (matrix) representation
  • nondescent_direction_behavior=:reinitialize_direction_update: specify how non-descent direction is handled. This can be
    • :step_towards_negative_gradient: the direction is replaced with negative gradient, a message is stored.
    • :ignore: the verification is not performed, so any computed direction is accepted. No message is stored.
    • :reinitialize_direction_update: discards operator state stored in direction update rules.
    • any other value performs the verification, keeps the direction but stores a message.
    A stored message can be displayed using DebugMessages.
  • project!=copyto!: for numerical stability it is possible to project onto the tangent space after every iteration. The function has to work in-place of Y, that is (M, Y, p, X) -> Y, where X and Y can be the same memory.
  • retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions
  • stepsize=WolfePowellLinesearch(retraction_method, vector_transport_method): a functor inheriting from Stepsize to determine a step size
  • stopping_criterion=StopAfterIteration(max(1000, memory_size))|StopWhenGradientNormLess(1e-6): a functor indicating that the stopping criterion is fulfilled
  • vector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport $\mathcal T_{⋅←⋅}$ to use, see the section on vector transports

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source

Background

The aim is to minimize a real-valued function on a Riemannian manifold, that is

\[\min f(x), \quad x ∈ \mathcal{M}.\]

Riemannian quasi-Newton methods are, as generalizations of their Euclidean counterparts, Riemannian line search methods: they determine a search direction $η_k ∈ T_{x_k} \mathcal{M}$ at the current iterate $x_k$ and a suitable stepsize $α_k$ along $\gamma(α) = R_{x_k}(α η_k)$, where $R: T \mathcal{M} → \mathcal{M}$ is a retraction. The next iterate is obtained by

\[x_{k+1} = R_{x_k}(α_k η_k).\]

In quasi-Newton methods, the search direction is given by

\[η_k = -{\mathcal{H}_k}^{-1}[\operatorname{grad}f (x_k)] = -\mathcal{B}_k [\operatorname{grad}f (x_k)],\]

where $\mathcal{H}_k : T_{x_k} \mathcal{M} → T_{x_k} \mathcal{M}$ is a positive definite self-adjoint operator which approximates the action of the Hessian $\operatorname{Hess} f (x_k)[⋅]$, and $\mathcal{B}_k = {\mathcal{H}_k}^{-1}$. The idea of quasi-Newton methods is that, instead of creating a completely new approximation of the Hessian operator $\operatorname{Hess} f(x_{k+1})$ or its inverse at every iteration, the previous operator $\mathcal{H}_k$ or $\mathcal{B}_k$ is updated by a convenient formula using the information about the curvature of the objective function obtained during the iteration. The resulting operator $\mathcal{H}_{k+1}$ or $\mathcal{B}_{k+1}$ acts on the tangent space $T_{x_{k+1}} \mathcal{M}$ of the freshly computed iterate $x_{k+1}$.

In order to get a well-defined method, the following requirements are placed on the new operator $\mathcal{H}_{k+1}$ or $\mathcal{B}_{k+1}$ that is created by an update. Since the Hessian $\operatorname{Hess} f(x_{k+1})$ is a self-adjoint operator on the tangent space $T_{x_{k+1}} \mathcal{M}$, and $\mathcal{H}_{k+1}$ approximates it, one requirement is that $\mathcal{H}_{k+1}$ or $\mathcal{B}_{k+1}$ is also self-adjoint on $T_{x_{k+1}} \mathcal{M}$. In order to achieve a steady descent, the next requirement is that $η_k$ is a descent direction in each iteration; hence a further requirement is that $\mathcal{H}_{k+1}$ or $\mathcal{B}_{k+1}$ is a positive definite operator on $T_{x_{k+1}} \mathcal{M}$. In order to get information about the curvature of the objective function into the new operator $\mathcal{H}_{k+1}$ or $\mathcal{B}_{k+1}$, the last requirement is a form of a Riemannian quasi-Newton equation:

\[\mathcal{H}_{k+1} [T_{x_k \rightarrow x_{k+1}}({R_{x_k}}^{-1}(x_{k+1}))] = \operatorname{grad}f(x_{k+1}) - T_{x_k \rightarrow x_{k+1}}(\operatorname{grad}f(x_k))\]

or

\[\mathcal{B}_{k+1} [\operatorname{grad}f(x_{k+1}) - T_{x_k \rightarrow x_{k+1}}(\operatorname{grad}f(x_k))] = T_{x_k \rightarrow x_{k+1}}({R_{x_k}}^{-1}(x_{k+1}))\]

where $T_{x_k \rightarrow x_{k+1}} : T_{x_k} \mathcal{M} →T_{x_{k+1}} \mathcal{M}$ and the chosen retraction $R$ is the associated retraction of $T$. Note that, of course, not all updates in all situations meet these conditions in every iteration. For specific quasi-Newton updates, the fulfilment of the Riemannian curvature condition, which requires that

\[g_{x_{k+1}}(s_k, y_k) > 0\]

holds, is a requirement for the inheritance of the self-adjointness and positive definiteness of the $\mathcal{H}_k$ or $\mathcal{B}_k$ to the operator $\mathcal{H}_{k+1}$ or $\mathcal{B}_{k+1}$. Unfortunately, the fulfilment of the Riemannian curvature condition is not given by a step size $\alpha_k > 0$ that satisfies the generalized Wolfe conditions. However, to create a positive definite operator $\mathcal{H}_{k+1}$ or $\mathcal{B}_{k+1}$ in each iteration, the so-called locking condition was introduced in [HGA15], which requires that the isometric vector transport $T^S$, which is used in the update formula, and its associate retraction $R$ fulfil

\[T^{S}_{x, ξ_x}(ξ_x) = β T^{R}_{x, ξ_x}(ξ_x), \quad β = \frac{\lVert ξ_x \rVert_x}{\lVert T^{R}_{x, ξ_x}(ξ_x) \rVert_{R_{x}(ξ_x)}},\]

where $T^R$ is the vector transport by differentiated retraction. With the requirement that the isometric vector transport $T^S$ and its associated retraction $R$ satisfy the locking condition and using the tangent vector

\[y_k = {β_k}^{-1} \operatorname{grad}f(x_{k+1}) - T^{S}_{x_k, α_k η_k}(\operatorname{grad}f(x_k)),\]

where

\[β_k = \frac{\lVert α_k η_k \rVert_{x_k}}{\lVert T^{R}_{x_k, α_k η_k}(α_k η_k) \rVert_{x_{k+1}}},\]

in the update, it can be shown that choosing a stepsize $α_k > 0$ that satisfies the Riemannian Wolfe conditions leads to the fulfilment of the Riemannian curvature condition, which in turn implies that the operator generated by the updates is positive definite. In the following the specific operators are denoted in matrix notation and hence use $H_k$ and $B_k$, respectively.
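The quantities $s_k$, $y_k$ and the curvature condition above can be sketched concretely. This assumes Manifolds.jl is loaded; the cost, stepsize, and starting point are made up for illustration.

```julia
using Manifolds, LinearAlgebra

# Form s_k and y_k with a vector transport on the sphere and check the
# Riemannian curvature condition g_{x_{k+1}}(s_k, y_k) > 0.
M = Sphere(2)
A = Diagonal([3.0, 2.0, 1.0])
grad_f(M, p) = project(M, p, 2 * A * p)  # gradient of p ↦ pᵀAp
p = [1.0, 1.0, 1.0] / sqrt(3.0)
η = -grad_f(M, p)                        # search direction
α = 0.1                                  # some stepsize
p_next = retract(M, p, α * η)
s = vector_transport_to(M, p, α * η, p_next)
y = grad_f(M, p_next) - vector_transport_to(M, p, grad_f(M, p), p_next)
curvature_ok = inner(M, p_next, s, y) > 0
```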

Direction updates

There are different ways to compute a fixed AbstractQuasiNewtonUpdateRule. In general these are represented by

Manopt.QuasiNewtonMatrixDirectionUpdateType
QuasiNewtonMatrixDirectionUpdate <: AbstractQuasiNewtonDirectionUpdate

The QuasiNewtonMatrixDirectionUpdate represents a quasi-Newton update rule where the operator is stored as a matrix. A distinction is made between the update of the approximation of the Hessian, $H_k \mapsto H_{k+1}$, and the update of the approximation of the Hessian inverse, $B_k \mapsto B_{k+1}$. For the first case, the coordinates of the search direction $η_k$ with respect to a basis $\{b_i\}_{i=1}^{n}$ are determined by solving a linear system of equations

\[\text{Solve} \quad H_k \hat{η}_k = - \widehat{\operatorname{grad}f(x_k)},\]

where $H_k$ is the matrix representing the operator with respect to the basis $\{b_i\}_{i=1}^{n}$ and $\widehat{\operatorname{grad}f(x_k)}$ represents the coordinates of the gradient of the objective function $f$ in $x_k$ with respect to the basis $\{b_i\}_{i=1}^{n}$. If a method is chosen where the Hessian inverse is approximated, the coordinates of the search direction $η_k$ with respect to a basis $\{b_i\}_{i=1}^{n}$ are obtained simply by matrix-vector multiplication

\[\hat{η_k} = - B_k \widehat{\operatorname{grad}f(x_k)},\]

where $B_k$ is the matrix representing the operator with respect to the basis $\{b_i\}_{i=1}^{n}$ and $\widehat{\operatorname{grad}f(x_k)}$ is as above. In both variants, the search direction $η_k$ is in the end generated from the coordinates $\hat{η}_k$ and the vectors of the basis $\{b_i\}_{i=1}^{n}$. The AbstractQuasiNewtonUpdateRule indicates which quasi-Newton update rule is used. In all of them, the Euclidean update formula is used to generate the matrix $H_{k+1}$ or $B_{k+1}$, and the basis $\{b_i\}_{i=1}^{n}$ is transported into the upcoming tangent space $T_{p_{k+1}} \mathcal M$, preferably with an isometric vector transport, or generated there.
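The coordinate computation for the inverse-Hessian variant can be sketched as follows, assuming Manifolds.jl is loaded; the matrix $B$ and gradient vector are made up for illustration.

```julia
using Manifolds, LinearAlgebra

# Compute a matrix-based quasi-Newton direction in coordinates: η̂ = -B ĝ
# for an inverse-Hessian approximation B stored w.r.t. an orthonormal basis.
M = Sphere(2)
p = [0.0, 0.0, 1.0]
basis = DefaultOrthonormalBasis()
B = Matrix{Float64}(I, 2, 2)             # current approximation B_k
G = [0.3, -0.1, 0.0]                     # a gradient in T_p M
c_grad = get_coordinates(M, p, G, basis) # coordinates of the gradient
c_dir = -B * c_grad                      # matrix-vector multiplication
η = get_vector(M, p, c_dir, basis)       # back to a tangent vector in T_p M
```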

Provided functors

  • (mp::AbstractManoptProblem, st::QuasiNewtonState) -> η to compute the update direction
  • (η, mp::AbstractManoptProblem, st::QuasiNewtonState) -> η to compute the update direction in-place of η

Fields

  • basis: an AbstractBasis to use in the tangent spaces
  • matrix: the matrix which represents the approximating operator.
  • initial_scale: when initialising the update, a unit matrix is used as initial approximation, scaled by this factor
  • update: an AbstractQuasiNewtonUpdateRule.
  • vector_transport_method::AbstractVectorTransportMethod: a vector transport $\mathcal T_{⋅←⋅}$ to use, see the section on vector transports

Constructor

QuasiNewtonMatrixDirectionUpdate(
+quasi_Newton!(M, f, grad_f, p; kwargs...)

Perform a quasi Newton iteration to solve

\[\operatorname{arg\,min}_{p ∈ \mathcal M} f(p)\]

with start point p. The iterations can be done in-place of p$=p^{(0)}$. The $k$th iteration consists of

  1. Compute the search direction $η^{(k)} = -\mathcal B_k [\operatorname{grad}f (p^{(k)})]$ or solve $\mathcal H_k [η^{(k)}] = -\operatorname{grad}f (p^{(k)})$.
  2. Determine a suitable stepsize $α_k$ along the curve $γ(α) = R_{p^{(k)}}(α η^{(k)})$, usually by using WolfePowellLinesearch.
  3. Compute $p^{(k+1)} = R_{p^{(k)}}(α_k η^{(k)})$.
  4. Define $s_k = \mathcal T_{p^{(k)}, α_k η^{(k)}}(α_k η^{(k)})$ and $y_k = \operatorname{grad}f(p^{(k+1)}) - \mathcal T_{p^{(k)}, α_k η^{(k)}}(\operatorname{grad}f(p^{(k)}))$, where $\mathcal T$ denotes a vector transport.
  5. Compute the new approximate Hessian $H_{k+1}$ or its inverse $B_{k+1}$.
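In the Euclidean special case $\mathcal M = ℝ^n$, where the retraction is vector addition and the vector transport is the identity, one iteration of these five steps can be sketched as follows; the quadratic cost, the fixed step size replacing the Wolfe line search, and all values are illustrative only:

```julia
using LinearAlgebra

# minimise f(p) = ½ pᵀA p on ℝ², so grad f(p) = A p
A = [4.0 1.0; 1.0 3.0]
grad_f(p) = A * p

p = [1.0, 2.0]
B = Matrix(1.0I, 2, 2)        # initial inverse-Hessian approximation B₀

# 1. search direction η = -B[grad f(p)]
η = -B * grad_f(p)
# 2. a step size α (in practice determined by a Wolfe line search)
α = 0.2
# 3. "retract": on ℝⁿ this is plain addition
p_new = p + α * η
# 4. s and y; the vector transport is the identity here
s = α * η
y = grad_f(p_new) - grad_f(p)
# 5. inverse BFGS update of B
ρ = 1 / dot(y, s)
B = (I - ρ * s * y') * B * (I - ρ * y * s') + ρ * s * s'
```

The update in step 5 makes the new B satisfy the Euclidean secant equation B y = s.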

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • f: a cost function $f: \mathcal M→ ℝ$ implemented as (M, p) -> v
  • grad_f: the (Riemannian) gradient $\operatorname{grad}f: \mathcal M → T_{p}\mathcal M$ of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place
  • p: a point on the manifold $\mathcal M$

Keyword arguments

  • basis=DefaultOrthonormalBasis(): basis to use within each of the tangent spaces to represent the Hessian (inverse) for the cases where it is stored in full (matrix) form.
  • cautious_update=false: whether or not to use the QuasiNewtonCautiousDirectionUpdate which wraps the direction_update.
  • cautious_function=(x) -> x * 1e-4: a monotone increasing function for the cautious update that is zero at $x=0$ and strictly increasing at $0$
  • direction_update=InverseBFGS(): the AbstractQuasiNewtonUpdateRule to use.
  • evaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second. For example grad_f(M, p) allocates, but grad_f!(M, X, p) computes the result in-place of X.
  • initial_operator=initial_scale*Matrix{Float64}(I, n, n): initial matrix to use in case the Hessian (inverse) approximation is stored as a full matrix, where n=manifold_dimension(M). This matrix is only allocated for the full matrix case. See also initial_scale.
  • initial_scale=1.0: scale $s$ to use in the initial scaling $\frac{s⟨s_k,y_k⟩_{p_k}}{\lVert y_k\rVert_{p_k}}$ within the limited memory approach. See also initial_operator.
  • memory_size=20: limited memory, number of $s_k, y_k$ to store. Set to a negative value to use a full memory (matrix) representation
  • nondescent_direction_behavior=:reinitialize_direction_update: specify how non-descent direction is handled. This can be
    • :step_towards_negative_gradient: the direction is replaced with negative gradient, a message is stored.
    • :ignore: the verification is not performed, so any computed direction is accepted. No message is stored.
    • :reinitialize_direction_update: discards operator state stored in direction update rules.
    • any other value performs the verification, keeps the direction but stores a message.
    A stored message can be displayed using DebugMessages.
  • project!=copyto!: for numerical stability it is possible to project onto the tangent space after every iteration. The function has to work in-place of Y, that is (M, Y, p, X) -> Y, where X and Y can be the same memory.
  • retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions
  • stepsize=WolfePowellLinesearch(retraction_method, vector_transport_method): a functor inheriting from Stepsize to determine a step size
  • stopping_criterion=StopAfterIteration(max(1000, memory_size))|StopWhenGradientNormLess(1e-6): a functor indicating that the stopping criterion is fulfilled
  • vector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport $\mathcal T_{⋅←⋅}$ to use, see the section on vector transports

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source
Manopt.quasi_Newton!Function
quasi_Newton(M, f, grad_f, p; kwargs...)
quasi_Newton!(M, f, grad_f, p; kwargs...)

Perform a quasi Newton iteration to solve

\[\operatorname{arg\,min}_{p ∈ \mathcal M} f(p)\]

with start point p. The iterations can be done in-place of $p=p^{(0)}$. The $k$th iteration consists of

  1. Compute the search direction $η^{(k)} = -\mathcal B_k [\operatorname{grad}f (p^{(k)})]$ or solve $\mathcal H_k [η^{(k)}] = -\operatorname{grad}f (p^{(k)})$.
  2. Determine a suitable stepsize $α_k$ along the curve $γ(α) = R_{p^{(k)}}(α η^{(k)})$, usually by using WolfePowellLinesearch.
  3. Compute $p^{(k+1)} = R_{p^{(k)}}(α_k η^{(k)})$.
  4. Define $s_k = \mathcal T_{p^{(k)}, α_k η^{(k)}}(α_k η^{(k)})$ and $y_k = \operatorname{grad}f(p^{(k+1)}) - \mathcal T_{p^{(k)}, α_k η^{(k)}}(\operatorname{grad}f(p^{(k)}))$, where $\mathcal T$ denotes a vector transport.
  5. Compute the new approximate Hessian $H_{k+1}$ or its inverse $B_{k+1}$.

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • f: a cost function $f: \mathcal M→ ℝ$ implemented as (M, p) -> v
  • grad_f: the (Riemannian) gradient $\operatorname{grad}f: \mathcal M → T_{p}\mathcal M$ of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place
  • p: a point on the manifold $\mathcal M$

Keyword arguments

  • basis=DefaultOrthonormalBasis(): basis to use within each of the tangent spaces to represent the Hessian (inverse) for the cases where it is stored in full (matrix) form.
  • cautious_update=false: whether or not to use the QuasiNewtonCautiousDirectionUpdate which wraps the direction_update.
  • cautious_function=(x) -> x * 1e-4: a monotone increasing function for the cautious update that is zero at $x=0$ and strictly increasing at $0$
  • direction_update=InverseBFGS(): the AbstractQuasiNewtonUpdateRule to use.
  • evaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second. For example grad_f(M, p) allocates, but grad_f!(M, X, p) computes the result in-place of X.
  • initial_operator=initial_scale*Matrix{Float64}(I, n, n): initial matrix to use in case the Hessian (inverse) approximation is stored as a full matrix, where n=manifold_dimension(M). This matrix is only allocated for the full matrix case. See also initial_scale.
  • initial_scale=1.0: scale $s$ to use in the initial scaling $\frac{s⟨s_k,y_k⟩_{p_k}}{\lVert y_k\rVert_{p_k}}$ within the limited memory approach. See also initial_operator.
  • memory_size=20: limited memory, number of $s_k, y_k$ to store. Set to a negative value to use a full memory (matrix) representation
  • nondescent_direction_behavior=:reinitialize_direction_update: specify how non-descent direction is handled. This can be
    • :step_towards_negative_gradient: the direction is replaced with negative gradient, a message is stored.
    • :ignore: the verification is not performed, so any computed direction is accepted. No message is stored.
    • :reinitialize_direction_update: discards operator state stored in direction update rules.
    • any other value performs the verification, keeps the direction but stores a message.
    A stored message can be displayed using DebugMessages.
  • project!=copyto!: for numerical stability it is possible to project onto the tangent space after every iteration. The function has to work in-place of Y, that is (M, Y, p, X) -> Y, where X and Y can be the same memory.
  • retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions
  • stepsize=WolfePowellLinesearch(retraction_method, vector_transport_method): a functor inheriting from Stepsize to determine a step size
  • stopping_criterion=StopAfterIteration(max(1000, memory_size))|StopWhenGradientNormLess(1e-6): a functor indicating that the stopping criterion is fulfilled
  • vector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport $\mathcal T_{⋅←⋅}$ to use, see the section on vector transports

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source

Background

The aim is to minimize a real-valued function on a Riemannian manifold, that is

\[\min f(x), \quad x ∈ \mathcal{M}.\]

Riemannian quasi-Newton methods are, like their Euclidean counterparts, line search methods: they determine a search direction $η_k ∈ T_{x_k} \mathcal{M}$ at the current iterate $x_k$ and a suitable stepsize $α_k$ along $\gamma(α) = R_{x_k}(α η_k)$, where $R: T \mathcal{M} →\mathcal{M}$ is a retraction. The next iterate is obtained by

\[x_{k+1} = R_{x_k}(α_k η_k).\]

In quasi-Newton methods, the search direction is given by

\[η_k = -{\mathcal{H}_k}^{-1}[\operatorname{grad}f (x_k)] = -\mathcal{B}_k [\operatorname{grad}f (x_k)],\]

where $\mathcal{H}_k : T_{x_k} \mathcal{M} →T_{x_k} \mathcal{M}$ is a positive definite self-adjoint operator, which approximates the action of the Hessian $\operatorname{Hess} f (x_k)[⋅]$, and $\mathcal{B}_k = {\mathcal{H}_k}^{-1}$. The idea of quasi-Newton methods is that, instead of creating a completely new approximation of the Hessian operator $\operatorname{Hess} f(x_{k+1})$ or its inverse at every iteration, the previous operator $\mathcal{H}_k$ or $\mathcal{B}_k$ is updated by a convenient formula using the information about the curvature of the objective function obtained during the iteration. The resulting operator $\mathcal{H}_{k+1}$ or $\mathcal{B}_{k+1}$ acts on the tangent space $T_{x_{k+1}} \mathcal{M}$ of the freshly computed iterate $x_{k+1}$. In order to get a well-defined method, the following requirements are placed on the new operator $\mathcal{H}_{k+1}$ or $\mathcal{B}_{k+1}$ that is created by an update. Since the Hessian $\operatorname{Hess} f(x_{k+1})$ is a self-adjoint operator on the tangent space $T_{x_{k+1}} \mathcal{M}$, and $\mathcal{H}_{k+1}$ approximates it, one requirement is that $\mathcal{H}_{k+1}$ or $\mathcal{B}_{k+1}$ is also self-adjoint on $T_{x_{k+1}} \mathcal{M}$. In order to achieve a steady descent, the next requirement is that $η_k$ is a descent direction in each iteration. Hence a further requirement is that $\mathcal{H}_{k+1}$ or $\mathcal{B}_{k+1}$ is a positive definite operator on $T_{x_{k+1}} \mathcal{M}$. In order to get information about the curvature of the objective function into the new operator $\mathcal{H}_{k+1}$ or $\mathcal{B}_{k+1}$, the last requirement is a form of a Riemannian quasi-Newton equation:

\[\mathcal{H}_{k+1} [T_{x_k \rightarrow x_{k+1}}({R_{x_k}}^{-1}(x_{k+1}))] = \operatorname{grad}f(x_{k+1}) - T_{x_k \rightarrow x_{k+1}}(\operatorname{grad}f(x_k))\]

or

\[\mathcal{B}_{k+1} [\operatorname{grad}f(x_{k+1}) - T_{x_k \rightarrow x_{k+1}}(\operatorname{grad}f(x_k))] = T_{x_k \rightarrow x_{k+1}}({R_{x_k}}^{-1}(x_{k+1}))\]

where $T_{x_k \rightarrow x_{k+1}} : T_{x_k} \mathcal{M} →T_{x_{k+1}} \mathcal{M}$ and the chosen retraction $R$ is the associated retraction of $T$. Note that, of course, not all updates in all situations meet these conditions in every iteration. For specific quasi-Newton updates, the fulfilment of the Riemannian curvature condition, which requires that

\[g_{x_{k+1}}(s_k, y_k) > 0\]

holds, is a requirement for the inheritance of the self-adjointness and positive definiteness of $\mathcal{H}_k$ or $\mathcal{B}_k$ to the operator $\mathcal{H}_{k+1}$ or $\mathcal{B}_{k+1}$. Unfortunately, the fulfilment of the Riemannian curvature condition is not guaranteed by a step size $\alpha_k > 0$ that satisfies the generalized Wolfe conditions. However, to create a positive definite operator $\mathcal{H}_{k+1}$ or $\mathcal{B}_{k+1}$ in each iteration, the so-called locking condition was introduced in [HGA15], which requires that the isometric vector transport $T^S$, which is used in the update formula, and its associated retraction $R$ fulfil

\[T^{S}_{x, ξ_x}(ξ_x) = β T^{R}_{x, ξ_x}(ξ_x), \quad β = \frac{\lVert ξ_x \rVert_x}{\lVert T^{R}_{x, ξ_x}(ξ_x) \rVert_{R_{x}(ξ_x)}},\]

where $T^R$ is the vector transport by differentiated retraction. With the requirement that the isometric vector transport $T^S$ and its associated retraction $R$ satisfy the locking condition, and using the tangent vector

\[y_k = {β_k}^{-1} \operatorname{grad}f(x_{k+1}) - T^{S}_{x_k, α_k η_k}(\operatorname{grad}f(x_k)),\]

where

\[β_k = \frac{\lVert α_k η_k \rVert_{x_k}}{\lVert T^{R}_{x_k, α_k η_k}(α_k η_k) \rVert_{x_{k+1}}},\]

in the update, it can be shown that choosing a stepsize $α_k > 0$ that satisfies the Riemannian Wolfe conditions leads to the fulfilment of the Riemannian curvature condition, which in turn implies that the operator generated by the updates is positive definite. In the following the specific operators are denoted in matrix notation and hence use $H_k$ and $B_k$, respectively.
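For intuition on the curvature condition: in the Euclidean case with a strongly convex quadratic $f(x) = \frac{1}{2}x^{\mathrm{T}}Ax$ one has $y_k = A s_k$, hence $g(s_k, y_k) = s_k^{\mathrm{T}} A s_k > 0$ for every nonzero step, and one inverse BFGS update then stays symmetric positive definite. A small numerical check of this claim; the matrix and vectors are arbitrary illustrative values:

```julia
using LinearAlgebra

A = [3.0 1.0; 1.0 2.0]          # SPD, f(x) = ½ xᵀA x, grad f(x) = A x
x = [1.0, -1.0]
s = [0.3, 0.5]                  # some step
y = A * (x + s) - A * x         # gradient difference, equals A s

curvature = dot(s, y)           # the curvature condition asks for > 0

# one inverse BFGS update of B = id remains symmetric positive definite
ρ = 1 / curvature
B = (I - ρ * s * y') * Matrix(1.0I, 2, 2) * (I - ρ * y * s') + ρ * s * s'
```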

Direction updates

There are different ways to compute a fixed AbstractQuasiNewtonUpdateRule. In general these are represented by

Manopt.QuasiNewtonMatrixDirectionUpdateType
QuasiNewtonMatrixDirectionUpdate <: AbstractQuasiNewtonDirectionUpdate

The QuasiNewtonMatrixDirectionUpdate represent a quasi-Newton update rule, where the operator is stored as a matrix. A distinction is made between the update of the approximation of the Hessian, $H_k \mapsto H_{k+1}$, and the update of the approximation of the Hessian inverse, $B_k \mapsto B_{k+1}$. For the first case, the coordinates of the search direction $η_k$ with respect to a basis $\{b_i\}_{i=1}^{n}$ are determined by solving a linear system of equations

\[\text{Solve} \quad H_k \hat{η_k} = - \widehat{\operatorname{grad}f(x_k)},\]

where $H_k$ is the matrix representing the operator with respect to the basis $\{b_i\}_{i=1}^{n}$ and $\widehat{\operatorname{grad}f(x_k)}$ represents the coordinates of the gradient of the objective function $f$ at $x_k$ with respect to the basis $\{b_i\}_{i=1}^{n}$. If a method is chosen where the Hessian inverse is approximated, the coordinates of the search direction $η_k$ with respect to a basis $\{b_i\}_{i=1}^{n}$ are obtained simply by matrix-vector multiplication

\[\hat{η_k} = - B_k \widehat{\operatorname{grad}f(x_k)},\]

where $B_k$ is the matrix representing the operator with respect to the basis $\{b_i\}_{i=1}^{n}$ and $\widehat{\operatorname{grad}f(x_k)}$ is as above. In both variants, the search direction $η_k$ is then generated from the coordinates $\hat{η_k}$ and the vectors of the basis $\{b_i\}_{i=1}^{n}$. The AbstractQuasiNewtonUpdateRule indicates which quasi-Newton update rule is used. In all of them, the Euclidean update formula is used to generate the matrices $H_{k+1}$ and $B_{k+1}$, and the basis $\{b_i\}_{i=1}^{n}$ is transported into the upcoming tangent space $T_{p_{k+1}} \mathcal M$, preferably with an isometric vector transport, or generated there.
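In Euclidean coordinates (assuming an orthonormal basis, so that $H_k$ and $B_k = H_k^{-1}$ are plain matrices with illustrative values) the two variants can be sketched as:

```julia
using LinearAlgebra

H = [2.0 0.5; 0.5 1.0]      # Hessian approximation Hₖ (SPD)
g = [1.0, -1.0]             # coordinates of grad f(xₖ)

η_solve = -(H \ g)          # Hessian variant: solve the linear system
B = inv(H)                  # inverse-Hessian approximation Bₖ
η_mult = -B * g             # inverse variant: plain matrix-vector product
```

Both variants produce the same direction; the inverse variant trades the linear solve for maintaining $B_k$ directly.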

Provided functors

  • (mp::AbstractManoptproblem, st::QuasiNewtonState) -> η to compute the update direction
  • (η, mp::AbstractManoptproblem, st::QuasiNewtonState) -> η to compute the update direction in-place of η

Fields

  • basis: an AbstractBasis to use in the tangent spaces
  • matrix: the matrix which represents the approximating operator.
  • initial_scale: when initialising the update, a unit matrix is used as initial approximation, scaled by this factor
  • update: a AbstractQuasiNewtonUpdateRule.
  • vector_transport_method::AbstractVectorTransportMethodP: a vector transport $\mathcal T_{⋅←⋅}$ to use, see the section on vector transports

Constructor

QuasiNewtonMatrixDirectionUpdate(
     M::AbstractManifold,
     update,
     basis::B=DefaultOrthonormalBasis(),
     m=Matrix{Float64}(I, manifold_dimension(M), manifold_dimension(M));
     kwargs...
)

Generate the update rule with defaults from a manifold; the keyword arguments correspond to the field names above.

See also

QuasiNewtonLimitedMemoryDirectionUpdate, QuasiNewtonCautiousDirectionUpdate, AbstractQuasiNewtonDirectionUpdate,

source
Manopt.QuasiNewtonLimitedMemoryDirectionUpdateType
QuasiNewtonLimitedMemoryDirectionUpdate <: AbstractQuasiNewtonDirectionUpdate

This AbstractQuasiNewtonDirectionUpdate represents the limited-memory Riemannian BFGS update, where the approximating operator is represented by $m$ stored pairs of tangent vectors $\{\widehat{s}_i\}_{i=k-m}^{k-1}$ and $\{\widehat{y}_i\}_{i=k-m}^{k-1}$ in the $k$-th iteration. For the calculation of the search direction $X_k$, the generalisation of the two-loop recursion is used (see [HuangGallivanAbsil:2015](@cite)), since it only requires inner products and linear combinations of tangent vectors in $T_{p_k}\mathcal M$. For that, the stored pairs of tangent vectors $\widehat{s}_i, \widehat{y}_i$, the gradient $\operatorname{grad} f(p_k)$ of the objective function $f$ in $p_k$ and the positive definite self-adjoint operator

\[\mathcal{B}^{(0)}_k[⋅] = \frac{g_{p_k}(s_{k-1}, y_{k-1})}{g_{p_k}(y_{k-1}, y_{k-1})} \; \mathrm{id}_{T_{p_k} \mathcal{M}}[⋅]\]

are used. The two-loop recursion can be understood as that the InverseBFGS update is executed $m$ times in a row on $\mathcal B^{(0)}_k[⋅]$ using the tangent vectors $\widehat{s}_i,\widehat{y}_i$, and in the same time the resulting operator $\mathcal B^{LRBFGS}_k [⋅]$ is directly applied on $\operatorname{grad}f(x_k)$. When updating there are two cases: if there is still free memory, $k < m$, the previously stored vector pairs $\widehat{s}_i,\widehat{y}_i$ have to be transported into the upcoming tangent space $T_{p_{k+1}}\mathcal M$. If there is no free memory, the oldest pair $\widehat{s}_i,\widehat{y}_i$ has to be discarded and then all the remaining vector pairs $\widehat{s}_i,\widehat{y}_i$ are transported into the tangent space $T_{p_{k+1}}\mathcal M$. After that the new values $s_k = \widehat{s}_k = T^{S}_{x_k, α_k η_k}(α_k η_k)$ and $y_k = \widehat{y}_k$ are stored at the beginning. This process ensures that new information about the objective function is always included and the old, probably no longer relevant, information is discarded.
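A Euclidean sketch of the two-loop recursion (identity vector transport, pairs stored oldest first; the stored vectors are illustrative): the result agrees with explicitly accumulating $m$ inverse BFGS updates on $\mathcal B^{(0)}_k = γ\,\mathrm{id}$.

```julia
using LinearAlgebra

# apply the limited-memory inverse-Hessian approximation to g using the
# m stored pairs (sᵢ, yᵢ), oldest first, with B⁽⁰⁾ = γ·id and
# γ = ⟨s_{k-1}, y_{k-1}⟩ / ⟨y_{k-1}, y_{k-1}⟩
function two_loop(g, S, Y)
    m = length(S)
    ρ = [1 / dot(Y[i], S[i]) for i in 1:m]
    α = zeros(m)
    q = copy(g)
    for i in m:-1:1                 # first loop: newest to oldest
        α[i] = ρ[i] * dot(S[i], q)
        q -= α[i] * Y[i]
    end
    r = (dot(S[m], Y[m]) / dot(Y[m], Y[m])) * q
    for i in 1:m                    # second loop: oldest to newest
        β = ρ[i] * dot(Y[i], r)
        r += (α[i] - β) * S[i]
    end
    return r                        # ≈ Bₖ g; the search direction is -r
end

S = [[1.0, 0.0, 0.2], [0.1, 1.0, 0.0]]
Y = [[2.0, 0.1, 0.3], [0.2, 1.5, 0.1]]
g = [1.0, 2.0, 3.0]
r = two_loop(g, S, Y)
```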

Provided functors

  • (mp::AbstractManoptproblem, st::QuasiNewtonState) -> η to compute the update direction
  • (η, mp::AbstractManoptproblem, st::QuasiNewtonState) -> η to compute the update direction in-place of η

Fields

  • memory_s; the set of the stored (and transported) search directions times step size $\{\widehat{s}_i\}_{i=k-m}^{k-1}$.
  • memory_y: set of the stored gradient differences $\{\widehat{y}_i\}_{i=k-m}^{k-1}$.
  • ξ: a variable used in the two-loop recursion.
  • ρ; a variable used in the two-loop recursion.
  • initial_scale: initial scaling of the Hessian
  • vector_transport_method::AbstractVectorTransportMethodP: a vector transport $\mathcal T_{⋅←⋅}$ to use, see the section on vector transports
  • message: a string containing a potential warning that might have appeared
  • project!: a function to stabilize the update by projecting on the tangent space

Constructor

QuasiNewtonLimitedMemoryDirectionUpdate(
     M::AbstractManifold,
     x,
     update::AbstractQuasiNewtonUpdateRule,
     memory_size;
     initial_vector=zero_vector(M,x),
     initial_scale::Real=1.0
     project!=copyto!
)

See also

InverseBFGS QuasiNewtonCautiousDirectionUpdate AbstractQuasiNewtonDirectionUpdate

source
Manopt.QuasiNewtonCautiousDirectionUpdateType
QuasiNewtonCautiousDirectionUpdate <: AbstractQuasiNewtonDirectionUpdate

These AbstractQuasiNewtonDirectionUpdates represent any quasi-Newton update rule that is based on the idea of a so-called cautious update. The search direction is calculated as given in QuasiNewtonMatrixDirectionUpdate or QuasiNewtonLimitedMemoryDirectionUpdate, but the update is then only executed if

\[\frac{g_{x_{k+1}}(y_k,s_k)}{\lVert s_k \rVert^{2}_{x_{k+1}}} ≥ θ(\lVert \operatorname{grad}f(x_k) \rVert_{x_k}),\]

is satisfied, where $θ$ is a monotone increasing function satisfying $θ(0) = 0$ and $θ$ is strictly increasing at $0$. If this is not the case, the corresponding update is skipped, which means that for QuasiNewtonMatrixDirectionUpdate the matrix $H_k$ or $B_k$ is not updated. The basis $\{b_i\}^{n}_{i=1}$ is nevertheless transported into the upcoming tangent space $T_{x_{k+1}} \mathcal{M}$, and for QuasiNewtonLimitedMemoryDirectionUpdate neither the oldest vector pair $\{ \widetilde{s}_{k−m}, \widetilde{y}_{k−m}\}$ is discarded nor the newest vector pair $\{ \widetilde{s}_{k}, \widetilde{y}_{k}\}$ is added into storage, but all stored vector pairs $\{ \widetilde{s}_i, \widetilde{y}_i\}_{i=k-m}^{k-1}$ are transported into the tangent space $T_{x_{k+1}} \mathcal{M}$. If BFGS or InverseBFGS is chosen as update, then the resulting method follows the method of [HAG18], taking into account that the corresponding step size is chosen.
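The cautious criterion itself is cheap to check. A Euclidean sketch with the default-style bound $θ(x) = 10^{-4}x$; all vectors are illustrative values:

```julia
using LinearAlgebra

θ(x) = 1e-4 * x              # monotone increasing, θ(0) = 0

s = [0.5, -0.2]              # (transported) step sₖ
y = [0.4, -0.1]              # gradient difference yₖ
grad_norm = 2.0              # ‖grad f(xₖ)‖

do_update = dot(y, s) / dot(s, s) ≥ θ(grad_norm)
```

Only when `do_update` is `true` is the quasi-Newton operator updated; otherwise the stored data is merely transported.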

Provided functors

  • (mp::AbstractManoptproblem, st::QuasiNewtonState) -> η to compute the update direction
  • (η, mp::AbstractManoptproblem, st::QuasiNewtonState) -> η to compute the update direction in-place of η

Fields

Constructor

QuasiNewtonCautiousDirectionUpdate(U::QuasiNewtonMatrixDirectionUpdate; θ = identity)
QuasiNewtonCautiousDirectionUpdate(U::QuasiNewtonLimitedMemoryDirectionUpdate; θ = identity)

Generate a cautious update for either a matrix based or a limited memory based update rule.

See also

QuasiNewtonMatrixDirectionUpdate QuasiNewtonLimitedMemoryDirectionUpdate

source
Manopt.initialize_update!Function
initialize_update!(s::AbstractQuasiNewtonDirectionUpdate)

Initialize direction update. By default no change is made.

source
initialize_update!(d::QuasiNewtonLimitedMemoryDirectionUpdate)

Initialize the limited memory direction update by emptying the memory buffers.

source

Hessian update rules

The following update formulae for either $H_{k+1}$ or $B_{k+1}$ are available.

Manopt.BFGSType
BFGS <: AbstractQuasiNewtonUpdateRule

indicates in AbstractQuasiNewtonDirectionUpdate that the Riemannian BFGS update is used in the Riemannian quasi-Newton method.

Denote by $\widetilde{H}_k^\mathrm{BFGS}$ the operator concatenated with a vector transport and its inverse before and after to act on $x_{k+1} = R_{x_k}(α_k η_k)$. Then the update formula reads

\[H^\mathrm{BFGS}_{k+1} = \widetilde{H}^\mathrm{BFGS}_k + \frac{y_k y^{\mathrm{T}}_k }{s^{\mathrm{T}}_k y_k} - \frac{\widetilde{H}^\mathrm{BFGS}_k s_k s^{\mathrm{T}}_k \widetilde{H}^\mathrm{BFGS}_k }{s^{\mathrm{T}}_k \widetilde{H}^\mathrm{BFGS}_k s_k}\]

where $s_k$ and $y_k$ are the coordinate vectors with respect to the current basis (from QuasiNewtonState) of

\[T^{S}_{x_k, α_k η_k}(α_k η_k) \quad\text{and}\quad \operatorname{grad}f(x_{k+1}) - T^{S}_{x_k, α_k η_k}(\operatorname{grad}f(x_k)) ∈ T_{x_{k+1}} \mathcal{M},\]

respectively.
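In the Euclidean case this update can be checked directly: it satisfies the secant equation $H^\mathrm{BFGS}_{k+1} s_k = y_k$ and preserves symmetry. A sketch with illustrative coordinate vectors:

```julia
using LinearAlgebra

H = [2.0 0.3; 0.3 1.5]       # transported previous approximation H̃ₖ
s = [0.4, 0.1]
y = [0.9, 0.2]               # chosen so that ⟨s, y⟩ > 0

H_new = H + (y * y') / dot(s, y) - (H * s * s' * H) / dot(s, H * s)
```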

source
Manopt.DFPType
DFP <: AbstractQuasiNewtonUpdateRule

indicates in an AbstractQuasiNewtonDirectionUpdate that the Riemannian DFP update is used in the Riemannian quasi-Newton method.

Denote by $\widetilde{H}_k^\mathrm{DFP}$ the operator concatenated with a vector transport and its inverse before and after to act on $x_{k+1} = R_{x_k}(α_k η_k)$. Then the update formula reads

\[H^\mathrm{DFP}_{k+1} = \Bigl( \mathrm{id}_{T_{x_{k+1}} \mathcal{M}} - \frac{y_k s^{\mathrm{T}}_k}{s^{\mathrm{T}}_k y_k} \Bigr) \widetilde{H}^\mathrm{DFP}_k \Bigl( \mathrm{id}_{T_{x_{k+1}} \mathcal{M}} - \frac{s_k y^{\mathrm{T}}_k}{s^{\mathrm{T}}_k y_k} \Bigr) + \frac{y_k y^{\mathrm{T}}_k}{s^{\mathrm{T}}_k y_k}\]

where $s_k$ and $y_k$ are the coordinate vectors with respect to the current basis (from QuasiNewtonState) of

\[T^{S}_{x_k, α_k η_k}(α_k η_k) \quad\text{and}\quad \operatorname{grad}f(x_{k+1}) - T^{S}_{x_k, α_k η_k}(\operatorname{grad}f(x_k)) ∈ T_{x_{k+1}} \mathcal{M},\]

respectively.
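As with BFGS, the DFP formula satisfies the secant equation $H^\mathrm{DFP}_{k+1} s_k = y_k$ in the Euclidean case, since the right factor annihilates $s_k$. A sketch with illustrative values:

```julia
using LinearAlgebra

H = [2.0 0.3; 0.3 1.5]       # transported previous approximation H̃ₖ
s = [0.4, 0.1]
y = [0.9, 0.2]
ρ = 1 / dot(s, y)

H_new = (I - ρ * y * s') * H * (I - ρ * s * y') + ρ * y * y'
```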

source
Manopt.BroydenType
Broyden <: AbstractQuasiNewtonUpdateRule

indicates in AbstractQuasiNewtonDirectionUpdate that the Riemannian Broyden update is used in the Riemannian quasi-Newton method, which is a convex combination of BFGS and DFP.

Denote by $\widetilde{H}_k^\mathrm{Br}$ the operator concatenated with a vector transport and its inverse before and after to act on $x_{k+1} = R_{x_k}(α_k η_k)$. Then the update formula reads

\[H^\mathrm{Br}_{k+1} = \widetilde{H}^\mathrm{Br}_k - \frac{\widetilde{H}^\mathrm{Br}_k s_k s^{\mathrm{T}}_k \widetilde{H}^\mathrm{Br}_k}{s^{\mathrm{T}}_k \widetilde{H}^\mathrm{Br}_k s_k} + \frac{y_k y^{\mathrm{T}}_k}{s^{\mathrm{T}}_k y_k} + φ_k s^{\mathrm{T}}_k \widetilde{H}^\mathrm{Br}_k s_k \Bigl( \frac{y_k}{s^{\mathrm{T}}_k y_k} - \frac{\widetilde{H}^\mathrm{Br}_k s_k}{s^{\mathrm{T}}_k \widetilde{H}^\mathrm{Br}_k s_k} \Bigr) \Bigl( \frac{y_k}{s^{\mathrm{T}}_k y_k} - \frac{\widetilde{H}^\mathrm{Br}_k s_k}{s^{\mathrm{T}}_k \widetilde{H}^\mathrm{Br}_k s_k} \Bigr)^{\mathrm{T}}\]

where $s_k$ and $y_k$ are the coordinate vectors with respect to the current basis (from QuasiNewtonState) of

\[T^{S}_{x_k, α_k η_k}(α_k η_k) \quad\text{and}\quad \operatorname{grad}f(x_{k+1}) - T^{S}_{x_k, α_k η_k}(\operatorname{grad}f(x_k)) ∈ T_{x_{k+1}} \mathcal{M},\]

respectively, and $φ_k$ is the Broyden factor which is :constant by default but can also be set to :Davidon.

Constructor

Broyden(φ, update_rule::Symbol = :constant)
source
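Since the Broyden class interpolates between BFGS and DFP, one can verify numerically that $φ_k = 0$ reproduces the BFGS update, $φ_k = 1$ the DFP update, and that the secant equation holds for every $φ_k$. A Euclidean coordinate-level sketch in NumPy (illustrative names, not the Manopt.jl API):

```python
import numpy as np

def broyden_update(H, s, y, phi):
    """Coordinate-level Broyden-class update: phi = 0 gives BFGS,
    phi = 1 gives DFP (Euclidean sketch)."""
    rho, Hs = s @ y, H @ s
    sHs = s @ Hs
    v = y / rho - Hs / sHs
    return (H - np.outer(Hs, Hs) / sHs + np.outer(y, y) / rho
            + phi * sHs * np.outer(v, v))

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))
H = A @ A.T + n * np.eye(n)
s = rng.standard_normal(n)
y = H @ s + 0.1 * rng.standard_normal(n)
rho, I = s @ y, np.eye(n)

# phi = 1 matches the DFP formula written out explicitly
dfp = (I - np.outer(y, s) / rho) @ H @ (I - np.outer(s, y) / rho) \
    + np.outer(y, y) / rho
assert np.allclose(broyden_update(H, s, y, 1.0), dfp)

# every member of the family satisfies the secant equation
for phi in (0.0, 0.5, 1.0):
    assert np.allclose(broyden_update(H, s, y, phi) @ s, y)
```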
Manopt.SR1Type
SR1 <: AbstractQuasiNewtonUpdateRule

indicates in AbstractQuasiNewtonDirectionUpdate that the Riemannian SR1 update is used in the Riemannian quasi-Newton method.

Denote by $\widetilde{H}_k^\mathrm{SR1}$ the operator concatenated with a vector transport and its inverse before and after to act on $x_{k+1} = R_{x_k}(α_k η_k)$. Then the update formula reads

\[H^\mathrm{SR1}_{k+1} = \widetilde{H}^\mathrm{SR1}_k + \frac{ (y_k - \widetilde{H}^\mathrm{SR1}_k s_k) (y_k - \widetilde{H}^\mathrm{SR1}_k s_k)^{\mathrm{T}} }{ (y_k - \widetilde{H}^\mathrm{SR1}_k s_k)^{\mathrm{T}} s_k }\]

where $s_k$ and $y_k$ are the coordinate vectors with respect to the current basis (from QuasiNewtonState) of

\[T^{S}_{x_k, α_k η_k}(α_k η_k) \quad\text{and}\quad \operatorname{grad}f(x_{k+1}) - T^{S}_{x_k, α_k η_k}(\operatorname{grad}f(x_k)) ∈ T_{x_{k+1}} \mathcal{M},\]

respectively.

This method can be stabilized by only performing the update if the denominator is larger than $r\lVert s_k\rVert_{x_{k+1}}\lVert y_k - \widetilde{H}^\mathrm{SR1}_k s_k \rVert_{x_{k+1}}$ for some $r>0$. For more details, see Section 6.2 in [NW06].

Constructor

SR1(r::Float64=-1.0)

Generate the SR1 update.

source
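The stabilization rule translates directly into code. A Euclidean coordinate-level sketch in NumPy (illustrative names, not the Manopt.jl API) that skips the rank-one correction when the denominator test fails:

```python
import numpy as np

def sr1_update(H, s, y, r=1e-8):
    """Coordinate-level SR1 update; the update is skipped when the
    denominator is not larger than r * ||s|| * ||y - H s||, the
    safeguard discussed in [NW06], Section 6.2 (Euclidean sketch)."""
    d = y - H @ s
    denom = d @ s
    if abs(denom) <= r * np.linalg.norm(s) * np.linalg.norm(d):
        return H          # skip: denominator too small
    return H + np.outer(d, d) / denom

n = 4
H = np.eye(n)
s = np.array([1.0, 0.0, 0.0, 0.0])
y = np.array([2.0, 1.0, 0.0, 0.0])
H1 = sr1_update(H, s, y)

assert np.allclose(H1 @ s, y)          # secant equation
assert sr1_update(H, s, H @ s) is H    # d = 0: update is skipped
```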
Manopt.InverseBFGSType
InverseBFGS <: AbstractQuasiNewtonUpdateRule

indicates in AbstractQuasiNewtonDirectionUpdate that the inverse Riemannian BFGS update is used in the Riemannian quasi-Newton method.

Denote by $\widetilde{B}_k^\mathrm{BFGS}$ the operator concatenated with a vector transport and its inverse before and after to act on $x_{k+1} = R_{x_k}(α_k η_k)$. Then the update formula reads

\[B^\mathrm{BFGS}_{k+1} = \Bigl( \mathrm{id}_{T_{x_{k+1}} \mathcal{M}} - \frac{s_k y^{\mathrm{T}}_k }{s^{\mathrm{T}}_k y_k} \Bigr) \widetilde{B}^\mathrm{BFGS}_k \Bigl( \mathrm{id}_{T_{x_{k+1}} \mathcal{M}} - \frac{y_k s^{\mathrm{T}}_k }{s^{\mathrm{T}}_k y_k} \Bigr) + \frac{s_k s^{\mathrm{T}}_k}{s^{\mathrm{T}}_k y_k}\]

where $s_k$ and $y_k$ are the coordinate vectors with respect to the current basis (from QuasiNewtonState) of

\[T^{S}_{x_k, α_k η_k}(α_k η_k) \quad\text{and}\quad \operatorname{grad}f(x_{k+1}) - T^{S}_{x_k, α_k η_k}(\operatorname{grad}f(x_k)) ∈ T_{x_{k+1}} \mathcal{M},\]

respectively.

source
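Maintaining the inverse approximation means the search direction can be computed by a single operator application rather than a linear solve. A Euclidean coordinate-level sketch in NumPy (illustrative names, not the Manopt.jl API) showing the inverse secant equation $B_{k+1} y_k = s_k$:

```python
import numpy as np

def inverse_bfgs_update(B, s, y):
    """Coordinate-level inverse BFGS update of an inverse Hessian
    approximation B (Euclidean sketch); the quasi-Newton direction is
    then eta = -B grad f, with no linear system to solve."""
    rho = s @ y
    V = np.eye(len(s)) - np.outer(s, y) / rho
    return V @ B @ V.T + np.outer(s, s) / rho

rng = np.random.default_rng(3)
n = 4
A = rng.standard_normal((n, n))
B = A @ A.T + n * np.eye(n)
s = rng.standard_normal(n)
y = s + 0.1 * rng.standard_normal(n)   # keeps s' * y > 0 here
B1 = inverse_bfgs_update(B, s, y)

# inverse secant equation: B_{k+1} y_k = s_k
assert np.allclose(B1 @ y, s)
```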
Manopt.InverseDFPType
InverseDFP <: AbstractQuasiNewtonUpdateRule

indicates in AbstractQuasiNewtonDirectionUpdate that the inverse Riemannian DFP update is used in the Riemannian quasi-Newton method.

Denote by $\widetilde{B}_k^\mathrm{DFP}$ the operator concatenated with a vector transport and its inverse before and after to act on $x_{k+1} = R_{x_k}(α_k η_k)$. Then the update formula reads

\[B^\mathrm{DFP}_{k+1} = \widetilde{B}^\mathrm{DFP}_k + \frac{s_k s^{\mathrm{T}}_k}{s^{\mathrm{T}}_k y_k} - \frac{\widetilde{B}^\mathrm{DFP}_k y_k y^{\mathrm{T}}_k \widetilde{B}^\mathrm{DFP}_k}{y^{\mathrm{T}}_k \widetilde{B}^\mathrm{DFP}_k y_k}\]

where $s_k$ and $y_k$ are the coordinate vectors with respect to the current basis (from QuasiNewtonState) of

\[T^{S}_{x_k, α_k η_k}(α_k η_k) \quad\text{and}\quad \operatorname{grad}f(x_{k+1}) - T^{S}_{x_k, α_k η_k}(\operatorname{grad}f(x_k)) ∈ T_{x_{k+1}} \mathcal{M},\]

respectively.

source
Manopt.InverseBroydenType
InverseBroyden <: AbstractQuasiNewtonUpdateRule

Indicates in AbstractQuasiNewtonDirectionUpdate that the Riemannian Broyden update is used in the Riemannian quasi-Newton method, which is a convex combination of InverseBFGS and InverseDFP.

Denote by $\widetilde{B}_k^\mathrm{Br}$ the operator concatenated with a vector transport and its inverse before and after to act on $x_{k+1} = R_{x_k}(α_k η_k)$. Then the update formula reads

\[B^\mathrm{Br}_{k+1} = \widetilde{B}^\mathrm{Br}_k - \frac{\widetilde{B}^\mathrm{Br}_k y_k y^{\mathrm{T}}_k \widetilde{B}^\mathrm{Br}_k}{y^{\mathrm{T}}_k \widetilde{B}^\mathrm{Br}_k y_k} + \frac{s_k s^{\mathrm{T}}_k}{s^{\mathrm{T}}_k y_k} + φ_k y^{\mathrm{T}}_k \widetilde{B}^\mathrm{Br}_k y_k \Bigl( \frac{s_k}{s^{\mathrm{T}}_k y_k} - \frac{\widetilde{B}^\mathrm{Br}_k y_k}{y^{\mathrm{T}}_k \widetilde{B}^\mathrm{Br}_k y_k} \Bigr) \Bigl( \frac{s_k}{s^{\mathrm{T}}_k y_k} - \frac{\widetilde{B}^\mathrm{Br}_k y_k}{y^{\mathrm{T}}_k \widetilde{B}^\mathrm{Br}_k y_k} \Bigr)^{\mathrm{T}}\]

where $s_k$ and $y_k$ are the coordinate vectors with respect to the current basis (from QuasiNewtonState) of

\[T^{S}_{x_k, α_k η_k}(α_k η_k) \quad\text{and}\quad \operatorname{grad}f(x_{k+1}) - T^{S}_{x_k, α_k η_k}(\operatorname{grad}f(x_k)) ∈ T_{x_{k+1}} \mathcal{M},\]

respectively, and $φ_k$ is the Broyden factor which is :constant by default but can also be set to :Davidon.

Constructor

InverseBroyden(φ, update_rule::Symbol = :constant)
source
Manopt.InverseSR1Type
InverseSR1 <: AbstractQuasiNewtonUpdateRule

indicates in AbstractQuasiNewtonDirectionUpdate that the inverse Riemannian SR1 update is used in the Riemannian quasi-Newton method.

Denote by $\widetilde{B}_k^\mathrm{SR1}$ the operator concatenated with a vector transport and its inverse before and after to act on $x_{k+1} = R_{x_k}(α_k η_k)$. Then the update formula reads

\[B^\mathrm{SR1}_{k+1} = \widetilde{B}^\mathrm{SR1}_k + \frac{ (s_k - \widetilde{B}^\mathrm{SR1}_k y_k) (s_k - \widetilde{B}^\mathrm{SR1}_k y_k)^{\mathrm{T}} }{ (s_k - \widetilde{B}^\mathrm{SR1}_k y_k)^{\mathrm{T}} y_k }\]

where $s_k$ and $y_k$ are the coordinate vectors with respect to the current basis (from QuasiNewtonState) of

\[T^{S}_{x_k, α_k η_k}(α_k η_k) \quad\text{and}\quad \operatorname{grad}f(x_{k+1}) - T^{S}_{x_k, α_k η_k}(\operatorname{grad}f(x_k)) ∈ T_{x_{k+1}} \mathcal{M},\]

respectively.

This method can be stabilized by only performing the update if the denominator is larger than $r\lVert y_k\rVert_{x_{k+1}}\lVert s_k - \widetilde{B}^\mathrm{SR1}_k y_k \rVert_{x_{k+1}}$ for some $r>0$. For more details, see Section 6.2 in [NW06].

Constructor

InverseSR1(r::Float64=-1.0)

Generate the InverseSR1 update.

source

State

The quasi-Newton algorithm is based on a DefaultManoptProblem.

Manopt.QuasiNewtonStateType
QuasiNewtonState <: AbstractManoptSolverState

The AbstractManoptSolverState represents any quasi-Newton based method and stores all necessary fields.

Fields

  • direction_update: an AbstractQuasiNewtonDirectionUpdate rule.
  • η: the current update direction
  • nondescent_direction_behavior: a Symbol to specify how to handle directions that are not descent directions.
  • nondescent_direction_value: the value from the last inner product from checking for descent directions
  • p::P: a point on the manifold $\mathcal M$ storing the current iterate
  • p_old: the last iterate
  • sk: the current step
  • yk: the current gradient difference
  • retraction_method::AbstractRetractionMethod: a retraction $\operatorname{retr}$ to use, see the section on retractions
  • stepsize::Stepsize: a functor inheriting from Stepsize to determine a step size
  • stop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled
  • X::T: a tangent vector at the point $p$ on the manifold $\mathcal M$ storing the gradient at the current iterate
  • X_old: the last gradient

Constructor

QuasiNewtonState(M::AbstractManifold, p; kwargs...)

Generate the quasi-Newton state on the manifold M with start point p.

Keyword arguments

See also

quasi_Newton

source

Technical details

The quasi_Newton solver requires the following functions of a manifold to be available

  • A retract!(M, q, p, X); it is recommended to set the default_retraction_method to a favourite retraction. If this default is set, a retraction_method= does not have to be specified.
  • A vector_transport_to!(M, Y, p, X, q); it is recommended to set the default_vector_transport_method to a favourite vector transport. If this default is set, a vector_transport_method= or vector_transport_method_dual= (for $\mathcal N$) does not have to be specified.
  • By default quasi-Newton uses ArmijoLinesearch, which requires max_stepsize(M) to be set and an implementation of inner(M, p, X).
  • The norm as well, to stop when the norm of the gradient is small; if you implemented inner, the norm is provided already.
  • A copyto!(M, q, p) and copy(M,p) for points and similarly copy(M, p, X) for tangent vectors.
  • By default the tangent vector storing the gradient is initialized calling zero_vector(M,p).

Most Hessian approximations further require get_coordinates(M, p, X, b) with respect to the AbstractBasis b provided, which is DefaultOrthonormalBasis by default from the basis= keyword.
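The role of get_coordinates can be pictured in a small Euclidean sketch in NumPy (illustrative, not the Manopt.jl API): with an orthonormal basis stored as the columns of Q, a tangent vector X is handled through its coordinate vector c = QᵀX, which is what the dense quasi-Newton matrix acts on, and X is recovered from c afterwards.

```python
import numpy as np

rng = np.random.default_rng(4)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))  # orthonormal basis columns
X = rng.standard_normal(4)

c = Q.T @ X     # coordinates of X in the basis (role of get_coordinates)
X_back = Q @ c  # reconstruction from coordinates (role of get_vector)

assert np.allclose(X_back, X)
```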

Literature

[HAG18]
W. Huang, P.-A. Absil and K. A. Gallivan. A Riemannian BFGS method without differentiated retraction for nonconvex optimization problems. SIAM Journal on Optimization 28, 470–495 (2018).
[HGA15]
W. Huang, K. A. Gallivan and P.-A. Absil. A Broyden class of quasi-Newton methods for Riemannian optimization. SIAM Journal on Optimization 25, 1660–1685 (2015).
[NW06]
J. Nocedal and S. J. Wright. Numerical Optimization. 2 Edition (Springer, New York, 2006).

Stochastic gradient descent

Manopt.stochastic_gradient_descentFunction
stochastic_gradient_descent(M, grad_f, p=rand(M); kwargs...)
stochastic_gradient_descent(M, msgo; kwargs...)
stochastic_gradient_descent!(M, grad_f, p; kwargs...)
stochastic_gradient_descent!(M, msgo, p; kwargs...)

perform a stochastic gradient descent. This can be performed in-place of p.

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • grad_f: a gradient function, that either returns a vector of the gradients or is a vector of gradient functions
  • p: a point on the manifold $\mathcal M$

Alternatively to the gradient you can provide a ManifoldStochasticGradientObjective msgo; then the cost= keyword has no effect, since the cost is already contained in the objective.

Keyword arguments

  • cost=missing: you can provide a cost function for example to track the function value
  • direction=StochasticGradient(zero_vector(M, p))
  • evaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.
  • evaluation_order=:Random: specify whether to use a randomly permuted sequence (:FixedRandom), a per cycle permuted sequence (:Linear) or the default :Random one.
  • order_type=:RandomOrder: a type of ordering of gradient evaluations. Possible values are :RandomOrder, a :FixedPermutation, :LinearOrder
  • stopping_criterion=StopAfterIteration(1000): a functor indicating that the stopping criterion is fulfilled
  • stepsize=default_stepsize(M, StochasticGradientDescentState): a functor inheriting from Stepsize to determine a step size
  • order=[1:n]: the initial permutation, where n is the number of gradients in grad_f.
  • retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source
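As a minimal illustration of the solver loop (a Euclidean sketch in NumPy, not the Manopt.jl API): for $f(p) = \frac{1}{2}\sum_i \lVert p - q_i\rVert^2$ the single-term gradients are $p - q_i$, and a random evaluation order with a decreasing step size recovers the minimizer, the mean of the $q_i$; on a manifold the update line would be a retraction instead of vector subtraction.

```python
import numpy as np

rng = np.random.default_rng(0)
targets = rng.standard_normal((10, 3))             # the q_i
grad_f = [lambda p, q=q: p - q for q in targets]   # one gradient per summand

p = np.zeros(3)
t = 0
for epoch in range(20):
    order = rng.permutation(len(grad_f))   # evaluation_order=:Random
    for i in order:
        # Euclidean stand-in for a retraction step; with the decreasing
        # step 1/(t+1) the iterate is the running mean of sampled q_i
        p = p - grad_f[i](p) / (t + 1)
        t += 1

assert np.allclose(p, targets.mean(axis=0))
```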
Manopt.stochastic_gradient_descent!Function
stochastic_gradient_descent(M, grad_f, p=rand(M); kwargs...)
 stochastic_gradient_descent(M, msgo; kwargs...)
 stochastic_gradient_descent!(M, grad_f, p; kwargs...)
stochastic_gradient_descent!(M, msgo, p; kwargs...)

perform a stochastic gradient descent. This can be performed in-place of p.

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • grad_f: a gradient function, that either returns a vector of the gradients or is a vector of gradient functions
  • p: a point on the manifold $\mathcal M$

Alternatively to the gradient you can provide a ManifoldStochasticGradientObjective msgo; then the cost= keyword has no effect, since the cost is already contained in the objective.

Keyword arguments

  • cost=missing: you can provide a cost function for example to track the function value
  • direction=StochasticGradient(zero_vector(M, p))
  • evaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.
  • evaluation_order=:Random: specify whether to use a randomly permuted sequence (:FixedRandom), a per cycle permuted sequence (:Linear) or the default :Random one.
  • order_type=:RandomOrder: a type of ordering of gradient evaluations. Possible values are :RandomOrder, a :FixedPermutation, :LinearOrder
  • stopping_criterion=StopAfterIteration(1000): a functor indicating that the stopping criterion is fulfilled
  • stepsize=default_stepsize(M, StochasticGradientDescentState): a functor inheriting from Stepsize to determine a step size
  • order=[1:n]: the initial permutation, where n is the number of gradients in grad_f.
  • retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source

State

Manopt.StochasticGradientDescentStateType
StochasticGradientDescentState <: AbstractGradientDescentSolverState

Store the following fields for a default stochastic gradient descent algorithm, see also ManifoldStochasticGradientObjective and stochastic_gradient_descent.

Fields

  • p::P: a point on the manifold $\mathcal M$ storing the current iterate
  • direction: a direction update to use
  • stop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled
  • stepsize::Stepsize: a functor inheriting from Stepsize to determine a step size
  • evaluation_order: specify whether to use a randomly permuted sequence (:FixedRandom), a per cycle permuted sequence (:Linear) or the default, a :Random sequence.
  • order: stores the current permutation
  • retraction_method::AbstractRetractionMethod: a retraction $\operatorname{retr}$ to use, see the section on retractions

Constructor

StochasticGradientDescentState(M::AbstractManifold; kwargs...)

Create a StochasticGradientDescentState with start point p.

Keyword arguments

  • direction=StochasticGradientRule(M, zero_vector(M, p))
  • order_type=:RandomOrder
  • order=Int[]: specify how to store the order of indices for the next epoch
  • retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions
  • p=rand(M): a point on the manifold $\mathcal M$ to specify the initial value
  • stopping_criterion=StopAfterIteration(1000): a functor indicating that the stopping criterion is fulfilled
  • stepsize=default_stepsize(M, StochasticGradientDescentState): a functor inheriting from Stepsize to determine a step size
  • X=zero_vector(M, p): a tangent vector at the point $p$ on the manifold $\mathcal M$ to specify the representation of a tangent vector
source
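The ordering modes for evaluation_order differ only in how the index sequence for an epoch is produced; a small sketch in Python (illustrative names, not the Manopt.jl API — the docstrings refer to these as :Random, :FixedRandom, and :Linear):

```python
import numpy as np

def epoch_order(n, mode, rng, fixed=None):
    """Produce the index order for one epoch over n gradients:
    'Linear' cycles 0..n-1, 'FixedRandom' reuses one permutation drawn
    up front, 'Random' draws a fresh permutation every epoch (sketch)."""
    if mode == "Linear":
        return np.arange(n)
    if mode == "FixedRandom":
        return fixed
    return rng.permutation(n)

rng = np.random.default_rng(5)
fixed = rng.permutation(4)

assert list(epoch_order(4, "Linear", rng)) == [0, 1, 2, 3]
# the fixed permutation is identical across epochs
a = epoch_order(4, "FixedRandom", rng, fixed)
b = epoch_order(4, "FixedRandom", rng, fixed)
assert list(a) == list(b)
```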

Additionally, the options share a DirectionUpdateRule, so you can also apply MomentumGradient and AverageGradient here. The innermost one should always be a StochasticGradient.

Manopt.StochasticGradientFunction
StochasticGradient(; kwargs...)
StochasticGradient(M::AbstractManifold; kwargs...)

Keyword arguments

  • initial_gradient=zero_vector(M, p): a tangent vector at the point $p$ on the manifold $\mathcal M$
  • p=rand(M): a point on the manifold $\mathcal M$ to specify the initial value
Info

This function generates a ManifoldDefaultsFactory for StochasticGradientRule. For default values that depend on the manifold, this factory postpones the construction until the manifold from, for example, a corresponding AbstractManoptSolverState is available.

source

which internally uses

Manopt.AbstractGradientGroupDirectionRuleType
AbstractStochasticGradientDescentSolverState <: AbstractManoptSolverState

A generic type for all options related to gradient descent methods working with parts of the total gradient

source
Manopt.StochasticGradientRuleType
StochasticGradientRule <: AbstractGradientGroupDirectionRule

Create a functor (problem, state, k) -> (s, X) to evaluate the stochastic gradient, that is, choose a random index from the state and use the internal field for evaluation of the gradient in-place.

The default gradient processor, which just evaluates the (stochastic) gradient or a subset thereof.

Fields

  • X::T: a tangent vector at the point $p$ on the manifold $\mathcal M$

Constructor

StochasticGradientRule(M::AbstractManifold; p=rand(M), X=zero_vector(M, p))

Initialize the stochastic gradient processor with tangent vector type of X, where both M and p are just help variables.

See also

stochastic_gradient_descent, StochasticGradient

source

Technical details

The stochastic_gradient_descent solver requires the following functions of a manifold to be available

  • A retract!(M, q, p, X); it is recommended to set the default_retraction_method to a favourite retraction. If this default is set, a retraction_method= does not have to be specified.
+stochastic_gradient_descent!(M, msgo, p; kwargs...)

perform a stochastic gradient descent. This can be perfomed in-place of p.

Input

alternatively to the gradient you can provide an ManifoldStochasticGradientObjective msgo, then using the cost= keyword does not have any effect since if so, the cost is already within the objective.

Keyword arguments

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

source

State

Manopt.StochasticGradientDescentStateType
StochasticGradientDescentState <: AbstractGradientDescentSolverState

Store the following fields for a default stochastic gradient descent algorithm, see also ManifoldStochasticGradientObjective and stochastic_gradient_descent.

Fields

  • p::P: a point on the manifold $\mathcal M$storing the current iterate
  • direction: a direction update to use
  • stop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled
  • stepsize::Stepsize: a functor inheriting from Stepsize to determine a step size
  • evaluation_order: specify whether to use a randomly permuted sequence (:FixedRandom:), a per cycle permuted sequence (:Linear) or the default, a :Random sequence.
  • order: stores the current permutation
  • retraction_method::AbstractRetractionMethod: a retraction $\operatorname{retr}$ to use, see the section on retractions

Constructor

StochasticGradientDescentState(M::AbstractManifold; kwargs...)

Create a StochasticGradientDescentState with start point p.

Keyword arguments

  • direction=StochasticGradientRule(M, [zerovector](@extrefManifoldsBase.zerovector-Tuple{AbstractManifold, Any})(M, p)`)
  • order_type=:RandomOrder`
  • order=Int[]: specify how to store the order of indices for the next epoche
  • retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions
  • p=rand(M): a point on the manifold $\mathcal M$to specify the initial value
  • stopping_criterion=StopAfterIteration(1000): a functor indicating that the stopping criterion is fulfilled
  • stepsize=default_stepsize(M, StochasticGradientDescentState): a functor inheriting from Stepsize to determine a step size
  • X=zero_vector(M, p): a tangent vector at the point $p$ on the manifold $\mathcal M$to specify the representation of a tangent vector
source

Additionally, the options share a DirectionUpdateRule, so you can also apply MomentumGradient and AverageGradient here. The most inner one should always be.

Manopt.StochasticGradientFunction
StochasticGradient(; kwargs...)
+StochasticGradient(M::AbstractManifold; kwargs...)

Keyword arguments

  • initial_gradient=zero_vector(M, p): a tangent vector at the point $p$ on the manifold $\mathcal M$
  • p=rand(M): a point on the manifold $\mathcal M$ to specify the initial value
Info

This function generates a ManifoldDefaultsFactory for StochasticGradientRule. For default values that depend on the manifold, this factory postpones the construction until the manifold from, for example, a corresponding AbstractManoptSolverState is available.

source

which internally uses

Manopt.AbstractGradientGroupDirectionRule — Type
AbstractStochasticGradientDescentSolverState <: AbstractManoptSolverState

A generic type for all options related to gradient descent methods working with parts of the total gradient

source
Manopt.StochasticGradientRule — Type

StochasticGradientRule <: AbstractGradientGroupDirectionRule

Create a functor (problem, state, k) -> (s, X) to evaluate the stochastic gradient, that is, choose a random index from the state and use the internal field for evaluation of the gradient in-place.

The default gradient processor, which just evaluates the (stochastic) gradient or a subset thereof.

Fields

  • X::T: a tangent vector at the point $p$ on the manifold $\mathcal M$

Constructor

StochasticGradientRule(M::AbstractManifold; p=rand(M), X=zero_vector(M, p))

Initialize the stochastic gradient processor with tangent vector type of X, where both M and p are just help variables.

See also

stochastic_gradient_descent, StochasticGradient

source

Technical details

The stochastic_gradient_descent solver requires the following functions of a manifold to be available


Subgradient method

Manopt.subgradient_method — Function

subgradient_method(M, f, ∂f, p=rand(M); kwargs...)
subgradient_method(M, sgo, p=rand(M); kwargs...)
subgradient_method!(M, f, ∂f, p; kwargs...)
subgradient_method!(M, sgo, p; kwargs...)

perform a subgradient method $p^{(k+1)} = \operatorname{retr}\bigl(p^{(k)}, s^{(k)}∂f(p^{(k)})\bigr)$, where $\operatorname{retr}$ is a retraction, $s^{(k)}$ is a step size.

While the subgradient might be set-valued, the argument ∂f should always return one element from the subgradient, though not necessarily deterministically. For more details see [FO98].

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • f: a cost function $f: \mathcal M→ ℝ$ implemented as (M, p) -> v
  • ∂f: the (sub)gradient $∂ f: \mathcal M → T\mathcal M$ of f
  • p: a point on the manifold $\mathcal M$

alternatively to f and ∂f a ManifoldSubgradientObjective sgo can be provided.

Keyword arguments

  • evaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.
  • retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions
  • stepsize=default_stepsize(M, SubGradientMethodState): a functor inheriting from Stepsize to determine a step size
  • stopping_criterion=StopAfterIteration(5000): a functor indicating that the stopping criterion is fulfilled
  • X=zero_vector(M, p): a tangent vector at the point $p$ on the manifold $\mathcal M$ to specify the representation of a tangent vector

and the ones that are passed to decorate_state! for decorators.

Output

the obtained (approximate) minimizer $p^*$, see get_solver_return for details

source
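As a usage sketch, consider the Riemannian median of points on the sphere, a nonsmooth problem well suited to the subgradient method. This assumes Manifolds.jl for `Sphere`; the median cost and one element of its subdifferential are written out by hand:

```julia
using Manopt, Manifolds

M = Sphere(2)
data = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]

# median cost: mean of distances to the data points
f(M, p) = sum(distance(M, p, q) for q in data) / length(data)

# one element of the subdifferential of f at p
function ∂f(M, p)
    X = zero_vector(M, p)
    for q in data
        d = distance(M, p, q)
        # the distance is not differentiable at q itself; 0 is a valid choice there
        d > 0 && (X .-= log(M, p, q) ./ (length(data) * d))
    end
    return X
end

p_star = subgradient_method(M, f, ∂f, [1.0, 0.0, 0.0];
    stopping_criterion=StopAfterIteration(2000))
```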
Manopt.subgradient_method! — Function

subgradient_method(M, f, ∂f, p=rand(M); kwargs...)
subgradient_method(M, sgo, p=rand(M); kwargs...)
subgradient_method!(M, f, ∂f, p; kwargs...)
subgradient_method!(M, sgo, p; kwargs...)

perform a subgradient method $p^{(k+1)} = \operatorname{retr}\bigl(p^{(k)}, s^{(k)}∂f(p^{(k)})\bigr)$, where $\operatorname{retr}$ is a retraction, $s^{(k)}$ is a step size.

While the subgradient might be set-valued, the argument ∂f should always return one element from the subgradient, though not necessarily deterministically. For more details see [FO98].

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • f: a cost function $f: \mathcal M→ ℝ$ implemented as (M, p) -> v
  • ∂f: the (sub)gradient $∂ f: \mathcal M → T\mathcal M$ of f
  • p: a point on the manifold $\mathcal M$

alternatively to f and ∂f a ManifoldSubgradientObjective sgo can be provided.

Keyword arguments

  • evaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.
  • retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions
  • stepsize=default_stepsize(M, SubGradientMethodState): a functor inheriting from Stepsize to determine a step size
  • stopping_criterion=StopAfterIteration(5000): a functor indicating that the stopping criterion is fulfilled
  • X=zero_vector(M, p): a tangent vector at the point $p$ on the manifold $\mathcal M$ to specify the representation of a tangent vector

and the ones that are passed to decorate_state! for decorators.

Output

the obtained (approximate) minimizer $p^*$, see get_solver_return for details

source

State

Manopt.SubGradientMethodState — Type
SubGradientMethodState <: AbstractManoptSolverState

stores option values for a subgradient_method solver

Fields

  • p::P: a point on the manifold $\mathcal M$ storing the current iterate
  • p_star: optimal value
  • retraction_method::AbstractRetractionMethod: a retraction $\operatorname{retr}$ to use, see the section on retractions
  • stepsize::Stepsize: a functor inheriting from Stepsize to determine a step size
  • stop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled
  • X: the current element from the possible subgradients at p that was last evaluated.

Constructor

SubGradientMethodState(M::AbstractManifold; kwargs...)

Initialise the Subgradient method state

Keyword arguments

  • retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions
  • p=rand(M): a point on the manifold $\mathcal M$ to specify the initial value
  • stepsize=default_stepsize(M, SubGradientMethodState): a functor inheriting from Stepsize to determine a step size
  • stopping_criterion=StopAfterIteration(5000): a functor indicating that the stopping criterion is fulfilled
  • X=zero_vector(M, p): a tangent vector at the point $p$ on the manifold $\mathcal M$ to specify the representation of a tangent vector
source

For DebugActions and RecordActions to record (sub)gradient, its norm and the step sizes, see the gradient descent actions.

Technical details

The subgradient_method solver requires the following functions of a manifold to be available

  • A retract!(M, q, p, X); it is recommended to set the default_retraction_method to a favourite retraction. If this default is set, a retraction_method= does not have to be specified.

Literature

[FO98]
O. Ferreira and P. R. Oliveira. Subgradient algorithm on Riemannian manifolds. Journal of Optimization Theory and Applications 97, 93–104 (1998).
Steihaug-Toint truncated conjugate gradient method

Manopt.truncated_conjugate_gradient_descent — Function

truncated_conjugate_gradient_descent(M, f, grad_f, Hess_f, p=rand(M), X=rand(M; vector_at=p); kwargs...)
truncated_conjugate_gradient_descent(M, mho::ManifoldHessianObjective, p=rand(M), X=rand(M; vector_at=p); kwargs...)
truncated_conjugate_gradient_descent(M, trmo::TrustRegionModelObjective, p=rand(M), X=rand(M; vector_at=p); kwargs...)

solve the trust-region subproblem

\[\begin{align*}
\operatorname*{arg\,min}_{Y ∈ T_p\mathcal{M}}&\ m_p(Y) = f(p) + ⟨\operatorname{grad}f(p), Y⟩_p + \frac{1}{2} ⟨\mathcal{H}_p[Y], Y⟩_p\\
\text{such that}& \ \lVert Y \rVert_p ≤ Δ
\end{align*}\]


on a manifold $\mathcal M$ by using the Steihaug-Toint truncated conjugate-gradient (tCG) method. This can be done in place of X.

For a description of the algorithm and theorems offering convergence guarantees, see [ABG06, CGT00].

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • f: a cost function $f: \mathcal M→ ℝ$ implemented as (M, p) -> v
  • grad_f: the (Riemannian) gradient $\operatorname{grad}f: \mathcal M → T_{p}\mathcal M$ of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place
  • Hess_f: the (Riemannian) Hessian $\operatorname{Hess}f: T_{p}\mathcal M → T_{p}\mathcal M$ of f as a function (M, p, X) -> Y or a function (M, Y, p, X) -> Y computing Y in-place
  • p: a point on the manifold $\mathcal M$
  • X: a tangent vector at the point $p$ on the manifold $\mathcal M$

Instead of the three functions, you either provide a ManifoldHessianObjective mho which is then used to build the trust region model, or a TrustRegionModelObjective trmo directly.

Keyword arguments

  • evaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.
  • preconditioner: a preconditioner for the Hessian H. This is either an allocating function (M, p, X) -> Y or an in-place function (M, Y, p, X) -> Y, see evaluation, and by default set to the identity.
  • θ=1.0: the superlinear convergence target rate of $1+θ$
  • κ=0.1: the linear convergence target rate.
  • project!=copyto!: for numerical stability it is possible to project onto the tangent space after every iteration. the function has to work inplace of Y, that is (M, Y, p, X) -> Y, where X and Y can be the same memory.
  • randomize=false: indicate whether X is initialised to a random vector or not. This disables preconditioning.
  • retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions
  • stopping_criterion=StopAfterIteration(manifold_dimension(base_manifold(Tpm)))|StopWhenResidualIsReducedByFactorOrPower(; κ=κ, θ=θ)|StopWhenTrustRegionIsExceeded()|StopWhenCurvatureIsNegative()|StopWhenModelIncreased(): a functor indicating that the stopping criterion is fulfilled
  • trust_region_radius=injectivity_radius(M) / 4: the initial trust-region radius

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

See also

trust_regions

source
Manopt.truncated_conjugate_gradient_descent! — Function

truncated_conjugate_gradient_descent!(M, f, grad_f, Hess_f, p, X; kwargs...)
truncated_conjugate_gradient_descent!(M, mho::ManifoldHessianObjective, p, X; kwargs...)
truncated_conjugate_gradient_descent!(M, trmo::TrustRegionModelObjective, p, X; kwargs...)

solve the trust-region subproblem

\[\begin{align*}
\operatorname*{arg\,min}_{Y ∈ T_p\mathcal{M}}&\ m_p(Y) = f(p) + ⟨\operatorname{grad}f(p), Y⟩_p + \frac{1}{2} ⟨\mathcal{H}_p[Y], Y⟩_p\\
\text{such that}& \ \lVert Y \rVert_p ≤ Δ
\end{align*}\]

on a manifold $\mathcal M$ by using the Steihaug-Toint truncated conjugate-gradient (tCG) method. This can be done in place of X.

For a description of the algorithm and theorems offering convergence guarantees, see [ABG06, CGT00].

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • f: a cost function $f: \mathcal M→ ℝ$ implemented as (M, p) -> v
  • grad_f: the (Riemannian) gradient $\operatorname{grad}f: \mathcal M → T_{p}\mathcal M$ of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place
  • Hess_f: the (Riemannian) Hessian $\operatorname{Hess}f: T_{p}\mathcal M → T_{p}\mathcal M$ of f as a function (M, p, X) -> Y or a function (M, Y, p, X) -> Y computing Y in-place
  • p: a point on the manifold $\mathcal M$
  • X: a tangent vector at the point $p$ on the manifold $\mathcal M$

Instead of the three functions, you either provide a ManifoldHessianObjective mho which is then used to build the trust region model, or a TrustRegionModelObjective trmo directly.

Keyword arguments

  • evaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.
  • preconditioner: a preconditioner for the Hessian H. This is either an allocating function (M, p, X) -> Y or an in-place function (M, Y, p, X) -> Y, see evaluation, and by default set to the identity.
  • θ=1.0: the superlinear convergence target rate of $1+θ$
  • κ=0.1: the linear convergence target rate.
  • project!=copyto!: for numerical stability it is possible to project onto the tangent space after every iteration. the function has to work inplace of Y, that is (M, Y, p, X) -> Y, where X and Y can be the same memory.
  • randomize=false: indicate whether X is initialised to a random vector or not. This disables preconditioning.
  • retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions
  • stopping_criterion=StopAfterIteration(manifold_dimension(base_manifold(Tpm)))|StopWhenResidualIsReducedByFactorOrPower(; κ=κ, θ=θ)|StopWhenTrustRegionIsExceeded()|StopWhenCurvatureIsNegative()|StopWhenModelIncreased(): a functor indicating that the stopping criterion is fulfilled
  • trust_region_radius=injectivity_radius(M) / 4: the initial trust-region radius

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

See also

trust_regions

source
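As a usage sketch, the tCG subproblem solver can be called directly for a Rayleigh-quotient-type cost on the sphere. This assumes Manifolds.jl for `Sphere`; the gradient and Hessian formulas below are the standard Riemannian ones for this embedded cost, written out by hand:

```julia
using Manopt, Manifolds, LinearAlgebra

M = Sphere(2)
A = Diagonal([3.0, 2.0, 1.0])

# Rayleigh-quotient cost f(p) = p' A p restricted to the unit sphere
f(M, p) = p' * A * p
grad_f(M, p) = 2 .* (A * p .- (p' * A * p) .* p)
Hess_f(M, p, X) = 2 .* (A * X .- (p' * (A * X)) .* p .- (p' * A * p) .* X)

p = normalize([1.0, 1.0, 1.0])
X = zero_vector(M, p)

# returns the (approximate) minimizer Y of the trust-region model at p
Y = truncated_conjugate_gradient_descent(M, f, grad_f, Hess_f, p, X;
    trust_region_radius=1.0)
```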

State

Manopt.TruncatedConjugateGradientState — Type
TruncatedConjugateGradientState <: AbstractHessianSolverState

describe the Steihaug-Toint truncated conjugate-gradient method, with

Fields

Let T denote the type of a tangent vector and R <: Real.

  • δ::T: the conjugate gradient search direction
  • δHδ, YPδ, δPδ, YPY: temporary inner products and preconditioned inner products
  • Hδ, HY: temporary results of the Hessian applied to δ and Y, respectively
  • κ::R: the linear convergence target rate.
  • project!: for numerical stability it is possible to project onto the tangent space after every iteration. the function has to work inplace of Y, that is (M, Y, p, X) -> Y, where X and Y can be the same memory.
  • randomize: indicate whether X is initialised to a random vector or not
  • residual::T: the gradient of the model $m(Y)$
  • stop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled
  • θ::R: the superlinear convergence target rate of $1+θ$
  • trust_region_radius::R: the trust-region radius
  • X::T: the gradient $\operatorname{grad}f(p)$
  • Y::T: current iterate tangent vector
  • z::T: the preconditioned residual
  • z_r::R: inner product of the residual and z

Constructor

TruncatedConjugateGradientState(TpM::TangentSpace, Y=rand(TpM); kwargs...)

Initialise the TCG state.

Input

Keyword arguments

See also

truncated_conjugate_gradient_descent, trust_regions

source

Stopping criteria

Manopt.StopWhenResidualIsReducedByFactorOrPower — Type
StopWhenResidualIsReducedByFactorOrPower <: StoppingCriterion

A functor for testing if the norm of residual at the current iterate is reduced either by a power of 1+θ or by a factor κ compared to the norm of the initial residual. The criterion hence reads

$\lVert r_k \rVert_{p} ≦ \lVert r_0 \rVert_{p^{(0)}} \min \bigl( κ, \lVert r_0 \rVert_{p^{(0)}}^{θ} \bigr)$.

Fields

  • κ: the reduction factor
  • θ: part of the reduction power
  • at_iteration::Int: an integer indicating at which iteration the stopping criterion last indicated to stop, which might also be before the solver started (0). Any negative value indicates that this was not yet the case;

Constructor

StopWhenResidualIsReducedByFactorOrPower(; κ=0.1, θ=1.0)

Initialize the StopWhenResidualIsReducedByFactorOrPower functor to indicate to stop after the norm of the current residual is lesser than either the norm of the initial residual to the power of 1+θ or the norm of the initial residual times κ.

See also

truncated_conjugate_gradient_descent, trust_regions

source
Manopt.StopWhenTrustRegionIsExceeded — Type

StopWhenTrustRegionIsExceeded <: StoppingCriterion

A functor for testing if the norm of the next iterate in the Steihaug-Toint truncated conjugate gradient method is larger than the trust-region radius, $θ ≤ \lVert Y^{(k)} \rVert_{p^{(k)}}$, and to end the algorithm when the trust region has been left.

Fields

  • at_iteration::Int: an integer indicating at which iteration the stopping criterion last indicated to stop, which might also be before the solver started (0). Any negative value indicates that this was not yet the case;
  • trr: the trust-region radius
  • YPY: the computed norm of $Y$

Constructor

StopWhenTrustRegionIsExceeded()

initialize the StopWhenTrustRegionIsExceeded functor to indicate to stop after the norm of the next iterate is greater than the trust-region radius.

See also

truncated_conjugate_gradient_descent, trust_regions

source
Manopt.StopWhenCurvatureIsNegative — Type
StopWhenCurvatureIsNegative <: StoppingCriterion

A functor for testing if the curvature of the model is negative, $⟨δ_k, \operatorname{Hess} F(p)[δ_k]⟩_p ≦ 0$. In this case, the model is not strictly convex, and the stepsize as computed does not yield a reduction of the model.

Fields

  • at_iteration::Int: an integer indicating at which iteration the stopping criterion last indicated to stop, which might also be before the solver started (0). Any negative value indicates that this was not yet the case;
  • value: stores the value of the inner product
  • reason: stores a reason of stopping if the stopping criterion has been reached, see get_reason

Constructor

StopWhenCurvatureIsNegative()

See also

truncated_conjugate_gradient_descent, trust_regions

source
Manopt.StopWhenModelIncreased — Type
StopWhenModelIncreased <: StoppingCriterion

A functor for testing whether the model value increased.

Fields

  • at_iteration::Int: an integer indicating at which iteration the stopping criterion last indicated to stop, which might also be before the solver started (0). Any negative value indicates that this was not yet the case;
  • model_value: stores the last model value
  • inc_model_value: stores the model value that increased

Constructor

StopWhenModelIncreased()

See also

truncated_conjugate_gradient_descent, trust_regions

source
Manopt.set_parameter! — Method
set_parameter!(c::StopWhenResidualIsReducedByFactorOrPower, :ResidualPower, v)

Update the residual power θ to v.

source
Manopt.set_parameter! — Method
set_parameter!(c::StopWhenResidualIsReducedByFactorOrPower, :ResidualFactor, v)

Update the residual factor κ to v.

source
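Grounded directly in the two methods above, adjusting the parameters of an existing criterion could look like this sketch:

```julia
using Manopt

sc = StopWhenResidualIsReducedByFactorOrPower(; κ=0.1, θ=1.0)
# tighten the superlinear target rate θ and the linear factor κ
set_parameter!(sc, :ResidualPower, 0.5)
set_parameter!(sc, :ResidualFactor, 0.05)
```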

Trust region model

Manopt.TrustRegionModelObjective — Type
TrustRegionModelObjective{O<:AbstractManifoldHessianObjective} <: AbstractManifoldSubObjective{O}

A trust region model of the form

\[ m(X) = f(p) + ⟨\operatorname{grad} f(p), X⟩_p + \frac{1}{2} ⟨\operatorname{Hess} f(p)[X], X⟩_p\]

Fields

Constructors

TrustRegionModelObjective(objective)

with either an AbstractManifoldHessianObjective objective or a decorator containing such an objective

source

Technical details

The trust_regions solver requires the following functions of a manifold to be available

Literature

[ABG06]
P.-A. Absil, C. Baker and K. Gallivan. Trust-Region Methods on Riemannian Manifolds. Foundations of Computational Mathematics 7, 303–330 (2006).
[CGT00]
A. R. Conn, N. I. Gould and P. L. Toint. Trust Region Methods (Society for Industrial and Applied Mathematics, 2000).
+\end{align*}\]

on a manifold $\mathcal M$ by using the Steihaug-Toint truncated conjugate-gradient (tCG) method. This can be done inplace of X.

For a description of the algorithm and theorems offering convergence guarantees, see [ABG06, CGT00].

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • f: a cost function $f: \mathcal M→ ℝ$ implemented as (M, p) -> v
  • grad_f: the (Riemannian) gradient $\operatorname{grad}f$: \mathcal M → T_{p}\mathcal M of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place
  • Hess_f: the (Riemannian) Hessian $\operatorname{Hess}f$: T{p}\mathcal M → T{p}\mathcal M of f as a function (M, p, X) -> Y or a function (M, Y, p, X) -> Y computing Y in-place
  • p: a point on the manifold $\mathcal M$
  • X: a tangent vector at the point $p$ on the manifold $\mathcal M$

Instead of the three functions, you either provide a ManifoldHessianObjective mho which is then used to build the trust region model, or a TrustRegionModelObjective trmo directly.

Keyword arguments

  • evaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.
  • preconditioner: a preconditioner for the Hessian H. This is either an allocating function (M, p, X) -> Y or an in-place function (M, Y, p, X) -> Y, see evaluation, and by default set to the identity.
  • θ=1.0: the superlinear convergence target rate of $1+θ$
  • κ=0.1: the linear convergence target rate.
  • project!=copyto!: for numerical stability it is possible to project onto the tangent space after every iteration. the function has to work inplace of Y, that is (M, Y, p, X) -> Y, where X and Y can be the same memory.
  • randomize=false: indicate whether X is initialised to a random vector or not. This disables preconditioning.
  • retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions
  • stopping_criterion=StopAfterIteration(manifold_dimension(base_manifold(Tpm)))|StopWhenResidualIsReducedByFactorOrPower(; κ=κ, θ=θ)|StopWhenTrustRegionIsExceeded()|StopWhenCurvatureIsNegative()|StopWhenModelIncreased(): a functor indicating that the stopping criterion is fulfilled
  • trust_region_radius=injectivity_radius(M) / 4: the initial trust-region radius

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

See also

trust_regions

source

State

Manopt.TruncatedConjugateGradientStateType
TruncatedConjugateGradientState <: AbstractHessianSolverState

describe the Steihaug-Toint truncated conjugate-gradient method, with

Fields

Let T denote the type of a tangent vector and R <: Real.

  • δ::T: the conjugate gradient search direction
  • δHδ, YPδ, δPδ, YPδ: temporary inner products with and preconditioned inner products.
  • , HY: temporary results of the Hessian applied to δ and Y, respectively.
  • κ::R: the linear convergence target rate.
  • project!: for numerical stability it is possible to project onto the tangent space after every iteration. the function has to work inplace of Y, that is (M, Y, p, X) -> Y, where X and Y can be the same memory.
  • randomize: indicate whether X is initialised to a random vector or not
  • residual::T: the gradient of the model $m(Y)$
  • stop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled
  • θ::R: the superlinear convergence target rate of $1+θ$
  • trust_region_radius::R: the trust-region radius
  • X::T: the gradient $\operatorname{grad}f(p)$
  • Y::T: current iterate tangent vector
  • z::T: the preconditioned residual
  • z_r::R: inner product of the residual and z

Constructor

TruncatedConjugateGradientState(TpM::TangentSpace, Y=rand(TpM); kwargs...)

Initialise the TCG state.

Input

Keyword arguments

See also

truncated_conjugate_gradient_descent, trust_regions

source

Stopping criteria

Manopt.StopWhenResidualIsReducedByFactorOrPowerType
StopWhenResidualIsReducedByFactorOrPower <: StoppingCriterion

A functor for testing if the norm of residual at the current iterate is reduced either by a power of 1+θ or by a factor κ compared to the norm of the initial residual. The criterion hence reads

$\lVert r_k \rVert_{p} ≦ \lVert r_0 \rVert_{p^{(0)}} \min \bigl( κ, \lVert r_0 \rVert_{p^{(0)}} \bigr)$.

Fields

  • κ: the reduction factor
  • θ: part of the reduction power
  • at_iteration::Int: an integer indicating at which the stopping criterion last indicted to stop, which might also be before the solver started (0). Any negative value indicates that this was not yet the case;

Constructor

StopWhenResidualIsReducedByFactorOrPower(; κ=0.1, θ=1.0)

Initialize the StopWhenResidualIsReducedByFactorOrPower functor to indicate to stop after the norm of the current residual is lesser than either the norm of the initial residual to the power of 1+θ or the norm of the initial residual times κ.

See also

truncated_conjugate_gradient_descent, trust_regions

source
Manopt.StopWhenTrustRegionIsExceededType
StopWhenTrustRegionIsExceeded <: StoppingCriterion

A functor for testing if the norm of the next iterate in the Steihaug-Toint truncated conjugate gradient method is larger than the trust-region radius $θ ≤ \lVert Y^{(k)}^{*} \rVert_{p^{(k)}}$ and to end the algorithm when the trust region has been left.

Fields

  • at_iteration::Int: an integer indicating at which the stopping criterion last indicted to stop, which might also be before the solver started (0). Any negative value indicates that this was not yet the case;
  • trr the trust region radius
  • YPY the computed norm of $Y$.

Constructor

StopWhenTrustRegionIsExceeded()

initialize the StopWhenTrustRegionIsExceeded functor to indicate to stop after the norm of the next iterate is greater than the trust-region radius.

See also

truncated_conjugate_gradient_descent, trust_regions

source
Manopt.StopWhenCurvatureIsNegativeType
StopWhenCurvatureIsNegative <: StoppingCriterion

A functor for testing if the curvature of the model is negative, $⟨δ_k, \operatorname{Hess} F(p)[δ_k]⟩_p ≦ 0$. In this case, the model is not strictly convex, and the stepsize as computed does not yield a reduction of the model.

Fields

  • at_iteration::Int: an integer indicating at which iteration the stopping criterion last indicated to stop, which might also be before the solver started (0). Any negative value indicates that this was not yet the case;
  • value: stores the value of the inner product.
  • reason: stores a reason of stopping if the stopping criterion has been reached, see get_reason.

Constructor

StopWhenCurvatureIsNegative()

See also

truncated_conjugate_gradient_descent, trust_regions

source
Manopt.StopWhenModelIncreasedType
StopWhenModelIncreased <: StoppingCriterion

A functor for testing if the model value increased.

Fields

  • at_iteration::Int: an integer indicating at which iteration the stopping criterion last indicated to stop, which might also be before the solver started (0). Any negative value indicates that this was not yet the case;
  • model_value: stores the last model value
  • inc_model_value: stores the model value that increased

Constructor

StopWhenModelIncreased()

See also

truncated_conjugate_gradient_descent, trust_regions

source
Manopt.set_parameter!Method
set_parameter!(c::StopWhenResidualIsReducedByFactorOrPower, :ResidualPower, v)

Update the residual Power θ to v.

source
Manopt.set_parameter!Method
set_parameter!(c::StopWhenResidualIsReducedByFactorOrPower, :ResidualFactor, v)

Update the residual Factor κ to v.

source

Trust region model

Manopt.TrustRegionModelObjectiveType
TrustRegionModelObjective{O<:AbstractManifoldHessianObjective} <: AbstractManifoldSubObjective{O}

A trust region model of the form

\[ m(X) = f(p) + ⟨\operatorname{grad} f(p), X⟩_p + \frac{1}{2} ⟨\operatorname{Hess} f(p)[X], X⟩_p\]

Fields

Constructors

TrustRegionModelObjective(objective)

with either an AbstractManifoldHessianObjective objective or a decorator containing such an objective

source

Technical details

The trust_regions solver requires the following functions of a manifold to be available

  • if you do not provide a trust_region_radius=, then injectivity_radius on the manifold M is required.
  • the norm of tangent vectors, to stop when the norm of the gradient is small; if you implemented inner, the norm is provided already.
  • A zero_vector!(M, X, p).
  • A copyto!(M, q, p) and copy(M, p) for points.

Literature

[ABG06]
P.-A. Absil, C. Baker and K. Gallivan. Trust-Region Methods on Riemannian Manifolds. Foundations of Computational Mathematics 7, 303–330 (2006).
[CGT00]
A. R. Conn, N. I. Gould and P. L. Toint. Trust Region Methods (Society for Industrial and Applied Mathematics, 2000).

Here $Δ_k$ is a trust region radius, that is adapted every iteration, and $\mathcal H_k$ is some symmetric linear operator that approximates the Hessian $\operatorname{Hess} f$ of $f$.

Interface

Manopt.trust_regionsFunction
trust_regions(M, f, grad_f, Hess_f, p=rand(M); kwargs...)
trust_regions(M, f, grad_f, p=rand(M); kwargs...)
trust_regions!(M, f, grad_f, Hess_f, p; kwargs...)
trust_regions!(M, f, grad_f, p; kwargs...)

run the Riemannian trust-regions solver for optimization on manifolds to minimize f; see [ABG06, CGT00].

For the case that no Hessian is provided, the Hessian is computed using finite differences, see ApproxHessianFiniteDifference. For solving the inner trust-region subproblem of finding an update-vector, by default the truncated_conjugate_gradient_descent is used.

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • f: a cost function $f: \mathcal M→ ℝ$ implemented as (M, p) -> v
  • grad_f: the (Riemannian) gradient $\operatorname{grad} f: \mathcal M → T_{p}\mathcal M$ of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place
  • Hess_f: the (Riemannian) Hessian $\operatorname{Hess} f: T_{p}\mathcal M → T_{p}\mathcal M$ of f as a function (M, p, X) -> Y or a function (M, Y, p, X) -> Y computing Y in-place
  • p: a point on the manifold $\mathcal M$

Keyword arguments

  • acceptance_rate: accept/reject threshold: if ρ (the performance ratio for the iterate) is at least the acceptance rate ρ', the candidate is accepted. This value should be between $0$ and $\frac{1}{4}$
  • augmentation_threshold=0.75: trust-region augmentation threshold: if ρ is larger than this threshold and the solution lies on the trust-region boundary or has negative curvature, the radius is extended (augmented)
  • augmentation_factor=2.0: trust-region augmentation factor
  • evaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.
  • κ=0.1: the linear convergence target rate of the tCG method truncated_conjugate_gradient_descent, and is used in a stopping criterion therein
  • max_trust_region_radius: the maximum trust-region radius
  • preconditioner: a preconditioner for the Hessian H. This is either an allocating function (M, p, X) -> Y or an in-place function (M, Y, p, X) -> Y, see evaluation, and by default set to the identity.
  • project!=copyto!: for numerical stability it is possible to project onto the tangent space after every iteration. the function has to work inplace of Y, that is (M, Y, p, X) -> Y, where X and Y can be the same memory.
  • randomize=false: indicate whether X is initialised to a random vector or not. This disables preconditioning.
  • ρ_regularization=1e3: regularize the performance evaluation $ρ$ to avoid numerical inaccuracies.
  • reduction_factor=0.25: trust-region reduction factor
  • reduction_threshold=0.1: trust-region reduction threshold: if ρ is below this threshold, the trust region radius is reduced by reduction_factor.
  • retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions
  • stopping_criterion=StopAfterIteration(1000)|StopWhenGradientNormLess(1e-6): a functor indicating that the stopping criterion is fulfilled
  • sub_kwargs=(;): a named tuple of keyword arguments that are passed to decorate_objective! of the sub solver's objective, decorate_state! of the sub solver's state, and the sub state constructor itself.
  • sub_stopping_criterion=( see truncated_conjugate_gradient_descent): a functor indicating that the stopping criterion is fulfilled
  • sub_problem=DefaultManoptProblem(M,ConstrainedManifoldObjective(subcost, subgrad; evaluation=evaluation)): specify a problem for a solver or a closed form solution function, which can be allocating or in-place.
  • sub_state=QuasiNewtonState: a state to specify the sub solver to use. For a closed form solution, this indicates the type of function; here, QuasiNewtonLimitedMemoryDirectionUpdate with InverseBFGS is used
  • θ=1.0: the superlinear convergence target rate of $1+θ$ of the tCG-method truncated_conjugate_gradient_descent, and is used in a stopping criterion therein
  • trust_region_radius=injectivity_radius(M) / 4: the initial trust-region radius

For the case that no Hessian is provided, the Hessian is computed using finite differences, see ApproxHessianFiniteDifference.

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.
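As a sketch of typical usage (assuming Manifolds.jl for the Sphere manifold; the Rayleigh-quotient cost and its derivatives below are standard textbook formulas, not part of Manopt itself):

```julia
using Manopt, Manifolds, LinearAlgebra

# Minimize the Rayleigh quotient f(p) = pᵀAp over the unit sphere S²;
# minimizers are unit eigenvectors of the smallest eigenvalue of A.
A = Diagonal([2.0, 1.0, 0.5])
M = Sphere(2)
f(M, p) = p' * A * p
# Riemannian gradient: project the Euclidean gradient 2Ap onto T_p S²
grad_f(M, p) = 2 * (A * p - (p' * A * p) * p)
# Riemannian Hessian along X ∈ T_p S²
Hess_f(M, p, X) = 2 * (A * X - (p' * A * X) * p - (p' * A * p) * X)

p0 = normalize([1.0, 1.0, 1.0])
q = trust_regions(M, f, grad_f, Hess_f, p0)
# q should approximate ±[0, 0, 1], the eigenvector to eigenvalue 0.5
```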

See also

truncated_conjugate_gradient_descent

source
Manopt.trust_regions!Function
trust_regions(M, f, grad_f, Hess_f, p=rand(M); kwargs...)
trust_regions(M, f, grad_f, p=rand(M); kwargs...)
trust_regions!(M, f, grad_f, Hess_f, p; kwargs...)
trust_regions!(M, f, grad_f, p; kwargs...)

run the Riemannian trust-regions solver for optimization on manifolds to minimize f; see [ABG06, CGT00].

For the case that no Hessian is provided, the Hessian is computed using finite differences, see ApproxHessianFiniteDifference. For solving the inner trust-region subproblem of finding an update-vector, by default the truncated_conjugate_gradient_descent is used.

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • f: a cost function $f: \mathcal M→ ℝ$ implemented as (M, p) -> v
  • grad_f: the (Riemannian) gradient $\operatorname{grad} f: \mathcal M → T_{p}\mathcal M$ of f as a function (M, p) -> X or a function (M, X, p) -> X computing X in-place
  • Hess_f: the (Riemannian) Hessian $\operatorname{Hess} f: T_{p}\mathcal M → T_{p}\mathcal M$ of f as a function (M, p, X) -> Y or a function (M, Y, p, X) -> Y computing Y in-place
  • p: a point on the manifold $\mathcal M$

Keyword arguments

  • acceptance_rate: accept/reject threshold: if ρ (the performance ratio for the iterate) is at least the acceptance rate ρ', the candidate is accepted. This value should be between $0$ and $\frac{1}{4}$
  • augmentation_threshold=0.75: trust-region augmentation threshold: if ρ is larger than this threshold and the solution lies on the trust-region boundary or has negative curvature, the radius is extended (augmented)
  • augmentation_factor=2.0: trust-region augmentation factor
  • evaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.
  • κ=0.1: the linear convergence target rate of the tCG method truncated_conjugate_gradient_descent, and is used in a stopping criterion therein
  • max_trust_region_radius: the maximum trust-region radius
  • preconditioner: a preconditioner for the Hessian H. This is either an allocating function (M, p, X) -> Y or an in-place function (M, Y, p, X) -> Y, see evaluation, and by default set to the identity.
  • project!=copyto!: for numerical stability it is possible to project onto the tangent space after every iteration. the function has to work inplace of Y, that is (M, Y, p, X) -> Y, where X and Y can be the same memory.
  • randomize=false: indicate whether X is initialised to a random vector or not. This disables preconditioning.
  • ρ_regularization=1e3: regularize the performance evaluation $ρ$ to avoid numerical inaccuracies.
  • reduction_factor=0.25: trust-region reduction factor
  • reduction_threshold=0.1: trust-region reduction threshold: if ρ is below this threshold, the trust region radius is reduced by reduction_factor.
  • retraction_method=default_retraction_method(M, typeof(p)): a retraction $\operatorname{retr}$ to use, see the section on retractions
  • stopping_criterion=StopAfterIteration(1000)|StopWhenGradientNormLess(1e-6): a functor indicating that the stopping criterion is fulfilled
  • sub_kwargs=(;): a named tuple of keyword arguments that are passed to decorate_objective! of the sub solver's objective, decorate_state! of the sub solver's state, and the sub state constructor itself.
  • sub_stopping_criterion=( see truncated_conjugate_gradient_descent): a functor indicating that the stopping criterion is fulfilled
  • sub_problem=DefaultManoptProblem(M,ConstrainedManifoldObjective(subcost, subgrad; evaluation=evaluation)): specify a problem for a solver or a closed form solution function, which can be allocating or in-place.
  • sub_state=QuasiNewtonState: a state to specify the sub solver to use. For a closed form solution, this indicates the type of function; here, QuasiNewtonLimitedMemoryDirectionUpdate with InverseBFGS is used
  • θ=1.0: the superlinear convergence target rate of $1+θ$ of the tCG-method truncated_conjugate_gradient_descent, and is used in a stopping criterion therein
  • trust_region_radius=injectivity_radius(M) / 4: the initial trust-region radius

For the case that no Hessian is provided, the Hessian is computed using finite differences, see ApproxHessianFiniteDifference.

All other keyword arguments are passed to decorate_state! for state decorators or decorate_objective! for objective decorators, respectively.

Output

The obtained approximate minimizer $p^*$. To obtain the whole final state of the solver, see get_solver_return for details, especially the return_state= keyword.

See also

truncated_conjugate_gradient_descent

source

State

Manopt.TrustRegionsStateType
TrustRegionsState <: AbstractHessianSolverState

Store the state of the trust-regions solver.

Fields

  • acceptance_rate: a lower bound of the performance ratio for the iterate that decides if the iteration is accepted or not.
  • HX, HY, HZ: interim storage (to avoid allocation) of $\operatorname{Hess} f(p)[⋅]$ applied to X, Y, Z
  • max_trust_region_radius: the maximum trust-region radius
  • p::P: a point on the manifold $\mathcal M$ storing the current iterate
  • project!: for numerical stability it is possible to project onto the tangent space after every iteration. the function has to work inplace of Y, that is (M, Y, p, X) -> Y, where X and Y can be the same memory.
  • stop::StoppingCriterion: a functor indicating that the stopping criterion is fulfilled
  • randomize: indicate whether X is initialised to a random vector or not
  • ρ_regularization: regularize the model fitness $ρ$ to avoid division by zero
  • sub_problem::Union{AbstractManoptProblem, F}: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.
  • sub_state::Union{AbstractManoptSolverState, F}: a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.
  • σ: Gaussian standard deviation when creating the random initial tangent vector. This field has no effect when randomize is false.
  • trust_region_radius: the trust-region radius
  • X::T: a tangent vector at the point $p$ on the manifold $\mathcal M$
  • Y: the solution (tangent vector) of the subsolver
  • Z: the Cauchy point (only used if random is activated)

Constructors

TrustRegionsState(M, mho::AbstractManifoldHessianObjective; kwargs...)
TrustRegionsState(M, sub_problem, sub_state; kwargs...)
TrustRegionsState(M, sub_problem; evaluation=AllocatingEvaluation(), kwargs...)

create a trust region state.

  • given an AbstractManifoldHessianObjective mho, the default sub solver, a TruncatedConjugateGradientState with mho used to define the problem on a tangent space, is created
  • given a sub_problem and an evaluation= keyword, the sub problem solver is assumed to be the closed form solution, where evaluation determines how to call the sub function.

Input

  • M::AbstractManifold: a Riemannian manifold $\mathcal M$
  • sub_problem: specify a problem for a solver or a closed form solution function, which can be allocating or in-place.
  • sub_state: a state to specify the sub solver to use. For a closed form solution, this indicates the type of function.

Keyword arguments

  • acceptance_rate=0.1
  • max_trust_region_radius=sqrt(manifold_dimension(M))
  • p=rand(M): a point on the manifold $\mathcal M$ to specify the initial value
  • project!=copyto!
  • stopping_criterion=StopAfterIteration(1000)|StopWhenGradientNormLess(1e-6): a functor indicating that the stopping criterion is fulfilled
  • randomize=false
  • ρ_regularization=10000.0
  • θ=1.0
  • trust_region_radius=max_trust_region_radius / 8
  • X=zero_vector(M, p): a tangent vector at the point $p$ on the manifold $\mathcal M$ to specify the representation of a tangent vector
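A hedged sketch of constructing such a state directly from a Hessian objective (the ManifoldHessianObjective constructor and the trivial cost below are illustrative; for most purposes calling trust_regions directly is sufficient):

```julia
using Manopt, Manifolds

M = Sphere(2)
# a (trivial) objective, just to illustrate state construction
f(M, p) = 0.0
grad_f(M, p) = zero_vector(M, p)
Hess_f(M, p, X) = zero_vector(M, p)
mho = ManifoldHessianObjective(f, grad_f, Hess_f)

# the default sub solver (truncated CG) is set up internally
trs = TrustRegionsState(M, mho; trust_region_radius=0.5)
```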

See also

trust_regions

source


Approximation of the Hessian

Several different methods to approximate the Hessian are available.

Manopt.ApproxHessianFiniteDifferenceType
ApproxHessianFiniteDifference{E, P, T, G, RTR, VTR, R <: Real} <: AbstractApproxHessian

A functor to approximate the Hessian by a finite difference of gradient evaluation.

Given a point p and a direction X and the gradient $\operatorname{grad} f(p)$ of a function $f$, the Hessian is approximated as follows: let $c$ be a stepsize, $X ∈ T_{p}\mathcal M$ a tangent vector, and $q = \operatorname{retr}_p(\frac{c}{\lVert X \rVert_p}X)$ be a step in direction $X$ of length $c$ following a retraction. Then the Hessian is approximated by the finite difference of the gradients, where $\mathcal T_{⋅←⋅}$ is a vector transport.

\[\operatorname{Hess}f(p)[X] ≈ \frac{\lVert X \rVert_p}{c}\Bigl( \mathcal T_{p\gets q}\bigl(\operatorname{grad}f(q)\bigr) - \operatorname{grad}f(p)\Bigr)\]
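A minimal sketch of this formula on the sphere $S^2 ⊂ ℝ^3$, using projection onto $T_p S^2$ as a stand-in for the vector transport $\mathcal T_{p\gets q}$ and retraction by normalization; both are illustrative choices, not fixed by the formula:

```julia
using LinearAlgebra

A = Diagonal([2.0, 1.0, 0.5])
f(p) = p' * A * p
grad_f(p) = 2 * (A * p - (p' * A * p) * p)   # Riemannian gradient on S²

retract(p, X) = normalize(p + X)             # retraction by normalization
project(p, X) = X - (p' * X) * p             # projection onto T_p S² (transport stand-in)

function approx_hess(p, X; c=1e-4)
    nX = norm(X)
    nX == 0 && return zero(X)
    q = retract(p, (c / nX) * X)             # step of length c in direction X
    return (nX / c) * (project(p, grad_f(q)) - grad_f(p))
end

p = normalize([1.0, 1.0, 1.0])
X = project(p, [1.0, 0.0, 0.0])
Y = approx_hess(p, X)   # ≈ Hess f(p)[X] = 2(AX - (pᵀAX)p - (pᵀAp)X)
```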

Fields

Internal temporary fields

  • grad_tmp: a temporary storage for the gradient at the current p
  • grad_dir_tmp: a temporary storage for the gradient at the current p_dir
  • p_dir::P: a temporary storage to the forward direction (or the $q$ in the formula)

Constructor

ApproxHessianFiniteDifference(M, p, grad_f; kwargs...)

Keyword arguments

source
Manopt.ApproxHessianSymmetricRankOneType
ApproxHessianSymmetricRankOne{E, P, G, T, B<:AbstractBasis{ℝ}, VTR, R<:Real} <: AbstractApproxHessian

A functor to approximate the Hessian by the symmetric rank one update.

Fields

  • gradient!!: the gradient function (either allocating or mutating, see evaluation parameter).
  • ν: a small real number to ensure that the denominator in the update does not become too small and thus the method does not break down.
  • vector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport $\mathcal T_{⋅←⋅}$ to use, see the section on vector transports.

Internal temporary fields

  • p_tmp: a temporary storage for the current point p.
  • grad_tmp: a temporary storage for the gradient at the current p.
  • matrix: a temporary storage for the matrix representation of the approximating operator.
  • basis: a temporary storage for an orthonormal basis at the current p.

Constructor

ApproxHessianSymmetricRankOne(M, p, gradF; kwargs...)

Keyword arguments

  • initial_operator (Matrix{Float64}(I, manifold_dimension(M), manifold_dimension(M))) the matrix representation of the initial approximating operator.
  • basis (DefaultOrthonormalBasis()) an orthonormal basis in the tangent space of the initial iterate p.
  • nu (-1)
  • evaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.
  • vector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport $\mathcal T_{⋅←⋅}$ to use, see the section on vector transports
source
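For intuition, the symmetric rank-one update this functor applies can be sketched in the Euclidean case (Python for illustration; `sr1_update` is a name introduced here, and the skipping safeguard shown is the standard SR1 rule, assumed to correspond to the role of the ν parameter above):

```python
def sr1_update(B, s, y, nu=1e-8):
    # residual r = y - B s measures how far B is from mapping the step s to y
    n = len(s)
    r = [y[i] - sum(B[i][j] * s[j] for j in range(n)) for i in range(n)]
    denom = sum(ri * si for ri, si in zip(r, s))
    norm = lambda v: sum(vi * vi for vi in v) ** 0.5
    # skip the update when the denominator is too small (the role of nu)
    if abs(denom) < nu * norm(r) * norm(s):
        return B
    # rank-one correction: B + r r^T / (r^T s)
    return [[B[i][j] + r[i] * r[j] / denom for j in range(n)] for i in range(n)]

# For a quadratic with Hessian A, y = A s, so one update matches A along s:
B = [[1.0, 0.0], [0.0, 1.0]]
s, y = [1.0, 0.0], [2.0, 0.0]  # y = A s with A = diag(2, 3)
print(sr1_update(B, s, y))  # [[2.0, 0.0], [0.0, 1.0]]
```

In the Riemannian version, the previous point, gradient, and matrix are additionally moved to the current tangent space with the vector transport before this update is applied.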
Manopt.ApproxHessianBFGSType
ApproxHessianBFGS{E, P, G, T, B<:AbstractBasis{ℝ}, VTR, R<:Real} <: AbstractApproxHessian

A functor to approximate the Hessian by the BFGS update.

Fields

  • gradient!!: the gradient function (either allocating or mutating, see evaluation parameter).
  • scale
  • vector_transport_method::AbstractVectorTransportMethodP: a vector transport $\mathcal T_{⋅←⋅}$ to use, see the section on vector transports

Internal temporary fields

  • p_tmp: a temporary storage for the current point p.
  • grad_tmp: a temporary storage for the gradient at the current p.
  • matrix: a temporary storage for the matrix representation of the approximating operator.
  • basis: a temporary storage for an orthonormal basis at the current p.

Constructor

ApproxHessianBFGS(M, p, gradF; kwargs...)

Keyword arguments

  • initial_operator (Matrix{Float64}(I, manifold_dimension(M), manifold_dimension(M))) the matrix representation of the initial approximating operator.
  • basis (DefaultOrthonormalBasis()) an orthonormal basis in the tangent space of the initial iterate p.
  • nu (-1)
  • evaluation=AllocatingEvaluation(): specify whether the functions that return an array, for example a point or a tangent vector, work by allocating its result (AllocatingEvaluation) or whether they modify their input argument to return the result therein (InplaceEvaluation). Since usually the first argument is the manifold, the modified argument is the second.
  • vector_transport_method=default_vector_transport_method(M, typeof(p)): a vector transport $\mathcal T_{⋅←⋅}$ to use, see the section on vector transports
source

as well as their (non-exported) common supertype

Technical details

The trust_regions solver requires the following functions of a manifold to be available

Literature

[ABG06]
P.-A. Absil, C. Baker and K. Gallivan. Trust-Region Methods on Riemannian Manifolds. Foundations of Computational Mathematics 7, 303–330 (2006).
[CGT00]
A. R. Conn, N. I. Gould and P. L. Toint. Trust Region Methods (Society for Industrial and Applied Mathematics, 2000).
diff --git a/dev/tutorials/AutomaticDifferentiation/index.html b/dev/tutorials/AutomaticDifferentiation/index.html index 3c086ee85a..e497bceb14 100644 --- a/dev/tutorials/AutomaticDifferentiation/index.html +++ b/dev/tutorials/AutomaticDifferentiation/index.html @@ -56,4 +56,4 @@ [91a5bcdd] Plots v1.40.9 [731186ca] RecursiveArrayTools v3.27.4 Info Packages marked with ⌅ have new versions available but compatibility constraints restrict them from upgrading. To see why use `status --outdated`
using Dates
-now()
2024-11-21T20:35:25.554
+now()
2024-11-21T20:36:03.876
diff --git a/dev/tutorials/ConstrainedOptimization/index.html b/dev/tutorials/ConstrainedOptimization/index.html index 5c567edfc7..f5de86dee9 100644 --- a/dev/tutorials/ConstrainedOptimization/index.html +++ b/dev/tutorials/ConstrainedOptimization/index.html @@ -29,7 +29,7 @@ # 60 f(x): -0.123557 | Δp : 2.40619e-05 The value of the variable (ϵ) is smaller than or equal to its threshold (1.0e-5). At iteration 68 the algorithm performed a step with a change (7.600544776224794e-11) less than 9.77237220955808e-6. - 6.139017 seconds (18.82 M allocations: 1.489 GiB, 5.76% gc time, 97.49% compilation time)

Now we have both a lower function value and a point that is nearly within the constraints, namely up to numerical inaccuracies

f(M, v1)
-0.12353580883894738
maximum( g(M, v1) )
4.577229036010474e-12

A faster augmented Lagrangian run

Now this is a little slow, so we can modify two things:

  1. Gradients should be evaluated in place, so for example
grad_f!(M, X, p) = project!(M, X, p, -transpose(Z) * p - Z * p);
  2. The constraints are currently always evaluated all together, since the function grad_g always returns a vector of gradients. We first change the constraint function into a vector of functions. We further change the gradient into a vector of gradient functions $\operatorname{grad} g_i, i=1,\ldots,d$, which are also computed in place.
g2 = [(M, p) -> -p[i] for i in 1:d];
+  6.361862 seconds (18.72 M allocations: 1.484 GiB, 5.99% gc time, 97.60% compilation time)

 grad_g2! = [
     (M, X, p) -> project!(M, X, p, [i == j ? -1.0 : 0.0 for j in 1:d]) for i in 1:d
 ];
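The allocating versus in-place distinction the tutorial exploits here can be sketched independently of Manopt (Python for illustration; in the Julia code above, `grad_f!` mutates its argument `X` in the same way):

```python
# Allocating pattern: each call creates and returns a fresh container.
def grad_f(p):
    return [-2.0 * x for x in p]

# In-place pattern: the caller supplies the output buffer X, which is
# overwritten and returned; no new container is created per call.
def grad_f_inplace(X, p):
    for i, x in enumerate(p):
        X[i] = -2.0 * x
    return X

p = [1.0, 2.0, 3.0]
X = [0.0, 0.0, 0.0]
grad_f_inplace(X, p)
print(X == grad_f(p))  # True: identical result, but X is reused across calls
```

Reusing one buffer across all iterations is what removes most of the allocations reported in the timings here.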

We obtain

@time v2 = augmented_Lagrangian_method(
@@ -44,7 +44,7 @@
 # 60    f(x): -0.123557 | Δp : 2.40619e-05
 The value of the variable (ϵ) is smaller than or equal to its threshold (1.0e-5).
 At iteration 68 the algorithm performed a step with a change (7.600544776224794e-11) less than 9.77237220955808e-6.
-  2.378452 seconds (7.40 M allocations: 748.106 MiB, 3.43% gc time, 94.95% compilation time)

As a technical remark: note that (by default) the change to InplaceEvaluations affects both the constrained solver as well as the inner solver of the subproblem in each iteration.

f(M, v2)
-0.12353580883894738
maximum(g(M, v2))
4.577229036010474e-12

These values are very similar to the previous ones, but the solver took much less time and needed far fewer memory allocations.

Exact penalty method

As a second solver, we have the Exact Penalty Method, which is currently available with two smoothing variants, LogarithmicSumOfExponentials and LinearQuadraticHuber; these turn the subproblem into a smooth optimization problem, which is by default again solved with [quasi Newton]. We compare both here as well. The first smoothing technique is the default, so we can just call

@time v3 = exact_penalty_method(
+  2.529631 seconds (7.30 M allocations: 743.027 MiB, 3.27% gc time, 95.00% compilation time)

     M, f, grad_f!, p0; g=g2, grad_g=grad_g2!, evaluation=InplaceEvaluation(),
     debug=[:Iteration, :Cost, :Stop, " | ", :Change, 50, "\n"],
 );
Initial f(x): 0.005667 | 
@@ -52,7 +52,7 @@
 # 100   f(x): -0.123555 | Last Change: 0.013515
 The value of the variable (ϵ) is smaller than or equal to its threshold (1.0e-6).
 At iteration 102 the algorithm performed a step with a change (3.0244885037602495e-7) less than 1.0e-6.
-  2.743942 seconds (14.51 M allocations: 4.764 GiB, 8.96% gc time, 65.84% compilation time)

We obtain a similar cost value as for the Augmented Lagrangian Solver from before, but here the constraint is actually fulfilled and not just numerically “on the boundary”.

f(M, v3)
-0.12355544268449432
maximum(g(M, v3))
-3.589798060999793e-6
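For intuition on the smoothing: the log-sum-exp surrogate replaces the nonsmooth $\max\{0, x\}$ in the penalty by a smooth function. A sketch (Python for illustration; the form $u\log(1+\mathrm{e}^{x/u})$ is the standard smoothing from the literature and is assumed here to be what LogarithmicSumOfExponentials uses):

```python
from math import exp, log

def smooth_max0(x, u):
    # u * log(1 + exp(x / u)) -> max(0, x) as the smoothing parameter u -> 0
    return u * log(1.0 + exp(x / u))

# Already for moderate u the surrogate is close to max(0, x):
print(smooth_max0(1.0, 0.1))   # close to 1.0
print(smooth_max0(-1.0, 0.1))  # small and positive, close to 0.0
```

Because the surrogate stays strictly positive for infeasible points, the smoothed penalty keeps pushing the iterates toward the constraint set as $u$ is decreased.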

The second smoothing technique is often beneficial, when we have a lot of constraints (in the previously mentioned vectorial manner), since we can avoid several gradient evaluations for the constraint functions here. This leads to a faster iteration time.

@time v4 = exact_penalty_method(
+  2.873317 seconds (14.51 M allocations: 4.764 GiB, 9.05% gc time, 65.31% compilation time)

     M, f, grad_f!, p0; g=g2, grad_g=grad_g2!,
     evaluation=InplaceEvaluation(),
     smoothing=LinearQuadraticHuber(),
@@ -62,9 +62,9 @@
 # 100   f(x): -0.123557 | Last Change: 0.000026
 The value of the variable (ϵ) is smaller than or equal to its threshold (1.0e-6).
 At iteration 101 the algorithm performed a step with a change (1.0069976577931588e-8) less than 1.0e-6.
-  2.161071 seconds (9.44 M allocations: 2.176 GiB, 6.59% gc time, 84.28% compilation time)

For the result we see the same behaviour as for the other smoothing.

f(M, v4)
-0.12355667846565418
maximum(g(M, v4))
2.6974802196316014e-8

Comparing to the unconstrained solver

We can compare this to the global optimum on the sphere, which is the unconstrained optimisation problem, where we can just use Quasi Newton.

Note that this is much faster, since every iteration of the algorithm does a quasi-Newton call as well.

@time w1 = quasi_Newton(
+  2.168971 seconds (9.44 M allocations: 2.176 GiB, 6.07% gc time, 83.55% compilation time)

     M, f, grad_f!, p0; evaluation=InplaceEvaluation()
-);
  0.740804 seconds (1.92 M allocations: 115.362 MiB, 2.26% gc time, 96.83% compilation time)
f(M, w1)
-0.13990874034056555

But of course the constraints are not fulfilled here, and we have quite positive entries in $g(w_1)$

maximum(g(M, w1))
0.11803200739746737

Technical details

This tutorial is cached. It was last run on the following package versions.

using Pkg
+);
  0.743936 seconds (1.92 M allocations: 115.373 MiB, 2.10% gc time, 99.02% compilation time)
 Pkg.status()
Status `~/work/Manopt.jl/Manopt.jl/tutorials/Project.toml`
   [6e4b80f9] BenchmarkTools v1.5.0
 ⌅ [5ae59095] Colors v0.12.11
@@ -79,4 +79,4 @@
   [91a5bcdd] Plots v1.40.9
   [731186ca] RecursiveArrayTools v3.27.4
 Info Packages marked with ⌅ have new versions available but compatibility constraints restrict them from upgrading. To see why use `status --outdated`
using Dates
-now()
2024-11-21T20:35:55.524

Literature

[BH19]
R. Bergmann and R. Herzog. Intrinsic formulation of KKT conditions and constraint qualifications on smooth manifolds. SIAM Journal on Optimization 29, 2423–2444 (2019), arXiv:1804.06214.
[LB19]
C. Liu and N. Boumal. Simple algorithms for optimization on Riemannian manifolds with constraints. Applied Mathematics & Optimization (2019), arXiv:1901.10000.
+now()
2024-11-21T20:36:34.360

diff --git a/dev/tutorials/CountAndCache/index.html b/dev/tutorials/CountAndCache/index.html index bb07e46c51..e9ed2e3bdc 100644 --- a/dev/tutorials/CountAndCache/index.html +++ b/dev/tutorials/CountAndCache/index.html @@ -97,7 +97,7 @@ count=[:Cost, :Gradient], cache=(:LRU, [:Cost, :Gradient], 25), return_objective=true, -)
  1.364739 seconds (2.40 M allocations: 121.896 MiB, 1.43% gc time, 99.66% compilation time)
+)
  1.343181 seconds (2.39 M allocations: 121.701 MiB, 1.51% gc time, 99.65% compilation time)
 
 ## Cache
   * :Cost     : 25/25 entries of type Float64 used
@@ -113,7 +113,7 @@
     count=[:Cost, :Gradient],
     cache=(:LRU, [:Cost, :Gradient], 25),
     return_objective=true,
-)
  0.789826 seconds (1.22 M allocations: 70.083 MiB, 99.07% compilation time)
+)
  0.790997 seconds (1.22 M allocations: 70.148 MiB, 2.43% gc time, 98.67% compilation time)
 
 ## Cache
   * :Cost     : 25/25 entries of type Float64 used
@@ -160,7 +160,7 @@
     count=[:Cost, :Gradient],
     cache=(:LRU, [:Cost, :Gradient], 2),
     return_objective=true#, return_state=true
-)
  0.604565 seconds (559.16 k allocations: 29.650 MiB, 2.85% gc time, 99.29% compilation time)
+)
  0.579056 seconds (559.15 k allocations: 29.645 MiB, 99.24% compilation time)
 
 ## Cache
   * :Cost     : 2/2 entries of type Float64 used
@@ -199,7 +199,7 @@
     count=[:Cost, :Gradient],
     cache=(:LRU, [:Cost, :Gradient], 25),
     return_objective=true,
-)
  0.504801 seconds (519.16 k allocations: 27.890 MiB, 98.94% compilation time)
+)
  0.518644 seconds (519.16 k allocations: 27.893 MiB, 3.48% gc time, 99.00% compilation time)
 
 ## Cache
   * :Cost     : 25/25 entries of type Float64 used
@@ -225,4 +225,4 @@
   [91a5bcdd] Plots v1.40.9
   [731186ca] RecursiveArrayTools v3.27.4
 Info Packages marked with ⌅ have new versions available but compatibility constraints restrict them from upgrading. To see why use `status --outdated`
using Dates
-now()
2024-11-21T20:36:20.803
+now()
2024-11-21T20:36:59.676
diff --git a/dev/tutorials/EmbeddingObjectives/index.html b/dev/tutorials/EmbeddingObjectives/index.html index 9a10441470..3b45abb448 100644 --- a/dev/tutorials/EmbeddingObjectives/index.html +++ b/dev/tutorials/EmbeddingObjectives/index.html @@ -146,4 +146,4 @@ [91a5bcdd] Plots v1.40.9 [731186ca] RecursiveArrayTools v3.27.4 Info Packages marked with ⌅ have new versions available but compatibility constraints restrict them from upgrading. To see why use `status --outdated`
using Dates
-now()
2024-11-21T20:37:01.720
+now()
2024-11-21T20:37:41.341
[SVG figure diffs for dev/tutorials/EmbeddingObjectives_files/figure-commonmark/ cell-8, cell-9, cell-12, and cell-13 output figures: the plots were regenerated; only element identifiers and coordinates changed.]
diff --git a/dev/tutorials/GeodesicRegression/index.html index 028a7df4c3..84d3f6bf46 100644 --- a/dev/tutorials/GeodesicRegression/index.html +++ b/dev/tutorials/GeodesicRegression/index.html @@ -281,4 +281,4 @@ init_geo3 = geodesic(S, x1[M, :point], x1[M, :vector], dense_t) geo_pts3 = geodesic(S, y3[N, 1][M, :point], y3[N, 1][M, :vector], y3[N, 2]) t3 = y3[N, 2] -geo_conns = shortest_geodesic.(Ref(S), data2, geo_pts3, Ref(0.5 .+ 4*dense_t));

which yields

The third result

Note that the geodesics from the data to the regression geodesic meet at a nearly orthogonal angle.

Acknowledgement. Parts of this tutorial are based on the bachelor thesis of Jeremias Arf.

Literature

[BG18]
R. Bergmann and P.-Y. Gousenbourger. A variational model for data fitting on manifolds by minimizing the acceleration of a Bézier curve. Frontiers in Applied Mathematics and Statistics 4 (2018), arXiv:1807.10090.
[Fle13]
P. T. Fletcher. Geodesic regression and the theory of least squares on Riemannian manifolds. International Journal of Computer Vision 105, 171–185 (2013).
+geo_conns = shortest_geodesic.(Ref(S), data2, geo_pts3, Ref(0.5 .+ 4*dense_t));

diff --git a/dev/tutorials/HowToDebug/index.html b/dev/tutorials/HowToDebug/index.html index 9aef5a6b6e..4dc4ae6706 100644 --- a/dev/tutorials/HowToDebug/index.html +++ b/dev/tutorials/HowToDebug/index.html @@ -131,4 +131,4 @@ [91a5bcdd] Plots v1.40.9 [731186ca] RecursiveArrayTools v3.27.4 Info Packages marked with ⌅ have new versions available but compatibility constraints restrict them from upgrading. To see why use `status --outdated`
using Dates
-now()
2024-11-21T20:37:25.498
+now()
2024-11-21T20:38:05.714
diff --git a/dev/tutorials/HowToRecord/index.html b/dev/tutorials/HowToRecord/index.html index f7eb8edcaf..3ea7468126 100644 --- a/dev/tutorials/HowToRecord/index.html +++ b/dev/tutorials/HowToRecord/index.html @@ -455,4 +455,4 @@ [91a5bcdd] Plots v1.40.9 [731186ca] RecursiveArrayTools v3.27.4 Info Packages marked with ⌅ have new versions available but compatibility constraints restrict them from upgrading. To see why use `status --outdated`
using Dates
-now()
2024-11-21T20:37:59.901
+now()
2024-11-21T20:38:39.559
diff --git a/dev/tutorials/ImplementASolver/index.html b/dev/tutorials/ImplementASolver/index.html index 7905e9d9bb..53dbe726cc 100644 --- a/dev/tutorials/ImplementASolver/index.html +++ b/dev/tutorials/ImplementASolver/index.html @@ -112,4 +112,4 @@ [91a5bcdd] Plots v1.40.9 [731186ca] RecursiveArrayTools v3.27.4 Info Packages marked with ⌅ have new versions available but compatibility constraints restrict them from upgrading. To see why use `status --outdated`
using Dates
-now()
2024-11-21T20:38:16.611
+now()
2024-11-21T20:38:57.087
diff --git a/dev/tutorials/ImplementOwnManifold/index.html b/dev/tutorials/ImplementOwnManifold/index.html index 7f309cf1aa..782612ef67 100644 --- a/dev/tutorials/ImplementOwnManifold/index.html +++ b/dev/tutorials/ImplementOwnManifold/index.html @@ -78,4 +78,4 @@ [91a5bcdd] Plots v1.40.9 [731186ca] RecursiveArrayTools v3.27.4 Info Packages marked with ⌅ have new versions available but compatibility constraints restrict them from upgrading. To see why use `status --outdated`
using Dates
-now()
2024-11-21T20:38:39.906

Literature

[Kar77]
H. Karcher. Riemannian center of mass and mollifier smoothing. Communications on Pure and Applied Mathematics 30, 509–541 (1977).
+now()
2024-11-21T20:39:21.777

diff --git a/dev/tutorials/InplaceGradient/index.html b/dev/tutorials/InplaceGradient/index.html index dca408363e..ed1393a09c 100644 --- a/dev/tutorials/InplaceGradient/index.html +++ b/dev/tutorials/InplaceGradient/index.html @@ -60,4 +60,4 @@ [3362f125] ManifoldsBase v0.15.10 [0fc0a36d] Manopt v0.4.63 `..` [91a5bcdd] Plots v1.40.4
using Dates
-now()
2024-05-26T13:52:05.613
+now()
2024-05-26T13:52:05.613
diff --git a/dev/tutorials/Optimize/index.html b/dev/tutorials/Optimize/index.html index 7f57bec312..7867ee00b4 100644 --- a/dev/tutorials/Optimize/index.html +++ b/dev/tutorials/Optimize/index.html @@ -100,4 +100,4 @@ [91a5bcdd] Plots v1.40.9 [731186ca] RecursiveArrayTools v3.27.4 Info Packages marked with ⌅ have new versions available but compatibility constraints restrict them from upgrading. To see why use `status --outdated`
using Dates
-now()
2024-11-21T20:39:21.794

Literature

[AMS08]
P.-A. Absil, R. Mahony and R. Sepulchre. Optimization Algorithms on Matrix Manifolds (Princeton University Press, 2008), available online at press.princeton.edu/chapters/absil/.
[Bac14]
M. Bačák. Computing medians and means in Hadamard spaces. SIAM Journal on Optimization 24, 1542–1566 (2014), arXiv:1210.2145.
[Bou23]
[Car92]
M. P. do Carmo. Riemannian Geometry. Mathematics: Theory & Applications (Birkhäuser Boston, Inc., Boston, MA, 1992); p. xiv+300.
[Kar77]
H. Karcher. Riemannian center of mass and mollifier smoothing. Communications on Pure and Applied Mathematics 30, 509–541 (1977).
+now()
2024-11-21T20:40:06.134

[SVG figure diff for dev/tutorials/Optimize_files/figure-commonmark/cell-23-output-1.svg: the plot was regenerated; only element identifiers and coordinates changed.]
diff --git a/dev/tutorials/StochasticGradientDescent/index.html index cc1502c36b..558444c868 100644 --- a/dev/tutorials/StochasticGradientDescent/index.html +++ b/dev/tutorials/StochasticGradientDescent/index.html @@ -16,62 +16,62 @@ -0.4124602512237471 0.7450900936719854 0.38494647999455556
@benchmark stochastic_gradient_descent($M, $gradF, $p0)
BenchmarkTools.Trial: 1 sample with 1 evaluation.
- Single result which took 6.465 s (7.85% GC) to evaluate,
+ Single result which took 6.745 s (9.20% GC) to evaluate,
  with a memory estimate of 7.83 GiB, over 200213003 allocations.
p_opt2 = stochastic_gradient_descent(M, gradf, p0)
3-element Vector{Float64}:
  0.6828818855405705
  0.17545293717581142
- 0.7091463863243863
@benchmark stochastic_gradient_descent($M, $gradf, $p0)
BenchmarkTools.Trial: 2571 samples with 1 evaluation.
- Range (min … max):  615.879 μs … 14.639 ms  ┊ GC (min … max): 0.00% … 69.36%
- Time  (median):       1.605 ms              ┊ GC (median):    0.00%
- Time  (mean ± σ):     1.943 ms ±  1.134 ms  ┊ GC (mean ± σ):  6.08% ± 11.80%
+ 0.7091463863243863
@benchmark stochastic_gradient_descent($M, $gradf, $p0)
BenchmarkTools.Trial: 2418 samples with 1 evaluation.
+ Range (min … max):  645.651 μs … 13.692 ms  ┊ GC (min … max): 0.00% … 83.74%
+ Time  (median):       1.673 ms              ┊ GC (median):    0.00%
+ Time  (mean ± σ):     2.064 ms ±  1.297 ms  ┊ GC (mean ± σ):  7.64% ± 12.73%
 
-   ▁                               █                            
-  ███▇██▆█▆▇▆▅▆▄▅▅▃▄▄▄▃▃▃▃▃▂▄▂▂▃▃▂▃█▅▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁ ▂
-  616 μs          Histogram: frequency by time         5.44 ms <
+  ▄▆▅▆▄▂▅▂▁▁                 █                                  
+  ███████████▇█▅▆▆▆▅▆▄▄▄▅▄▃▃▇█▆▃▂▂▁▁▁▁▂▁▁▂▁▁▂▂▁▁▁▁▂▁▁▁▁▁▁▂▁▁▁▂ ▃
+  646 μs          Histogram: frequency by time         6.66 ms <
 
  Memory estimate: 861.16 KiB, allocs estimate: 20050.

This result is reasonably close. But we can improve it by using a DirectionUpdateRule, namely:

On the one hand MomentumGradient, which requires both the manifold and the initial value, to keep track of the iterate and parallel transport the last direction to the current iterate. The necessary vector_transport_method keyword is set to a suitable default on every manifold, see default_vector_transport_method. We get

p_opt3 = stochastic_gradient_descent(
     M, gradf, p0; direction=MomentumGradient(; direction=StochasticGradient())
 )
3-element Vector{Float64}:
-  0.375215361477979
- -0.026495079681491125
-  0.9265589259532395
MG = MomentumGradient(; direction=StochasticGradient());
-@benchmark stochastic_gradient_descent($M, $gradf, p=$p0; direction=$MG)
BenchmarkTools.Trial: 833 samples with 1 evaluation.
- Range (min … max):  5.293 ms … 17.501 ms  ┊ GC (min … max): 0.00% … 49.91%
- Time  (median):     5.421 ms              ┊ GC (median):    0.00%
- Time  (mean ± σ):   6.001 ms ±  1.234 ms  ┊ GC (mean ± σ):  8.16% ± 12.26%
+  0.46671468324066123
+ -0.3797901161381924
+  0.7987095042199683
MG = MomentumGradient(; direction=StochasticGradient());
+@benchmark stochastic_gradient_descent($M, $gradf, p=$p0; direction=$MG)
BenchmarkTools.Trial: 758 samples with 1 evaluation.
+ Range (min … max):  5.351 ms … 19.265 ms  ┊ GC (min … max): 0.00% … 49.66%
+ Time  (median):     5.819 ms              ┊ GC (median):    0.00%
+ Time  (mean ± σ):   6.587 ms ±  1.647 ms  ┊ GC (mean ± σ):  9.89% ± 14.09%
 
-  ▆█▆▂                           ▂▃▂▁                         
-  ████▆▁▄▄▁▆▁▄▄▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁██████▆█▁▄▅▁▅▁▁▄▄▅▇▅▁▄▁▄▁▄▁▅ ▇
-  5.29 ms      Histogram: log(frequency) by time     9.56 ms <
+  ▇█▇▇▅▄▄▃▂▁▂▂▁▁                    ▁ ▁▁▂  ▁                  
+  ███████████████▆▅█▇▃▄▃▅▃▃▃▄▄▄▄▆▅▆▆█████▆▆█▃▆▇█▇▅▇▄▅▇▄▅▅▄▃▄ █
+  5.35 ms      Histogram: log(frequency) by time     10.8 ms <
 
  Memory estimate: 7.71 MiB, allocs estimate: 200052.
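Schematically, and in notation introduced only for this sketch (η for the direction, m for the momentum factor, s for the step size, and 𝒯 for the vector transport), a momentum update of this kind combines the transported previous direction with the newly chosen stochastic gradient term:

```latex
\eta_k = m\,\mathcal{T}_{p_k \leftarrow p_{k-1}}(\eta_{k-1}) - s\,\operatorname{grad} f_{i_k}(p_k)
```

Here grad f_{i_k} denotes the gradient component selected by the stochastic rule in step k; the exact signs, defaults, and conventions are those of the MomentumGradient documentation.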

On the other hand, AverageGradient computes an average of the last n gradients. This is done by

p_opt4 = stochastic_gradient_descent(
     M, gradf, p0; direction=AverageGradient(; n=10, direction=StochasticGradient()), debug=[],
 )
3-element Vector{Float64}:
- -0.5636278115277376
-  0.646536380066075
- -0.5141151615382582
AG = AverageGradient(; n=10, direction=StochasticGradient(M));
-@benchmark stochastic_gradient_descent($M, $gradf, p=$p0; direction=$AG, debug=[])
BenchmarkTools.Trial: 238 samples with 1 evaluation.
- Range (min … max):  18.884 ms … 40.784 ms  ┊ GC (min … max): 0.00% … 27.49%
- Time  (median):     19.774 ms              ┊ GC (median):    0.00%
- Time  (mean ± σ):   21.016 ms ±  2.719 ms  ┊ GC (mean ± σ):  7.33% ±  7.23%
+ 0.5834888085913609
+ 0.7756423891832663
+ 0.2406651082951343
AG = AverageGradient(; n=10, direction=StochasticGradient(M));
+@benchmark stochastic_gradient_descent($M, $gradf, p=$p0; direction=$AG, debug=[])
BenchmarkTools.Trial: 205 samples with 1 evaluation.
+ Range (min … max):  20.092 ms … 44.055 ms  ┊ GC (min … max): 0.00% … 38.10%
+ Time  (median):     23.228 ms              ┊ GC (median):    0.00%
+ Time  (mean ± σ):   24.400 ms ±  3.185 ms  ┊ GC (mean ± σ):  8.50% ±  8.32%
 
-  █▇         ▄▇▃ ▂                                             
-  ███▆▄▁▁▁▁▁▁█████▁▁▁▁▁▄▁▄▄▄▁▄▁▁▁▁▄▁▁▁▁▁▄▄▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▄ ▆
-  18.9 ms      Histogram: log(frequency) by time      34.3 ms <
+        ▂▆█              ▁▁                                    
+  ▃▃▁▃▄▅███▆▃▂▂▁▁▁▁▁▂▃▄▅████▆▅▃▂▂▃▁▂▁▁▂▁▁▁▂▁▁▁▁▂▁▂▁▁▁▁▁▁▂▁▁▁▂ ▃
+  20.1 ms         Histogram: frequency by time        34.9 ms <
 
  Memory estimate: 21.90 MiB, allocs estimate: 600077.
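As a sketch in hypothetical notation (𝒯 for the vector transport, not the exact Manopt.jl formulation): since tangent vectors at different points cannot be added directly, averaging the last n gradients on a manifold requires transporting them to the current iterate first,

```latex
\eta_k = \frac{1}{n}\sum_{j=0}^{n-1} \mathcal{T}_{p_k \leftarrow p_{k-j}}\bigl(\operatorname{grad} f_{i_{k-j}}(p_{k-j})\bigr)
```

where grad f_{i_{k-j}} is the stochastic gradient term chosen in step k−j.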

Note that the default StoppingCriterion is a fixed number of iterations, which helps the comparison here.

For both update rules we have to internally specify that we are still in the stochastic setting, since both rules can also be used with the IdentityUpdateRule within gradient_descent.

For this not-too-large example we can of course also use gradient descent with an ArmijoLinesearch,

fullGradF(M, p) = 1/n*sum(grad_distance(M, q, p) for q in data)
 p_opt5 = gradient_descent(M, F, fullGradF, p0; stepsize=ArmijoLinesearch())
3-element Vector{Float64}:
   0.7050420977039097
  -0.006374163035874202
   0.7091368066253959

but in general it is expected to be a bit slower.

AL = ArmijoLinesearch();
-@benchmark gradient_descent($M, $F, $fullGradF, $p0; stepsize=$AL)
BenchmarkTools.Trial: 25 samples with 1 evaluation.
- Range (min … max):  202.667 ms … 223.306 ms  ┊ GC (min … max): 6.49% … 4.71%
- Time  (median):     205.968 ms               ┊ GC (median):    7.59%
- Time  (mean ± σ):   207.513 ms ±   4.955 ms  ┊ GC (mean ± σ):  7.56% ± 0.91%
+@benchmark gradient_descent($M, $F, $fullGradF, $p0; stepsize=$AL)
BenchmarkTools.Trial: 23 samples with 1 evaluation.
+ Range (min … max):  215.369 ms … 243.399 ms  ┊ GC (min … max): 8.75% … 4.88%
+ Time  (median):     219.790 ms               ┊ GC (median):    9.23%
+ Time  (mean ± σ):   221.107 ms ±   6.691 ms  ┊ GC (mean ± σ):  9.09% ± 1.34%
 
-  █▁▁▁▁▁ ████ █▁  ▁      ▁ ▁▁ ▁                   ▁           ▁  
-  ██████▁████▁██▁▁█▁▁▁▁▁▁█▁██▁█▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁█▁▁▁▁▁▁▁▁▁▁▁█ ▁
-  203 ms           Histogram: frequency by time          223 ms <
+  █  █  ▃  ▃        ▃                                            
+  █▁▇█▇▁█▁▇█▇▇▇▁▇▁▇▁█▇▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▇▁▁▁▁▁▁▁▁▁▁▁▁▇ ▁
+  215 ms           Histogram: frequency by time          243 ms <
 
  Memory estimate: 230.56 MiB, allocs estimate: 6338502.
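For reference, the classical Riemannian Armijo condition behind such a linesearch (stated here in generic notation, not the exact Manopt.jl formulation) accepts a step size s for a descent direction η at a point p whenever

```latex
f\bigl(\operatorname{retr}_{p}(s\,\eta)\bigr) \le f(p) + c\,s\,\bigl\langle \operatorname{grad} f(p),\, \eta \bigr\rangle_{p}
```

where retr is a retraction and c ∈ (0,1) a sufficient-decrease constant; s is shrunk by a backtracking factor until the condition holds. The actual constants and defaults are documented for ArmijoLinesearch.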

Technical details

This tutorial is cached. It was last run on the following package versions.

using Pkg
 Pkg.status()
Status `~/work/Manopt.jl/Manopt.jl/tutorials/Project.toml`
@@ -88,4 +88,4 @@
   [91a5bcdd] Plots v1.40.9
   [731186ca] RecursiveArrayTools v3.27.4
 Info Packages marked with ⌅ have new versions available but compatibility constraints restrict them from upgrading. To see why use `status --outdated`
using Dates
-now()
2024-11-21T20:40:54.968
+now()
2024-11-21T20:41:43.615