This repository has been archived by the owner on Dec 20, 2024. It is now read-only.

DOC: Do not introduce unnecessary line breaks within paragraph
Do not introduce unnecessary line breaks within paragraph.

Take advantage of the commit to use literal highlighting syntax for
the `alpha` parameter for the sake of consistency.
jhlegarreta committed Dec 8, 2024
1 parent bb212ab commit 34d1ff0
Showing 1 changed file with 9 additions and 11 deletions.
20 changes: 9 additions & 11 deletions src/eddymotion/model/gpr.py
@@ -89,10 +89,9 @@ class EddyMotionGPR(GaussianProcessRegressor):
     "traditional" regression.

     Finally, the parameter :math:`\sigma^2` maps on to Scikit-learn's ``alpha``
-    of the regressor.
-    Because it is not a parameter of the kernel, hyperparameter selection
-    through gradient-descent with analytical gradient calculations
-    would not work (the derivative of the kernel w.r.t. alpha is zero).
+    of the regressor. Because it is not a parameter of the kernel, hyperparameter
+    selection through gradient-descent with analytical gradient calculations
+    would not work (the derivative of the kernel w.r.t. ``alpha`` is zero).
     This might have been overlooked in [Andersson15]_, or else they actually did
     not use analytical gradient-descent:
@@ -105,13 +104,12 @@ class EddyMotionGPR(GaussianProcessRegressor):
         The reason for that is that such methods typically use fewer steps, and
         when the cost of calculating the derivatives is small/moderate compared
         to calculating the functions itself (as is the case for Eq. (12)) then
-        execution time can be much shorter.
-        However, we found that for the multi-shell case a heuristic optimisation
-        method such as the Nelder-Mead simplex method (Nelder and Mead, 1965) was
-        frequently better at avoiding local maxima.
-        Hence, that was the method we used for all optimisations in the present
-        paper.
+        execution time can be much shorter. However, we found that for the
+        multi-shell case a heuristic optimisation method such as the Nelder-Mead
+        simplex method (Nelder and Mead, 1965) was frequently better at avoiding
+        local maxima. Hence, that was the method we used for all optimisations
+        in the present paper.

     **Multi-shell regression (TODO).**
     For multi-shell modeling, the kernel :math:`k(\textbf{x}, \textbf{x'})`
     is updated following Eq. (14) in [Andersson15]_.
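The docstring's point can be illustrated with a minimal sketch (not taken from the eddymotion codebase; data and kernel choices here are illustrative assumptions): because ``alpha`` is a parameter of `GaussianProcessRegressor` rather than of the kernel, Scikit-learn's built-in gradient-based optimizer cannot tune it, but a gradient-free method such as Nelder-Mead can maximise the log marginal likelihood over ``alpha`` directly.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy data: noisy sine samples (illustrative only).
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(30, 1))
y = np.sin(3.0 * X).ravel() + 0.1 * rng.standard_normal(30)


def neg_lml(log_alpha):
    """Negative log marginal likelihood as a function of log(alpha)."""
    try:
        gpr = GaussianProcessRegressor(
            kernel=RBF(length_scale=0.5),
            alpha=np.exp(log_alpha[0]),
            optimizer=None,  # keep kernel fixed; only alpha varies here
        ).fit(X, y)
    except np.linalg.LinAlgError:
        # Cholesky failure for a near-singular covariance: infeasible point.
        return np.inf
    return -gpr.log_marginal_likelihood_value_


# alpha has zero kernel gradient, so use a gradient-free simplex search.
res = minimize(neg_lml, x0=[np.log(1e-2)], method="Nelder-Mead")
best_alpha = np.exp(res.x[0])
```

The log-parameterisation keeps ``alpha`` positive throughout the simplex search; this mirrors the strategy the docstring attributes to [Andersson15]_ only in spirit, not in implementation detail.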
