This repository has been archived by the owner on Dec 20, 2024. It is now read-only.

Merge branch 'main' into enh/bold-registration-experiment
oesteban authored Dec 19, 2024
2 parents 73337c0 + 3cba47d commit efc305e
Showing 6 changed files with 541 additions and 26 deletions.
4 changes: 4 additions & 0 deletions README.rst
@@ -47,9 +47,13 @@ More recently, Cieslak et al. [#r3]_ integrated both approaches in *SHORELine*,
the work of ``eddy`` and *SHORELine*, while generalizing these methods to multiple acquisition schemes
(single-shell, multi-shell, and diffusion spectrum imaging) using diffusion models available with DIPY [#r5]_.

.. BEGIN FLOWCHART
.. image:: https://raw.githubusercontent.com/nipreps/eddymotion/507fc9bab86696d5330fd6a86c3870968243aea8/docs/_static/eddymotion-flowchart.svg
:alt: The eddymotion flowchart

.. END FLOWCHART
.. [#r1] S. Ben-Amitay et al., Motion correction and registration of high b-value diffusion weighted images, Magnetic
Resonance in Medicine 67:1694–1702 (2012)
.. [#r2] J. L. R. Andersson. et al., An integrated approach to correction for off-resonance effects and subject movement
30 changes: 30 additions & 0 deletions docs/conf.py
@@ -234,6 +234,14 @@
apidoc_separate_modules = True
apidoc_extra_args = ["--module-first", "-d 1", "-T"]


# -- Options for autodoc extension -------------------------------------------
autodoc_default_options = {
"special-members": "__call__, __len__",
}
autoclass_content = "both"


# -- Options for intersphinx extension ---------------------------------------

# Example configuration for intersphinx: refer to the Python standard library.
@@ -253,3 +261,25 @@

# -- Options for versioning extension ----------------------------------------
scv_show_banner = True


# -- Special functions -------------------------------------------------------
import inspect


def autodoc_process_signature(app, what, name, obj, options, signature, return_annotation):
"""Replace the class signature by the signature from cls.__init__"""

if what == "class" and hasattr(obj, "__init__"):
try:
init_signature = inspect.signature(obj.__init__)
# Convert the Signature object to a string
return str(init_signature), return_annotation
except ValueError:
# Handle cases where `inspect.signature` fails
return signature, return_annotation
return signature, return_annotation


def setup(app):
app.connect("autodoc-process-signature", autodoc_process_signature)
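
To illustrate what the hook above does, consider a hypothetical class (not part of eddymotion): with ``autoclass_content = "both"`` and the handler registered in ``setup()``, the class entry is rendered with the parameter list taken from ``__init__`` rather than the bare class name::

    import inspect


    class Kernel:
        """A toy class, used only to show the signature substitution."""

        def __init__(self, beta_a: float = 0.01, beta_l: float = 2.0):
            self.beta_a = beta_a
            self.beta_l = beta_l


    # This mirrors what autodoc_process_signature returns when ``what == "class"``
    print(str(inspect.signature(Kernel.__init__)))
    # (self, beta_a: float = 0.01, beta_l: float = 2.0)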
5 changes: 2 additions & 3 deletions docs/index.rst
@@ -1,9 +1,8 @@
.. include:: links.rst
.. include:: ../README.rst
:end-line: 29
:end-before: BEGIN FLOWCHART
.. include:: ../README.rst
:start-line: 34

:start-after: END FLOWCHART

.. image:: _static/eddymotion-flowchart.svg
:alt: The eddymotion flowchart
2 changes: 1 addition & 1 deletion docs/installation.rst
@@ -5,7 +5,7 @@ Installation
============
Make sure all of *eddymotion*'s `External Dependencies`_ are installed.

On a functional Python 3.7 (or above) environment with ``pip`` installed,
On a functional Python 3.10 (or above) environment with ``pip`` installed,
*eddymotion* can be installed using the habitual command ::

$ python -m pip install eddymotion
485 changes: 485 additions & 0 deletions docs/notebooks/dwi_gp_estimation.ipynb

Large diffs are not rendered by default.

41 changes: 19 additions & 22 deletions src/eddymotion/model/gpr.py
@@ -64,7 +64,7 @@

class EddyMotionGPR(GaussianProcessRegressor):
r"""
A GP regressor specialized for eddymotion.
A Gaussian process (GP) regressor specialized for eddymotion.
This specialization of the default GP regressor is created to allow
the following extended behaviors:
@@ -80,22 +80,21 @@ class EddyMotionGPR(GaussianProcessRegressor):
In principle, Scikit-Learn's implementation normalizes the training data
as in [Andersson15]_ (see
`FSL's souce code <https://git.fmrib.ox.ac.uk/fsl/eddy/-/blob/2480dda293d4cec83014454db3a193b87921f6b0/DiffusionGP.cpp#L218>`__).
`FSL's source code <https://git.fmrib.ox.ac.uk/fsl/eddy/-/blob/2480dda293d4cec83014454db3a193b87921f6b0/DiffusionGP.cpp#L218>`__).
From their paper (p. 167, end of first column):
Typically one just substracts the mean (:math:`\bar{\mathbf{f}}`)
Typically one just subtracts the mean (:math:`\bar{\mathbf{f}}`)
from :math:`\mathbf{f}` and then add it back to
:math:`f^{*}`, which is analogous to what is often done in
"traditional" regression.
Finally, the parameter :math:`\sigma^2` maps on to Scikit-learn's ``alpha``
of the regressor.
Because it is not a parameter of the kernel, hyperparameter selection
through gradient-descent with analytical gradient calculations
would not work (the derivative of the kernel w.r.t. alpha is zero).
of the regressor. Because it is not a parameter of the kernel, hyperparameter
selection through gradient-descent with analytical gradient calculations
would not work (the derivative of the kernel w.r.t. ``alpha`` is zero).
I believe this is overlooked in [Andersson15]_, or they actually did not
use analytical gradient-descent:
This might have been overlooked in [Andersson15]_, or else they actually did
not use analytical gradient-descent:
*A note on optimisation*
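
A minimal sketch of the consequence described above (using scikit-learn's stock ``GaussianProcessRegressor`` and ``RBF`` kernel rather than eddymotion's classes; data are synthetic): because ``alpha`` is not a kernel hyperparameter, the analytical gradient-based optimizer never adjusts it, so one plain alternative is to compare the fitted log-marginal likelihood over a handful of candidate values::

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(20, 3)), rng.normal(size=20)

    # ``alpha`` (sigma^2) sits outside the kernel, so it has no analytical gradient;
    # evaluate a few candidates by hand and keep the best-scoring one.
    scores = {}
    for alpha in (1e-3, 1e-2, 1e-1, 1.0):
        gpr = GaussianProcessRegressor(kernel=RBF(), alpha=alpha).fit(X, y)
        scores[alpha] = gpr.log_marginal_likelihood_value_

    best_alpha = max(scores, key=scores.get)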
@@ -266,7 +265,6 @@ def __init__(
l_bounds: tuple[float, float] = BOUNDS_LAMBDA,
):
r"""
Initialize an exponential Kriging kernel.
Parameters
----------
@@ -275,7 +273,7 @@ def __init__(
beta_l : :obj:`float`, optional
The :math:`\lambda` hyperparameter.
a_bounds : :obj:`tuple`, optional
Bounds for the a parameter.
Bounds for the ``a`` parameter.
l_bounds : :obj:`tuple`, optional
Bounds for the :math:`\lambda` hyperparameter.
@@ -290,7 +288,7 @@ def hyperparameter_a(self) -> Hyperparameter:
return Hyperparameter("beta_a", "numeric", self.a_bounds)

@property
def hyperparameter_beta_l(self) -> Hyperparameter:
def hyperparameter_l(self) -> Hyperparameter:
return Hyperparameter("beta_l", "numeric", self.l_bounds)

def __call__(
@@ -312,10 +310,10 @@ def __call__(
Returns
-------
K : ndarray of shape (n_samples_X, n_samples_Y)
K : :obj:`~numpy.ndarray` of shape (n_samples_X, n_samples_Y)
Kernel k(X, Y)
K_gradient : ndarray of shape (n_samples_X, n_samples_X, n_dims),\
K_gradient : :obj:`~numpy.ndarray` of shape (n_samples_X, n_samples_X, n_dims),\
optional
The gradient of the kernel k(X, X) with respect to the log of the
hyperparameter of the kernel. Only returned when `eval_gradient`
@@ -343,12 +341,12 @@ def diag(self, X: np.ndarray) -> np.ndarray:
Parameters
----------
X : ndarray of shape (n_samples_X, n_features)
X : :obj:`~numpy.ndarray` of shape (n_samples_X, n_features)
Left argument of the returned kernel k(X, Y)
Returns
-------
K_diag : ndarray of shape (n_samples_X,)
K_diag : :obj:`~numpy.ndarray` of shape (n_samples_X,)
Diagonal of kernel k(X, X)
"""
return self.beta_l * np.ones(X.shape[0])
@@ -372,7 +370,6 @@ def __init__(
l_bounds: tuple[float, float] = BOUNDS_LAMBDA,
):
r"""
Initialize a spherical Kriging kernel.
Parameters
----------
@@ -396,7 +393,7 @@ def hyperparameter_a(self) -> Hyperparameter:
return Hyperparameter("beta_a", "numeric", self.a_bounds)

@property
def hyperparameter_beta_l(self) -> Hyperparameter:
def hyperparameter_l(self) -> Hyperparameter:
return Hyperparameter("beta_l", "numeric", self.l_bounds)

def __call__(
@@ -418,10 +415,10 @@
Returns
-------
K : ndarray of shape (n_samples_X, n_samples_Y)
K : :obj:`~numpy.ndarray` of shape (n_samples_X, n_samples_Y)
Kernel k(X, Y)
K_gradient : ndarray of shape (n_samples_X, n_samples_X, n_dims),\
K_gradient : :obj:`~numpy.ndarray` of shape (n_samples_X, n_samples_X, n_dims),\
optional
The gradient of the kernel k(X, X) with respect to the log of the
hyperparameter of the kernel. Only returned when ``eval_gradient``
@@ -454,12 +451,12 @@ def diag(self, X: np.ndarray) -> np.ndarray:
Parameters
----------
X : ndarray of shape (n_samples_X, n_features)
X : :obj:`~numpy.ndarray` of shape (n_samples_X, n_features)
Left argument of the returned kernel k(X, Y)
Returns
-------
K_diag : ndarray of shape (n_samples_X,)
K_diag : :obj:`~numpy.ndarray` of shape (n_samples_X,)
Diagonal of kernel k(X, X)
"""
return self.beta_l * np.ones(X.shape[0])
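
For orientation, a usage sketch of the classes changed in this file (not taken from the commit; it assumes ``EddyMotionGPR`` keeps ``GaussianProcessRegressor``'s ``kernel`` and ``alpha`` arguments and that the kernels operate on unit gradient directions; all parameter values are illustrative only)::

    import numpy as np

    from eddymotion.model.gpr import EddyMotionGPR, SphericalKriging

    rng = np.random.default_rng(1234)
    bvecs = rng.normal(size=(60, 3))
    bvecs /= np.linalg.norm(bvecs, axis=1, keepdims=True)  # unit gradient directions
    signal = rng.normal(size=60)                           # DWI signal of one voxel

    gpr = EddyMotionGPR(
        kernel=SphericalKriging(beta_a=1.38, beta_l=0.5),
        alpha=0.05,
    )
    gpr.fit(bvecs, signal)
    predicted = gpr.predict(bvecs[:1])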
