This repository has been archived by the owner on Dec 20, 2024. It is now read-only.

ENH: Adds Sparse Fascicle and Gaussian Process models #60

Merged: 6 commits on Dec 10, 2021
352 changes: 352 additions & 0 deletions docs/notebooks/Testing GP model.ipynb

Large diffs are not rendered by default.

76 changes: 66 additions & 10 deletions eddymotion/model.py
@@ -6,7 +6,7 @@
import nest_asyncio

import numpy as np
from dipy.core.gradients import gradient_table
from dipy.core.gradients import check_multi_b, gradient_table

nest_asyncio.apply()

@@ -25,7 +25,7 @@ def init(gtab, model="DTI", **kwargs):
        An array representing the gradient table in RAS+B format.
    model : :obj:`str`
        Diffusion model.
        Options: ``"3DShore"``, ``"SFM"``, ``"DTI"``, ``"DKI"``, ``"S0"``
        Options: ``"3DShore"``, ``"SFM"``, ``"GP"``, ``"DTI"``, ``"DKI"``, ``"S0"``
    Return
    ------
@@ -53,15 +53,18 @@ def init(gtab, model="DTI", **kwargs):
            "lambdaL": 1e-8,
        }

    elif model.lower().startswith("sfm"):
        from eddymotion.utils.model import (
            SFM4HMC as Model,
            ExponentialIsotropicModel,
        )
    elif model.lower() in ("sfm", "gp"):
        Model = SparseFascicleModel
        param = {"solver": "ElasticNet"}

        param = {
            "isotropic": ExponentialIsotropicModel,
        }
        if model.lower() == "gp":
            from sklearn.gaussian_process import GaussianProcessRegressor
            param = {"solver": GaussianProcessRegressor}
@dPys (Contributor) commented on Dec 7, 2021:
In the long-term, I'd bet we get more miles out of using https://docs.pymc.io/en/v3/Gaussian_Processes.html in place of sklearn's GaussianProcessRegressor but for now this implementation should be fine!?

RE: tuning this, we could start by trying to make alpha very small like what we did with the elastic-net case, but I don't know about the kernel hyperparameters. @arokem -- would the idea here be that by wrapping this in SFM, we'd avoid the need for learning the kernel's shape? The default used for GP in sklearn is ConstantKernel(1.0, constant_value_bounds="fixed") * RBF(1.0, length_scale_bounds="fixed").

If we're sticking with this default, then I think that either way, we should remove the fixed option since "kernel hyperparameters are optimized during fitting unless the bounds are marked as fixed" and from Andersson & Sotiropoulos 2015: "The smoothness is determined from hyperparameters whose values are determined directly from the data" and "the hyperparameters will be such that the predictions are a smooth function of gradient direction and, in the case of multi-shell data, b-value." Also noteworthy was this point: "the data given by original hyperparameters is very sharp and even for a dataset with 300 points will give almost half the weight to the center-point (i.e., when predicting the signal for a diffusion direction half the information comes from the point itself)."

Contributor:
And for selecting the kernel, see https://www.sciencedirect.com/science/article/pii/S1053811915006874?via%3Dihub#:~:text=Finding%20k(x,scaling%20or%20variance. -- they suggest using either an exponential or a spherical kernel and then optimizing it with maximum likelihood.
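For reference, a minimal sketch (not part of this PR) of what dropping the fixed bounds could look like, so that both hyperparameters are re-estimated from the data as suggested above; the starting values and alpha are illustrative:

```python
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Same form as sklearn's default kernel, but without constant_value_bounds
# and length_scale_bounds pinned to "fixed", so both hyperparameters are
# optimized (by maximizing the log-marginal likelihood) during fit().
kernel = ConstantKernel(1.0) * RBF(length_scale=1.0)
gpr = GaussianProcessRegressor(kernel=kernel, alpha=1e-6, normalize_y=True)
```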


        multi_b = check_multi_b(gtab, 2, non_zero=False)
        if multi_b:
            from dipy.reconst.sfm import ExponentialIsotropicModel
            param.update({"isotropic": ExponentialIsotropicModel})

    elif model.lower() in ("dti", "dki"):
        Model = DTIModel if model.lower() == "dti" else DKIModel
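As context for the `check_multi_b` call added above: with `non_zero=False`, the b=0 measurements count toward the tally of distinct b-values, so even single-shell data with b=0 volumes enables the isotropic component. A hedged illustration with made-up values (not part of the PR):

```python
from dipy.core.gradients import check_multi_b, gradient_table

bvals = [0, 1000, 1000, 1000]  # illustrative single-shell scheme with one b=0
bvecs = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]]
gtab = gradient_table(bvals, bvecs)

# The b=0 volume counts as a distinct b-value here, so the check passes.
assert check_multi_b(gtab, 2, non_zero=False)
```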
@@ -332,6 +335,59 @@ def predict(self, gradient, **kwargs):
        return retval


class SparseFascicleModel:
    """
    A wrapper of :obj:`dipy.reconst.sfm.SparseFascicleModel`.
    """

    __slots__ = ("_model", "_S0", "_mask", "_solver")

    def __init__(self, gtab, S0=None, mask=None, solver=None, **kwargs):
        """Instantiate the wrapped model."""
        from dipy.reconst.sfm import SparseFascicleModel

        self._S0 = None
        if S0 is not None:
            self._S0 = np.clip(
                S0.astype("float32") / S0.max(),
                a_min=1e-5,
                a_max=1.0,
            )

        self._mask = mask
        if mask is None and S0 is not None:
            self._mask = self._S0 > np.percentile(self._S0, 35)

        if self._mask is not None:
            self._S0 = self._S0[self._mask.astype(bool)]

        self._solver = solver
        if solver is None:
            self._solver = "ElasticNet"

        kwargs = {k: v for k, v in kwargs.items() if k in ("solver",)}
        self._model = SparseFascicleModel(gtab, **kwargs)

    def fit(self, data, **kwargs):
        """Clean-up permitted args and kwargs, and call model's fit."""
        self._model = self._model.fit(data[self._mask, ...])
Collaborator:
For parallelization, we can divide the data up into chunks before calling this, and then call this separately on each chunk (on a separate thread/process)

Contributor (PR author):
Thanks @arokem, I'm going to hand it over to @dPys to work on the chunking in a separate PR

Contributor:
@arokem and @oesteban -- Did we ever decide on whether the best approach for this would be to migrate @sebastientourbier's asyncio parallelization routines for the DTI model to a base class for parallel fit/predict that could be imported into each of the SFM and DKI APIs as well?

On a separate note-- would it be better to stick with asyncio, switch to joblib, or expose both approaches as optional?

Collaborator:
I would stick with a copy of Sebastien's solution here. Let's worry about a more elegant OO design after we see that this works well here.
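A rough sketch of the chunking idea being discussed, using joblib purely for illustration (the function name and arguments here are hypothetical and not part of the codebase; an asyncio-based version along the lines of the existing DTI routine would be analogous):

```python
import numpy as np
from joblib import Parallel, delayed

def fit_in_chunks(model, data_in_mask, n_chunks=8, n_jobs=-1):
    """Illustrative only: fit independent voxel chunks in parallel."""
    # data_in_mask has shape (n_voxels_in_mask, n_gradients); voxel-wise fits
    # are independent, so the array can be split row-wise and fitted separately.
    chunks = np.array_split(data_in_mask, n_chunks, axis=0)
    return Parallel(n_jobs=n_jobs)(
        delayed(model.fit)(chunk) for chunk in chunks
    )
```

Each returned fit would still need its predictions stitched back together in voxel order, which is where the mask bookkeeping in the wrapper comes in.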


    def predict(self, gradient, **kwargs):
        """Propagate model parameters and call predict."""
        predicted = np.squeeze(
            self._model.predict(
                _rasb2dipy(gradient),
                S0=self._S0,
            )
        )
        if predicted.ndim == 3:
            return predicted

        retval = np.zeros_like(self._mask, dtype="float32")
        retval[self._mask, ...] = predicted
        return retval


def _rasb2dipy(gradient):
    gradient = np.asanyarray(gradient)
    if gradient.ndim == 1:
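To give a sense of how the new wrapper is meant to be driven, a hedged usage sketch with toy data (shapes, values, and the single-gradient RAS+B prediction call are assumptions based on the code above, not an official example):

```python
import numpy as np
from dipy.core.gradients import gradient_table
from eddymotion.model import SparseFascicleModel

# Toy inputs: one b=0 plus six directions, random "DWI" data, and a full mask.
rng = np.random.default_rng(1234)
bvecs = np.concatenate([np.zeros((1, 3)), np.eye(3), -np.eye(3)])
bvals = np.array([0] + [1000] * 6)
gtab = gradient_table(bvals, bvecs)

dwi = rng.uniform(0.3, 1.0, size=(6, 6, 6, 7)).astype("float32")
b0 = dwi[..., 0]
mask = np.ones((6, 6, 6), dtype=bool)

model = SparseFascicleModel(gtab, S0=b0, mask=mask)
model.fit(dwi)

# Predict the signal for one gradient given in RAS+B form (direction + b-value).
rasb = np.hstack([bvecs[1], bvals[1]])
predicted = model.predict(rasb)  # a (6, 6, 6) volume, filled inside the mask
```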
5 changes: 4 additions & 1 deletion setup.cfg
@@ -22,8 +22,11 @@ url = https://github.com/nipreps/EddyMotionCorrection
python_requires = >=3.7
install_requires =
    dipy>=1.3.0
    scikit-image>=0.14.2
    nipype>= 1.5.1, < 2.0
    nitransforms>=21.0.0
    nest-asyncio>=1.5.1
    scikit-image>=0.14.2
    scikit-learn>=1.0.1
test_requires =
    codecov
    coverage