Should parallelization be handled upstream? #16
This is a suggestion to simplify this:
https://github.com/nipreps/eddymotion/blob/b1f70cc67417ac2fab43ccf5196b9b26397b2326/src/eddymotion/model/base.py#L160

We are in the midst of some work to parallelize model fitting in DIPY itself: dipy/dipy#2593, which would benefit all downstream uses of these models, including here. I wonder whether we should aim to use these methods as implemented there (e.g., by passing parallelization kwargs through to DIPY's multi-voxel parallelization methods), instead of implementing that separately here.
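To make the proposal concrete, here is a minimal sketch of what "passing parallelization kwargs through to DIPY" could look like. This is not the eddymotion implementation: the `fit_model` helper and the `engine`/`num_processes` keyword names are hypothetical placeholders, since the exact interface depends on what dipy/dipy#2593 exposes once merged.

```python
# Minimal sketch of the proposal, not the actual eddymotion code.
# It assumes DIPY's multi-voxel fitting (after dipy/dipy#2593) accepts
# parallelization keyword arguments on ``model.fit``; the kwarg names
# suggested in the comments below ("engine", "num_processes") are
# illustrative placeholders -- check the merged DIPY API for real names.
import numpy as np
from dipy.core.gradients import gradient_table
from dipy.reconst.dti import TensorModel


def fit_model(data, bvals, bvecs, mask=None, **parallel_kwargs):
    """Fit a DIPY model, forwarding parallelization options to DIPY.

    Instead of chunking the data and spawning workers on the eddymotion
    side (as done around ``model/base.py#L160``), any parallelization
    options provided by the caller are handed straight to DIPY's
    multi-voxel machinery, which decides how to split voxels.
    """
    gtab = gradient_table(bvals, bvecs)
    model = TensorModel(gtab)
    return model.fit(data, mask=mask, **parallel_kwargs)


if __name__ == "__main__":
    # Tiny synthetic example; the commented-out kwargs are hypothetical.
    rng = np.random.default_rng(0)
    data = rng.random((4, 4, 4, 7)).astype(np.float32)
    bvals = np.array([0, 1000, 1000, 1000, 1000, 1000, 1000])
    bvecs = np.vstack(
        [[0, 0, 0]] + [v / np.linalg.norm(v) for v in rng.normal(size=(6, 3))]
    )
    fit = fit_model(data, bvals, bvecs)  # e.g., engine="ray", num_processes=8
    print(fit.fa.shape)
```

If DIPY exposes such options, eddymotion would only need to accept and forward them, removing its own chunking and process-pool code.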
Comments

Yes :)

OK - well - let's keep this open if that's fine by you, so that we remember to refactor that code once the parallelization code is incorporated into DIPY (hopefully in the upcoming release).

Good news! dipy/dipy#2593 got merged 😄

Actually, reopening, because we still need to remove the parallelization code from here.