Add pipeline step for COMMIT streamline filtering #30
Conversation
Add step to the structural pipeline to perform streamline filtering using COMMIT2. This is a first working draft and various details need to be further improved.
This commit improves the writeFibers function by adding the capability to create new directories if the fiber file is in a non-existent directory.
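The directory-creation behaviour described above can be sketched as follows. This is an illustrative stand-alone sketch, not CATO's actual `writeFibers` implementation (which lives in the MATLAB code base); the function name and signature here are hypothetical.

```python
import os


def write_fibers(fiber_data: bytes, fiber_file: str) -> None:
    """Hypothetical sketch: write fiber data, creating missing directories.

    If the fiber file is in a directory that does not yet exist, the
    directory (including any missing parents) is created first.
    """
    out_dir = os.path.dirname(fiber_file)
    if out_dir:  # empty string when writing to the current directory
        os.makedirs(out_dir, exist_ok=True)
    with open(fiber_file, "wb") as f:
        f.write(fiber_data)
```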
The fiber shift parameter is set to zero. I have verified that it is the correct value using the COMMIT-debugger with simulated DWI data from the test suite.
Set the n_count parameter in the header of fiber cloud files in the writeFibers function. This parameter is useful in general and should always be added to the header.
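In the TrackVis `.trk` format that fiber cloud files follow, `n_count` is an int32 field at byte offset 988 of the 1000-byte header that records the number of streamlines in the file. A minimal stand-alone sketch of reading and writing that field (independent of `writeFibers` itself):

```python
import struct

TRK_HEADER_SIZE = 1000  # fixed header size of the TrackVis .trk format
N_COUNT_OFFSET = 988    # byte offset of the int32 n_count field


def set_n_count(header: bytearray, n_streamlines: int) -> None:
    """Write the streamline count into a .trk header buffer (little-endian)."""
    struct.pack_into("<i", header, N_COUNT_OFFSET, n_streamlines)


def get_n_count(header: bytes) -> int:
    """Read the streamline count from a .trk header buffer."""
    return struct.unpack_from("<i", header, N_COUNT_OFFSET)[0]
```

Tools such as TrackVis and COMMIT can use `n_count` to know the number of streamlines without scanning the whole file, which is why always setting it in the header is useful.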
Add the regularization parameter lambda as configuration parameter in CATO.
Add a first version of the documentation on the installation and usage of the COMMIT-Filter add-on.
Other software packages (e.g. nibabel) use the first element of the `dim` parameter in the NIfTI header to determine the number of dimensions.
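In the NIfTI-1 header, `dim` is an array of eight shorts where `dim[0]` holds the number of dimensions and `dim[1..7]` hold the sizes; readers such as nibabel use `dim[0]` to decide how many of the remaining entries are meaningful. A small sketch of building a consistent `dim` array:

```python
def nifti_dim_field(shape):
    """Build the 8-element NIfTI-1 `dim` array for a given data shape.

    dim[0] is the number of dimensions; unused trailing entries are set
    to 1 by convention.
    """
    ndim = len(shape)
    if not 1 <= ndim <= 7:
        raise ValueError("NIfTI-1 supports 1 to 7 dimensions")
    return [ndim] + list(shape) + [1] * (7 - ndim)
```

For example, a 4-D DWI volume of shape (91, 109, 91, 60) yields `dim[0] == 4`; if `dim[0]` were left at 3, nibabel would silently ignore the diffusion-volume axis.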
…lab/CATO into feature-COMMIT-filtering
Awesome! Made some small remarks in the code.
How would you feel about putting the whole thing in a separate "addons" directory, either directly in 'src/' or within 'structural_pipeline/'?
Still need to actually run this, should be able to do that today.
Instead of specifying the location of the Python binary in the `pythonInterpreter` parameter, users can now point this parameter to a start-up script that activates their desired environment. This change has the benefit that the COMMIT script also works when no start-up script is provided, and it is more in line with the usual practice of activating Anaconda environments from the command line.
Updated python code style using black, isort and flake8 (ignoring E501).
Thank you for the comments!
During the development of this COMMIT post-processing step, it became clear that it would be best considered as an "add-on", i.e. a customisable pipeline step that users can add to the toolbox. With this in mind, I agree that it would be most appropriate to place the
I successfully executed the COMMIT add-on, so this is ready to go from my side!
This pull request adds a new post-processing pipeline step to the structural pipeline, which filters streamlines in the fiber cloud using COMMIT2 and reconstructs connectivity matrices from the resulting filtered fiber cloud. The current implementation is based on the basic example provided on the COMMIT2 wiki.
To Do
- Verify that the COMMIT input files (fiber cloud and DWI file) are correctly aligned with the expected space used in COMMIT. → Updated and verified in 1dbcb84.
- Determine the best COMMIT parameter values to use as default values. → Lambda is included as a configuration parameter in d8d315d.
- Investigate whether a more advanced filtering technique using COMMIT would be suitable for the CATO pipeline. → The currently used model (`StickZeppelinBall`) seems a good choice. More advanced models (e.g. `Sticks-Zeppelins-Balls`) have more regularisation parameters that are tricky to tune.
- Improve the code to conform to the general coding style. → Code style is addressed in commit b501bfa.
- Incorporate testing for this pipeline step into the testing framework. → Because this is an add-on, it is not included in the standard test framework. Its functioning was tested on two datasets.
- Add documentation. → First version of the documentation is added in 77decad.