Optimal diffusivity in NODDI model #150
Hi, I think this approach is not implemented yet. I am not sure if Alessandro is planning to add this feature in the near future. It should not be difficult from a technical point of view, but before including that approach in the repository, a lot of validation would be required. Thanks, Erick.
Hi @mblesac and @ejcanalesr, We didn't implement it because it would suffice to repeat the fit over a range of diffusivities (performing the fit each time with a different one) and then take as "solution" the one with the minimum RMSE. Something like (in metacode):

```python
import amico

AMICO = amico.Evaluation( ... )
...
AMICO.set_model('NODDI')
for dPar in [ 1.5e-3, 1.7e-3, 1.9e-3 ]:
    # regenerate the kernels and repeat the fit with this parallel diffusivity
    AMICO.model.set( dPar, ... )
    AMICO.generate_kernels( ... )
    ...
    AMICO.fit()
    # save each run with a distinct suffix so the outputs can be compared later
    AMICO.save_results( path_suffix=f'_dPar={dPar:.2e}' )
```

and then checking the solution corresponding to the minimum error (i.e. looking at the resulting FIT_nrmse.nii.gz maps). This is what you had in mind, right, Erick?
Hi Ale, The idea would be to find, for each voxel, the optimal diffusivity that produces the smallest "nrmse" value across all the resulting FIT_nrmse.nii.gz images. Then, a new image for each microstructure parameter must be created by writing, at each voxel, the value that was estimated with the optimal diffusivity. This seems fine in theory, but I am not 100% sure it is going to work in practice, as the residuals may be affected by the regularization (although we use the same regularization factor, so results should be consistent!). Manuel, I am very curious to know the results in case you want to explore this approach. If this works, it could be published in a journal paper. All the best,
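A minimal sketch of that per-voxel selection step, assuming each run was saved with the path_suffix shown above and produced FIT_nrmse.nii.gz together with the parameter maps; the directory layout, file names, and output names below are assumptions for illustration, not the exact AMICO output of any specific version:

```python
import numpy as np
import nibabel as nib

# Candidate parallel diffusivities, matching the suffixes used when saving the runs
d_pars = [1.5e-3, 1.7e-3, 1.9e-3]
suffixes = [f'_dPar={d:.2e}' for d in d_pars]

# Stack the NRMSE maps of all runs along a new axis: shape (X, Y, Z, n_runs)
nrmse = np.stack(
    [nib.load(f'AMICO/NODDI{s}/FIT_nrmse.nii.gz').get_fdata() for s in suffixes],
    axis=-1)

# Index of the run with the smallest residual in each voxel
best = np.argmin(nrmse, axis=-1)

# Build a map of the "optimal" parallel diffusivity for inspection
affine = nib.load(f'AMICO/NODDI{suffixes[0]}/FIT_nrmse.nii.gz').affine
d_par_map = np.asarray(d_pars)[best]
nib.save(nib.Nifti1Image(d_par_map.astype(np.float32), affine), 'FIT_dPar_optimal.nii.gz')

# For each parameter, keep in every voxel the value estimated with the optimal diffusivity
for param in ['FIT_ICVF', 'FIT_OD', 'FIT_ISOVF']:
    maps = np.stack(
        [nib.load(f'AMICO/NODDI{s}/{param}.nii.gz').get_fdata() for s in suffixes],
        axis=-1)
    combined = np.take_along_axis(maps, best[..., None], axis=-1)[..., 0]
    nib.save(nib.Nifti1Image(combined.astype(np.float32), affine),
             f'{param}_optimal.nii.gz')
```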
Hi, Thanks for your response! @ejcanalesr I think this is what was done in this manuscript. I'll try to give it a go after Christmas and see if I find a good implementation of this. Best regards, Manuel
Great, perfect!
Hi @daducci and @ejcanalesr, It took me a while, but with the help of a colleague I have code that works. The code is far from optimal; it would be great if you could take a look at it. I did some experiments and the results look great. Please take a look at the ODI maps for one subject (for the original MATLAB implementation the mask used was different, less restrictive): What do you think? I guess it also raises the question of how comparable the maps are across subjects if the parallel diffusivity changes at each voxel. Thanks in advance. Best regards, Manuel
Hi Manuel,
Nice results!
My suggestion would be to fit a DTI model just to verify whether d_par is around 1.7e-3 mm^2/s in WM regions of parallel fibres. If that is the case, you may use the standard value in AMICO-NODDI, since this will also help you avoid technical questions from the reviewers.
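For instance, a minimal check with DIPY (a sketch only; the file names are placeholders and it assumes the data, or a suitable low/moderate b-value shell extracted from it, is appropriate for DTI):

```python
import numpy as np
import nibabel as nib
from dipy.core.gradients import gradient_table
from dipy.io.gradients import read_bvals_bvecs
from dipy.reconst.dti import TensorModel

# Load the diffusion data and gradient scheme (placeholder file names)
img = nib.load('dwi.nii.gz')
data = img.get_fdata()
bvals, bvecs = read_bvals_bvecs('dwi.bval', 'dwi.bvec')
gtab = gradient_table(bvals, bvecs)

# Fit the tensor and inspect the axial diffusivity within a white-matter mask
tenfit = TensorModel(gtab).fit(data)
wm_mask = nib.load('wm_mask.nii.gz').get_fdata() > 0
print('median AD in WM [mm^2/s]:', np.median(tenfit.ad[wm_mask]))
```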
However, if you have reasons to believe that d_par is different in your data, then you have two options, and I am not sure which one is the best.
You can run the new approach in a few subjects to identify the mean d_par, and then fix this value for the whole population.
Alternatively, you could estimate the optimal d_par voxel-wise. The main limitation is that there are no previous works supporting this strategy, which means you may get some technical questions from the reviewers that may be beyond the aim of your study.
To be sure the new method is working, you can create some simulated data with different d_par values to see if you can recover the actual values. This may also be useful to show the bias induced by the standard method, which uses a fixed "wrong" d_par.
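A toy illustration of that sanity check (a simple stick-and-zeppelin signal without orientation dispersion, so not a full NODDI simulation; all values and the noise level below are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy acquisition: 64 random directions at b = 2500 s/mm^2 (diffusivities in mm^2/s)
bval = 2500.0
g = rng.standard_normal((64, 3))
g /= np.linalg.norm(g, axis=1, keepdims=True)

def stick_zeppelin(d_par, f_ic=0.6, d_perp=0.5e-3, fiber=(0.0, 0.0, 1.0)):
    """Noise-free stick (intra) + zeppelin (extra) signal along `fiber`, no dispersion."""
    cos2 = (g @ np.asarray(fiber)) ** 2
    s_ic = np.exp(-bval * d_par * cos2)                         # stick
    s_ec = np.exp(-bval * (d_perp + (d_par - d_perp) * cos2))   # zeppelin
    return f_ic * s_ic + (1.0 - f_ic) * s_ec

# Ground-truth parallel diffusivity and a noisy measurement of it
d_true = 2.1e-3
noisy = np.abs(stick_zeppelin(d_true) + 0.02 * rng.standard_normal(64))

# Brute-force grid search over candidate d_par values (a stand-in for the AMICO loop above):
# the candidate with the smallest residual should recover d_true if the approach works
grid = np.arange(1.3e-3, 2.6e-3, 0.1e-3)
rmse = [np.sqrt(np.mean((stick_zeppelin(d) - noisy) ** 2)) for d in grid]
print('recovered d_par =', grid[int(np.argmin(rmse))])
```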
It is true that the maps will be different across participants; however, this is also the case for all the other dMRI models (e.g., DTI, DKI), including the spherical mean technique, which is the model most similar to NODDI. You can also ask the inverse question: how comparable are the maps if d_par is actually different across voxels (and across groups of subjects) and we assume it is constant?
Perhaps Alessandro can complement my comments...
Thanks for sharing your results!
All the best,
Erick
Hi, Thanks Erick. I have a few updates on this. I fitted the DTI model on the b=750 shell (my acquisition is 16x0, 3x200, 6x500, 64x750 and 64x2500, 2 mm iso) and I generated a map with the optimal d_par selected in each voxel. The map has similar values to the AD: Generally speaking, the WM has a lot of variability, with AD values between 0.16 and 0.25 approx, while in the GM the values are around 0.11 to 0.16. Indeed, if I threshold both images I can see a quite striking separation: I think in this cohort the variability is very important. However, the optimal maps for the ICVF and the ISOVF don't look as expected at all: Any clue what could be going on? I've checked the code a couple of times and it looks fine... Thanks in advance. Best regards, Manuel
Dear experts,
I was reading a post about the optimal parallel diffusivity and I was wondering whether the approach of fitting with multiple parallel diffusivities has been implemented, or if there are any plans to do it. This would be really helpful for other cohorts outside the healthy adult human brain. Thanks in advance.
Best regards,
Manuel