
Create a CLI option that allows for user-defined mask instead of adaptive_mask for denoising #862

Closed
CesarCaballeroGaudes opened this issue Mar 25, 2022 · 11 comments


@CesarCaballeroGaudes
Contributor

CesarCaballeroGaudes commented Mar 25, 2022

Summary

Create a CLI option that lets the user supply their own mask for denoising instead of the default adaptive mask.

Additional Detail

In multiple datasets, we are observing that the adaptive mask does not include the subcortical regions. In some cases the user might still want to denoise these voxels and would prefer to use their own, previously computed mask.

Next Steps

A possible solution would be to add a CLI option that skips the adaptive mask for denoising, e.g. --use_mask_denoising, and to add an if statement after the computation of the adaptive mask and the mask used for classification (perhaps reporting a warning) so that mask_denoise is set to the mask provided by the user.
For instance, the if statement could go here.
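A minimal sketch of that if statement, with placeholder variables standing in for values computed earlier in the workflow (`user_mask` and the logger setup here are illustrative assumptions, not tedana's actual API):

```python
import logging

import numpy as np

LGR = logging.getLogger(__name__)

# Placeholders for values computed earlier in the workflow.
mask_denoise = np.ones(1000, dtype=bool)  # adaptive denoising mask (placeholder)
user_mask = None  # would be populated from the proposed --use_mask_denoising option

if user_mask is not None:
    LGR.warning(
        "Using the user-provided mask for denoising; "
        "classification still uses the adaptive mask."
    )
    mask_denoise = user_mask.astype(bool)
```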

@jbteves
Collaborator

jbteves commented Mar 25, 2022

Hi @CesarCaballeroGaudes, the denoising process requires a minimum of 3 echoes to calculate the metrics we use, so the limit is in place to avoid introducing errors or subtle bugs. The only way we could get denoising with fewer echoes is to add a new method which requires fewer echoes, at which point we could have a step-dependent mask.

@CesarCaballeroGaudes
Contributor Author

I agree. I was suggesting that denoising be applied to the voxels within the user-defined mask, while the rest of the process and the classification are done with the mask that requires a minimum of 3 echoes. I think it could add some flexibility.

@tsalo
Member

tsalo commented Mar 28, 2022

I believe the changes I made for a paper I'm working on (in #837) essentially do that.

@handwerkerd
Member

This request seems to be a special case of a potentially more useful parameter. Right now, the adaptive mask is calculated in make_adaptive_mask in tedana/utils.py:

tedana/utils.py, lines 79 to 86 at 65e8ddf:

```python
# get 33rd %ile of `first_echo` and find corresponding index
# NOTE: percentile is arbitrary
perc = np.percentile(first_echo, 33, interpolation="higher")
perc_val = echo_means[:, 0] == perc
# extract values from all echos at relevant index
# NOTE: threshold of 1/3 voxel value is arbitrary
lthrs = np.squeeze(echo_means[perc_val].T) / 3
```

In that code, we have two hard-coded values:

  1. For the shortest-TE echo, we identify the voxel at the 33rd-percentile magnitude: perc_val
  2. For each echo, we divide the value in that voxel by 3, and that is the masking threshold: lthrs

This is a fairly conservative masking approach, but I can see reasons why people would want to adjust the threshold. Maybe they want to keep a bit more of the dropout region or maybe they want to more aggressively restrict the mask to voxels with minimal dropout.

My proposal would be:

  • Make both the hard-coded 33 for the perc calculation and the 3 for the lthrs calculation optional parameters of make_adaptive_mask (see the sketch after this list)
  • Make it possible to set the lthrs divisor from the CLI. Ugly name suggestion: mask_dropout_thresh
  • If that is set to a very large number (or maybe via a special -1 case), the dropout threshold would effectively be 0. This could address the request here.
  • Always calculate the threshold with the default 3 as well, and raise a warning reporting how many voxels the looser threshold keeps that the default would have excluded.
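A rough sketch of that parameterization, assuming a hypothetical helper name and signature (compute_echo_thresholds, percentile, and dropout_divisor are illustrative, not tedana's actual API):

```python
import numpy as np


def compute_echo_thresholds(echo_means, percentile=33, dropout_divisor=3):
    """Compute per-echo lower thresholds with tunable parameters (sketch).

    echo_means : (n_voxels, n_echos) array of mean signal per echo.
    percentile : replaces the hard-coded 33.
    dropout_divisor : replaces the hard-coded 3; a special -1 case
        disables thresholding (dropout threshold effectively 0).
    """
    first_echo = echo_means[:, 0]
    # method="higher" is the modern spelling of interpolation="higher"
    perc = np.percentile(first_echo, percentile, method="higher")
    perc_val = echo_means[:, 0] == perc
    if dropout_divisor == -1:
        return np.zeros(echo_means.shape[1])
    return np.squeeze(echo_means[perc_val].T) / dropout_divisor


rng = np.random.default_rng(0)
echo_means = rng.gamma(5.0, 20.0, size=(500, 3))
print(compute_echo_thresholds(echo_means))                      # defaults
print(compute_echo_thresholds(echo_means, dropout_divisor=-1))  # all zeros
```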

Thoughts?

@CesarCaballeroGaudes
Contributor Author

Thanks @tsalo for the code. Indeed, it does what we intended to do.

I like @handwerkerd's suggestions, particularly the -1 case. @eurunuela and I will try to implement it.

@eurunuela
Collaborator

After a couple of hours looking through the code, we realized that our issue was not in the creation of the adaptive mask, but in the mask that nilearn creates with compute_epi_mask().

We don't have any issues when we provide our own mask with --mask. Actually, I wonder if this is the case with #827; i.e., whether the missing regions are a result of computing the mask with nilearn.
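For context, a sketch of the two masking routes being compared (file paths are placeholders):

```python
import nibabel as nib
from nilearn.masking import compute_epi_mask

epi_img = nib.load("sub-01_echo-1_bold.nii.gz")  # placeholder path

# Intensity-based automask; low-signal subcortical voxels can fall outside it.
auto_mask = compute_epi_mask(epi_img)

# Alternative: pass a precomputed (e.g. anatomically derived) mask to tedana:
#   tedana -d echo1.nii echo2.nii echo3.nii -e 14.5 38.5 62.5 \
#     --mask sub-01_desc-brain_mask.nii.gz
```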

Re implementing user-defined percentile and threshold: do tedana devs think this would be useful for users?

@jbteves
Collaborator

jbteves commented Mar 29, 2022

I would much rather have users who need to fine-tune a mask simply pass their own mask.

@handwerkerd
Member

@eurunuela Just to clarify, are you saying that this issue is resolved if you use the existing --mask option? If so, maybe we can move some of this discussion on adding flexibility to the masking into #837 and close this issue?

I do think there's a benefit to allowing a user-defined mask, because a user might want to apply extra criteria that we don't want to have to deal with. For example, they might want to create a mask using a higher-resolution anatomical image to make sure all fMRI voxels not only have signal but are within the brain. The automask in nilearn should be good at this type of thing, but not as good as using a high-resolution anatomical image.

@eurunuela
Collaborator

> @eurunuela Just to clarify, are you saying that this issue is resolved if you use the existing --mask option? If so, maybe we can move some of this discussion on adding flexibility to the masking into #837 and close this issue?

That seems to be the case for us. We only tested on one session, but I'll know about the rest of the dataset tomorrow.

Happy to move the discussion re flexibility to #837.

I also wonder if @tsalo used compute_epi_mask() for his analysis in #827, and if so, whether using --mask solves the issue.

@tsalo
Member

tsalo commented Mar 29, 2022

@eurunuela I used the fMRIPrep brain mask. The problem in my case was that CSF and WM don't have enough good echoes to be retained in the denoising mask (they obviously shouldn't be included in the classification mask), which then made running aCompCor on the denoised data impossible. Basically, both --mask and compute_epi_mask retain WM/CSF voxels in the initial mask, but make_adaptive_mask and tedana's thresholding ended up removing them.
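A toy illustration of that mechanism (made-up numbers, not tedana's code): the adaptive mask counts, per voxel, how many echoes exceed the per-echo thresholds, and voxels with fewer than 3 "good" echoes drop out of the denoising mask even when the initial mask included them.

```python
import numpy as np

# 4 voxels x 3 echoes of mean signal; WM/CSF-like voxels decay quickly,
# so their later echoes fall below the per-echo thresholds.
echo_means = np.array(
    [
        [100.0, 80.0, 60.0],  # GM-like: all 3 echoes above threshold
        [90.0, 20.0, 5.0],    # CSF-like: only the first echo survives
        [95.0, 70.0, 8.0],    # dropout-affected: two echoes survive
        [100.0, 85.0, 65.0],  # GM-like
    ]
)
lthrs = np.array([30.0, 25.0, 20.0])  # per-echo lower thresholds (toy values)

adaptive_mask = (echo_means > lthrs).sum(axis=1)  # good echoes per voxel
denoise_mask = adaptive_mask >= 3  # metrics need at least 3 good echoes
print(adaptive_mask)  # [3 1 2 3]
print(denoise_mask)   # [ True False False  True] -> WM/CSF voxels drop out
```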

@eurunuela
Collaborator

I forgot to update you on this. We ended up using the --mask option to recover subcortical regions.

Going to close the issue.
