
Processing native-resolution runs with different voxel sizes will fail #1069

Closed
tsalo opened this issue Feb 29, 2024 · 4 comments · Fixed by #1075

Comments


tsalo commented Feb 29, 2024

Summary

When processing multiple runs in the same standard space (MNI152NLin2009cAsym) at the native BOLD resolution, if the runs have different native resolutions (in this case, 1.625 x 1.625 x 3.992 mm voxels in one run and 1.625 x 1.625 x 3.99 mm in the other), XCP-D will fail with the following error:

Traceback:
	Traceback (most recent call last):
	  File "/usr/local/miniconda/lib/python3.10/site-packages/nipype/interfaces/base/core.py", line 397, in run
	    runtime = self._run_interface(runtime)
	  File "/usr/local/miniconda/lib/python3.10/site-packages/xcp_d/interfaces/connectivity.py", line 105, in _run_interface
	    n_voxels_in_masked_parcels = sum_masker_masked.fit_transform(atlas_img_bin)
	  File "/usr/local/miniconda/lib/python3.10/site-packages/sklearn/utils/_set_output.py", line 273, in wrapped
	    data_to_wrap = f(self, X, *args, **kwargs)
	  File "/usr/local/miniconda/lib/python3.10/site-packages/nilearn/maskers/nifti_labels_masker.py", line 455, in fit_transform
	    return self.fit().transform(imgs, confounds=confounds,
	  File "/usr/local/miniconda/lib/python3.10/site-packages/nilearn/maskers/nifti_labels_masker.py", line 376, in fit
	    raise ValueError(
	ValueError: Regions and mask do not have the same affine.
	labels_img: /out/xcp_d/atlases/atlas-4S156Parcels/space-MNI152NLin2009cAsym_atlas-4S156Parcels_dseg.nii.gz
	mask_img: /fmriprep/sub-002/func/sub-002_task-rest_run-2_space-MNI152NLin2009cAsym_desc-brain_mask.nii.gz

This was originally brought up in this NeuroStars post.
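For illustration, a minimal sketch (not part of the original report) of confirming the mismatch with nibabel; the run-2 path is taken from the traceback above, while the run-1 path is assumed:

    import nibabel as nib

    # Hypothetical run-1 path; the run-2 path comes from the traceback above.
    run1 = nib.load("sub-002_task-rest_run-1_space-MNI152NLin2009cAsym_desc-brain_mask.nii.gz")
    run2 = nib.load("sub-002_task-rest_run-2_space-MNI152NLin2009cAsym_desc-brain_mask.nii.gz")

    # The slice thickness differs (3.992 vs 3.99 mm), so the affines,
    # and therefore the sampling grids, of the two runs do not match.
    print(run1.header.get_zooms()[:3])
    print(run2.header.get_zooms()[:3])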

@tsalo tsalo added the bug Issues noting problems and PRs fixing those problems. label Feb 29, 2024

tsalo commented Feb 29, 2024

I'm not sure what the best way to handle this is. Here are a few options:

  1. Warp atlases to individual runs and write them out to the subject's func folder.
    • This would massively blow up the number of derivatives, so I don't like this idea.
  2. Raise an error if this problem is detected, and recommend that users set a non-native resolution in their preprocessing run or use a tool like CuBIDS to curate their datasets before preprocessing (see the sketch after this list).
    • Given that this bug will apply to the concatenation workflow as well (at least for the concatenated dense time series), this might be the best option.
    • Ultimately, I would like to start using the nipreps resampler tool, which would fix the issue by letting users define a target resolution.
  3. Resample volumetric data to the same resolution if this problem is detected.
    • This would require a major refactor, AFAICT, since we'd need to collect the BOLD runs and identify the appropriate resolution (most common, lowest, highest?) before processing, and then resample them at some point in the pipeline.
  4. Somehow identify all unique resolutions and then write out different resolution atlases with unique identifiers.
    • This wouldn't fix the problem for the concatenation workflow though.
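A minimal sketch of the check in option 2, assuming nibabel and numpy and a per-subject list of preprocessed BOLD files (the function and variable names are hypothetical, not existing XCP-D code):

    import nibabel as nib
    import numpy as np

    def check_consistent_grid(bold_files):
        """Raise if the runs' affines (and thus voxel sizes/grids) differ."""
        affines = [nib.load(f).affine for f in bold_files]
        if not all(np.allclose(affines[0], aff) for aff in affines[1:]):
            raise ValueError(
                "BOLD runs have different affines/voxel sizes. Set a non-native "
                "output resolution during preprocessing, or curate the dataset "
                "(e.g., with CuBIDS) before rerunning."
            )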


tsalo commented Feb 29, 2024

@mattcieslak proposed initially warping the atlases with ANTs to the highest available resolution (not sure if you meant 1 mm³ or the highest resolution among the available runs) and then relying on the resampling capability of Nilearn's maskers to do the final resolution-based resampling.
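For reference, a rough sketch (not the eventual XCP-D implementation) of the second half of that idea, relying on NiftiLabelsMasker's existing resampling_target option to resample one atlas onto each run's grid at extraction time; file and variable names are placeholders:

    from nilearn.maskers import NiftiLabelsMasker

    masker = NiftiLabelsMasker(
        labels_img="space-MNI152NLin2009cAsym_atlas-4S156Parcels_dseg.nii.gz",
        mask_img=run_brain_mask,    # per-run brain mask (placeholder variable)
        resampling_target="data",   # resample the atlas/mask onto the BOLD grid
    )
    timeseries = masker.fit_transform(run_bold_file)  # placeholder BOLD file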


tsalo commented Mar 4, 2024

I say we raise an error for the immediate future and then figure out a better strategy later.


tsalo commented Mar 5, 2024

Once #1075 is merged, I want to make #1076 the next step.

@tsalo tsalo self-assigned this Mar 11, 2024