Commit

Merge remote-tracking branch 'upstream/main' into nipreps-config
tsalo committed Oct 17, 2023
2 parents 97ab6e4 + 8a07b53 commit 5dfcf2e
Showing 13 changed files with 273 additions and 64 deletions.
4 changes: 2 additions & 2 deletions CITATION.cff
@@ -86,5 +86,5 @@ keywords:
- BIDS-App
- Neuroimaging
license: BSD-3-Clause
version: 0.5.0
date-released: '2023-09-13'
version: 0.5.1
date-released: '2023-10-11'
25 changes: 25 additions & 0 deletions docs/changes.md
@@ -1,5 +1,30 @@
# What's New

<!-- Release notes generated using configuration in .github/release.yml at main -->

## 0.5.1

The 0.5.1 release fixes some bugs for the XCP-D manuscript.

### 🛠 Breaking Changes
* Update the QC metrics by @tsalo in https://github.com/PennLINC/xcp_d/pull/958

### 🎉 Exciting New Features
* Split HCP tasks into task and run entities by @tsalo in https://github.com/PennLINC/xcp_d/pull/952
* Concatenate across directions as well as runs by @tsalo in https://github.com/PennLINC/xcp_d/pull/965

### 🐛 Bug Fixes
* Fix resting-state plots in executive summary by @tsalo in https://github.com/PennLINC/xcp_d/pull/941
* Load T1w-to-standard transform to same space as volumetric BOLD scan by @tsalo in https://github.com/PennLINC/xcp_d/pull/926
* Pin Nilearn version by @tsalo in https://github.com/PennLINC/xcp_d/pull/955
* Don't interpolate volumes at beginning/end of run by @tsalo in https://github.com/PennLINC/xcp_d/pull/950

### Other Changes
* Update documentation for 0.5.0 release by @tsalo in https://github.com/PennLINC/xcp_d/pull/937


**Full Changelog**: https://github.com/PennLINC/xcp_d/compare/0.5.0...0.5.1

## 0.5.0

The 0.5.0 release prepares for the XCP-D manuscript, so I plan not to introduce any backwards-incompatible changes between this release and 1.0.0 (the official paper release).
19 changes: 17 additions & 2 deletions docs/workflows.rst
@@ -277,13 +277,16 @@ Denoising
=========
:class:`~xcp_d.interfaces.nilearn.DenoiseNifti`, :class:`~xcp_d.interfaces.nilearn.DenoiseCifti`

Temporal censoring
------------------
Temporal censoring [OPTIONAL]
-----------------------------

Prior to confound regression, high-motion volumes will be removed from the BOLD data.
These volumes will also be removed from the nuisance regressors.
Please refer to :footcite:t:`power2012spurious` for more information.

.. tip::
Censoring can be disabled by setting ``--fd-thresh 0``.

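The censoring step described above can be sketched in Python. This is a minimal illustration using NumPy, not XCP-D's actual implementation; the function name and the 0.2 mm default threshold are only examples:

```python
import numpy as np

def censor_high_motion(bold, confounds, fd, fd_thresh=0.2):
    """Drop high-motion volumes from BOLD data and nuisance regressors.

    bold : (n_volumes, n_voxels) array
    confounds : (n_volumes, n_regressors) array
    fd : (n_volumes,) framewise displacement in mm
    fd_thresh : volumes with FD >= fd_thresh are censored;
        a threshold of 0 disables censoring entirely.
    """
    if fd_thresh <= 0:
        # Censoring disabled: return the data unchanged
        return bold, confounds

    keep = fd < fd_thresh
    # The same mask is applied to the BOLD data and the regressors,
    # so the regression is fit only on retained volumes.
    return bold[keep], confounds[keep]
```

The key point is that the same temporal mask is applied to both the BOLD data and the nuisance regressors before the regression is fit.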

Confound regression
-------------------
@@ -339,6 +342,18 @@ Interpolation

An interpolated version of the ``denoised BOLD`` is then created by filling in the high-motion
outlier volumes with cubic spline interpolated data, as implemented in ``Nilearn``.

.. warning::
In versions 0.4.0rc2 through 0.5.0, XCP-D used cubic spline interpolation,
followed by bandpass filtering.

However, cubic spline interpolation can introduce large spikes and drops in the signal
when the censored volumes are at the beginning or end of the run,
which are then propagated to the filtered data.

To address this, as of version 0.5.1, XCP-D replaces interpolated volumes at the edges
of the run with the closest non-outlier volume's data.

The resulting ``interpolated, denoised BOLD`` is primarily used for bandpass filtering.
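The edge fix described in the warning can be sketched as follows. This is an illustrative NumPy/SciPy implementation, not the exact XCP-D code: censored volumes are filled by cubic spline interpolation, and any interpolated volumes before the first or after the last retained volume are clamped to that volume's data instead of relying on spline extrapolation:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def interpolate_censored(bold, keep):
    """Fill censored volumes via cubic spline, clamping run edges.

    bold : (n_volumes, n_voxels) array, censored rows still present
    keep : (n_volumes,) boolean array, True for retained volumes
    """
    t = np.arange(bold.shape[0])
    interp = bold.copy()

    # Fit the spline only on retained volumes, then evaluate it at
    # the censored time points.
    spline = CubicSpline(t[keep], bold[keep], axis=0)
    interp[~keep] = spline(t[~keep])

    # Edge fix: spline extrapolation beyond the first/last retained
    # volume can produce large spikes, so replace those volumes with
    # the nearest retained volume's data.
    first, last = t[keep][0], t[keep][-1]
    interp[:first] = bold[first]
    interp[last + 1:] = bold[last]
    return interp
```

Because the edge volumes are copied rather than extrapolated, the spikes and drops at the run boundaries never reach the bandpass filter.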


71 changes: 48 additions & 23 deletions xcp_d/interfaces/plotting.py
@@ -277,7 +277,7 @@ class QCPlots(SimpleInterface):
output_spec = _QCPlotsOutputSpec

def _run_interface(self, runtime):
# Load confound matrix and load motion with motion filtering
# Load confound matrix and load motion without motion filtering
confounds_df = pd.read_table(self.inputs.fmriprep_confounds_file)
preproc_motion_df = load_motion(
confounds_df.copy(),
@@ -297,6 +297,7 @@ def _run_interface(self, runtime):
censoring_df = pd.read_table(self.inputs.temporal_mask)
tmask_arr = censoring_df["framewise_displacement"].values
num_censored_volumes = int(tmask_arr.sum())
num_retained_volumes = int((tmask_arr == 0).sum())

# Apply temporal mask to interpolated/full data
rmsd_censored = rmsd[tmask_arr == 0]
@@ -382,40 +383,57 @@

# Calculate QC measures
mean_fd = np.mean(preproc_fd_timeseries)
mean_rms = np.nanmean(rmsd_censored) # first value can be NaN if no dummy scans
mean_fd_post_censoring = np.mean(postproc_fd_timeseries)
mean_relative_rms = np.nanmean(rmsd_censored) # first value can be NaN if no dummy scans
mean_dvars_before_processing = np.mean(dvars_before_processing)
mean_dvars_after_processing = np.mean(dvars_after_processing)
motionDVCorrInit = np.corrcoef(preproc_fd_timeseries, dvars_before_processing)[0][1]
motionDVCorrFinal = np.corrcoef(postproc_fd_timeseries, dvars_after_processing)[0][1]
fd_dvars_correlation_initial = np.corrcoef(preproc_fd_timeseries, dvars_before_processing)[
0, 1
]
fd_dvars_correlation_final = np.corrcoef(postproc_fd_timeseries, dvars_after_processing)[
0, 1
]
rmsd_max_value = np.nanmax(rmsd_censored)

# A summary of all the values
qc_values_dict.update(
{
"meanFD": [mean_fd],
"relMeansRMSMotion": [mean_rms],
"relMaxRMSMotion": [rmsd_max_value],
"meanDVInit": [mean_dvars_before_processing],
"meanDVFinal": [mean_dvars_after_processing],
"mean_fd": [mean_fd],
"mean_fd_post_censoring": [mean_fd_post_censoring],
"mean_relative_rms": [mean_relative_rms],
"max_relative_rms": [rmsd_max_value],
"mean_dvars_initial": [mean_dvars_before_processing],
"mean_dvars_final": [mean_dvars_after_processing],
"num_dummy_volumes": [dummy_scans],
"num_censored_volumes": [num_censored_volumes],
"nVolsRemoved": [dummy_scans],
"motionDVCorrInit": [motionDVCorrInit],
"motionDVCorrFinal": [motionDVCorrFinal],
"num_retained_volumes": [num_retained_volumes],
"fd_dvars_correlation_initial": [fd_dvars_correlation_initial],
"fd_dvars_correlation_final": [fd_dvars_correlation_final],
}
)

qc_metadata = {
"meanFD": {
"mean_fd": {
"LongName": "Mean Framewise Displacement",
"Description": (
"Average framewise displacement without any motion parameter filtering. "
"This value includes high-motion outliers, but not dummy volumes. "
"FD is calculated according to the Power definition."
),
"Units": "mm",
"Units": "mm / volume",
"Term URL": "https://doi.org/10.1016/j.neuroimage.2011.10.018",
},
"mean_fd_post_censoring": {
"LongName": "Mean Framewise Displacement After Censoring",
"Description": (
"Average framewise displacement without any motion parameter filtering. "
"This value does not include high-motion outliers or dummy volumes. "
"FD is calculated according to the Power definition."
),
"Units": "mm / volume",
"Term URL": "https://doi.org/10.1016/j.neuroimage.2011.10.018",
},
"relMeansRMSMotion": {
"mean_relative_rms": {
"LongName": "Mean Relative Root Mean Squared",
"Description": (
"Average relative root mean squared calculated from motion parameters, "
@@ -424,7 +442,7 @@ def _run_interface(self, runtime):
),
"Units": "arbitrary",
},
"relMaxRMSMotion": {
"max_relative_rms": {
"LongName": "Maximum Relative Root Mean Squared",
"Description": (
"Maximum relative root mean squared calculated from motion parameters, "
@@ -433,44 +451,51 @@
),
"Units": "arbitrary",
},
"meanDVInit": {
"mean_dvars_initial": {
"LongName": "Mean DVARS Before Postprocessing",
"Description": (
"Average DVARS (temporal derivative of root mean squared variance over "
"voxels) calculated from the preprocessed BOLD file, after dummy scan removal."
),
"TermURL": "https://doi.org/10.1016/j.neuroimage.2011.02.073",
},
"meanDVFinal": {
"mean_dvars_final": {
"LongName": "Mean DVARS After Postprocessing",
"Description": (
"Average DVARS (temporal derivative of root mean squared variance over "
"voxels) calculated from the denoised BOLD file."
),
"TermURL": "https://doi.org/10.1016/j.neuroimage.2011.02.073",
},
"num_dummy_volumes": {
"LongName": "Number of Dummy Volumes",
"Description": (
"The number of non-steady state volumes removed from the time series by XCP-D."
),
},
"num_censored_volumes": {
"LongName": "Number of Censored Volumes",
"Description": (
"The number of high-motion outlier volumes censored by XCP-D. "
"This does not include dummy volumes."
),
},
"nVolsRemoved": {
"LongName": "Number of Dummy Volumes",
"num_retained_volumes": {
"LongName": "Number of Retained Volumes",
"Description": (
"The number of non-steady state volumes removed from the time series by XCP-D."
"The number of volumes retained in the denoised dataset. "
"This does not include dummy volumes or high-motion outliers."
),
},
"motionDVCorrInit": {
"fd_dvars_correlation_initial": {
"LongName": "FD-DVARS Correlation Before Postprocessing",
"Description": (
"The Pearson correlation coefficient between framewise displacement and DVARS "
"(temporal derivative of root mean squared variance over voxels), "
"after removal of dummy volumes, but before removal of high-motion outliers."
),
},
"motionDVCorrFinal": {
"fd_dvars_correlation_final": {
"LongName": "FD-DVARS Correlation After Postprocessing",
"Description": (
"The Pearson correlation coefficient between framewise displacement and DVARS "
36 changes: 20 additions & 16 deletions xcp_d/interfaces/report.py
@@ -30,12 +30,13 @@
\t\t<ul class="elem-desc">
\t\t\t<li>BOLD volume space: {space}</li>
\t\t\t<li>Repetition Time (TR): {TR:.03g}</li>
\t\t\t<li>Mean Framewise Displacement: {meanFD}</li>
\t\t\t<li>Mean Relative RMS Motion: {meanRMS}</li>
\t\t\t<li>Max Relative RMS Motion: {maxRMS}</li>
\t\t\t<li>Mean Framewise Displacement: {mean_fd}</li>
\t\t\t<li>Mean Relative RMS Motion: {mean_relative_rms}</li>
\t\t\t<li>Max Relative RMS Motion: {max_relative_rms}</li>
\t\t\t<li>DVARS Before and After Processing : {dvars_before_after}</li>
\t\t\t<li>Correlation between DVARS and FD Before and After Processing : {corrfddv}</li>
\t\t\t<li>Number of Volumes Censored : {volcensored}</li>
\t\t\t<li>Correlation between DVARS and FD Before and After Processing :
{fd_dvars_correlation}</li>
\t\t\t<li>Number of Volumes Censored : {num_vols_censored}</li>
\t\t</ul>
"""

@@ -145,25 +146,28 @@ class FunctionalSummary(SummaryInterface):
def _generate_segment(self):
space = get_entity(self.inputs.bold_file, "space")
qcfile = pd.read_csv(self.inputs.qc_file)
meanFD = str(round(qcfile["meanFD"][0], 4))
meanRMS = str(round(qcfile["relMeansRMSMotion"][0], 4))
maxRMS = str(round(qcfile["relMaxRMSMotion"][0], 4))
dvars = f"{round(qcfile['meanDVInit'][0], 4)}, {round(qcfile['meanDVFinal'][0], 4)}"
mean_fd = str(round(qcfile["mean_fd"][0], 4))
mean_relative_rms = str(round(qcfile["mean_relative_rms"][0], 4))
max_relative_rms = str(round(qcfile["max_relative_rms"][0], 4))
dvars = (
f"{round(qcfile['mean_dvars_initial'][0], 4)}, "
f"{round(qcfile['mean_dvars_final'][0], 4)}"
)
fd_dvars_correlation = (
f"{round(qcfile['motionDVCorrInit'][0], 4)}, "
f"{round(qcfile['motionDVCorrFinal'][0], 4)}"
f"{round(qcfile['fd_dvars_correlation_initial'][0], 4)}, "
f"{round(qcfile['fd_dvars_correlation_final'][0], 4)}"
)
num_vols_censored = str(round(qcfile["num_censored_volumes"][0], 4))

return QC_TEMPLATE.format(
space=space,
TR=self.inputs.TR,
meanFD=meanFD,
meanRMS=meanRMS,
maxRMS=maxRMS,
mean_fd=mean_fd,
mean_relative_rms=mean_relative_rms,
max_relative_rms=max_relative_rms,
dvars_before_after=dvars,
corrfddv=fd_dvars_correlation,
volcensored=num_vols_censored,
fd_dvars_correlation=fd_dvars_correlation,
num_vols_censored=num_vols_censored,
)


15 changes: 15 additions & 0 deletions xcp_d/tests/test_cli.py
@@ -5,6 +5,7 @@
import numpy as np
import pandas as pd
import pytest
from nipype import logging
from pkg_resources import resource_filename as pkgrf

from xcp_d.cli import combineqc
@@ -19,6 +20,8 @@
run_command,
)

LOGGER = logging.getLogger("nipype.utils")


@pytest.mark.ds001419_nifti
def test_ds001419_nifti(data_dir, output_dir, working_dir):
@@ -168,6 +171,17 @@ def test_pnc_cifti(data_dir, output_dir, working_dir):
test_data_dir = get_test_data_path()
filter_file = os.path.join(test_data_dir, "pnc_cifti_filter.json")

# Make the last few volumes outliers to check https://github.com/PennLINC/xcp_d/issues/949
motion_file = os.path.join(
dataset_dir,
"sub-1648798153/ses-PNC1/func/"
"sub-1648798153_ses-PNC1_task-rest_acq-singleband_desc-confounds_timeseries.tsv",
)
motion_df = pd.read_table(motion_file)
motion_df.loc[56:, "trans_x"] = np.arange(1, 5) * 20
motion_df.to_csv(motion_file, sep="\t", index=False)
LOGGER.warning(f"Overwrote confounds file at {motion_file}.")

parameters = [
dataset_dir,
out_dir,
@@ -176,6 +190,7 @@
"--nthreads=2",
"--omp-nthreads=2",
f"--bids-filter-file={filter_file}",
"--min-time=60",
"--nuisance-regressors=acompcor_gsr",
"--despike",
"--head_radius=40",
49 changes: 49 additions & 0 deletions xcp_d/tests/test_utils_bids.py
@@ -243,3 +243,52 @@ def test_get_entity(datasets):
)
with pytest.raises(ValueError, match="Unknown space"):
xbids.get_entity(fname, "space")


def test_group_across_runs():
"""Test group_across_runs."""
in_files = [
"/path/sub-01_task-axcpt_run-03_bold.nii.gz",
"/path/sub-01_task-rest_run-03_bold.nii.gz",
"/path/sub-01_task-rest_run-01_bold.nii.gz",
"/path/sub-01_task-axcpt_run-02_bold.nii.gz",
"/path/sub-01_task-rest_run-02_bold.nii.gz",
"/path/sub-01_task-axcpt_run-01_bold.nii.gz",
]
grouped_files = xbids.group_across_runs(in_files)
assert isinstance(grouped_files, list)
assert len(grouped_files[0]) == 3
assert grouped_files[0] == [
"/path/sub-01_task-axcpt_run-01_bold.nii.gz",
"/path/sub-01_task-axcpt_run-02_bold.nii.gz",
"/path/sub-01_task-axcpt_run-03_bold.nii.gz",
]
assert len(grouped_files[1]) == 3
assert grouped_files[1] == [
"/path/sub-01_task-rest_run-01_bold.nii.gz",
"/path/sub-01_task-rest_run-02_bold.nii.gz",
"/path/sub-01_task-rest_run-03_bold.nii.gz",
]

in_files = [
"/path/sub-01_task-rest_dir-LR_run-2_bold.nii.gz",
"/path/sub-01_task-rest_dir-RL_run-1_bold.nii.gz",
"/path/sub-01_task-axcpt_dir-LR_bold.nii.gz",
"/path/sub-01_task-rest_dir-RL_run-2_bold.nii.gz",
"/path/sub-01_task-rest_dir-LR_run-1_bold.nii.gz",
"/path/sub-01_task-axcpt_dir-RL_bold.nii.gz",
]
grouped_files = xbids.group_across_runs(in_files)
assert isinstance(grouped_files, list)
assert len(grouped_files[0]) == 2
assert grouped_files[0] == [
"/path/sub-01_task-axcpt_dir-LR_bold.nii.gz",
"/path/sub-01_task-axcpt_dir-RL_bold.nii.gz",
]
assert len(grouped_files[1]) == 4
assert grouped_files[1] == [
"/path/sub-01_task-rest_dir-LR_run-1_bold.nii.gz",
"/path/sub-01_task-rest_dir-RL_run-1_bold.nii.gz",
"/path/sub-01_task-rest_dir-LR_run-2_bold.nii.gz",
"/path/sub-01_task-rest_dir-RL_run-2_bold.nii.gz",
]
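The grouping and ordering exercised by these tests could be implemented roughly as follows. This is a hypothetical sketch using regex-based entity parsing, not the actual `xcp_d.utils.bids.group_across_runs` code: files are grouped by task, then each group is sorted by run number (files without a run entity sort first) with phase-encoding direction as a tie-breaker:

```python
import re

def group_across_runs(in_files):
    """Group BOLD files by task, then sort each group by run and direction."""
    def entity(fname, key):
        # Extract a BIDS entity value (e.g. "rest" from "_task-rest_")
        match = re.search(rf"_{key}-([a-zA-Z0-9]+)", fname)
        return match.group(1) if match else ""

    groups = {}
    for fname in in_files:
        groups.setdefault(entity(fname, "task"), []).append(fname)

    out = []
    for task in sorted(groups):
        files = groups[task]
        # Missing run entities sort as 0; direction breaks ties so that
        # dir-LR precedes dir-RL within the same run.
        files.sort(key=lambda f: (int(entity(f, "run") or 0), entity(f, "dir")))
        out.append(files)
    return out
```

Sorting on `(run, dir)` is what produces the interleaved order in the second test case: both directions of run 1 come before either direction of run 2, so concatenation proceeds run by run.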