The goal of this Neurohackademy 2019 project is to provide context for the image quality metrics (IQMs) shown in MRIQC group reports, by plotting the distribution of IQMs for your data against a larger set of anonymized IQMs pulled from the MRIQC web API.
As described in the MRIQC documentation, many of the IQMs calculated are "no-reference" metrics. "A no-reference IQM is a measurement of some aspect of the actual image which cannot be compared to a reference value for the metric since there is no ground-truth about what this number should be." [link] Therefore, it can be difficult for users to get a sense of how their data quality compares to normative data quality.
For example, in this dataset it's easy to see that there's one participant whose mean framewise displacement is much greater than the rest.
But in the dataset shown below there are no obvious outliers, even though everyone in this sample has an undesirably high degree of motion (mean FD > 2 mm!).
Hopefully, you're actually paying attention to the Y-axis scale (if it's something like framewise displacement that has easily-interpretable units), but we designed mriqception to make it simpler to quickly spot problems like this. The plot below shows these first two example datasets relative to 10,000 datapoints from the web API:
From this figure, you can immediately see that the second example dataset falls well outside the distribution of the web API data, indicating overall poorer data quality relative to other datasets processed by MRIQC.
mriqception takes user IQMs from MRIQC and plots them relative to IQMs pulled from the 200k+ images in the MRIQC web API (we're going to call those "normative" IQMs). The user has the option to filter the API query by relevant acquisition parameters, such as field strength (Tesla), TR, and TE.
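Under the hood this is a plain REST query against the web API. Here is a minimal sketch using `requests`, assuming the documented `https://mriqc.nimh.nih.gov/api/v1/<modality>` endpoint; the Eve-style `where` filter and field names like `bids_meta.MagneticFieldStrength` and `fd_mean` are assumptions to double-check against the API docs:

```python
import requests

# Query the web API for BOLD IQMs from 3T acquisitions with TR < 3 s.
# Filter field names are assumptions; check the API documentation.
url = "https://mriqc.nimh.nih.gov/api/v1/bold"
params = {
    "max_results": 50,  # the API paginates; loop over "page" for more
    "where": '{"bids_meta.MagneticFieldStrength": 3, '
             '"bids_meta.RepetitionTime": {"$lt": 3.0}}',
}
resp = requests.get(url, params=params)
resp.raise_for_status()
api_vals = [item["fd_mean"] for item in resp.json()["_items"]]
```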
mriqception also features a brief description of the IQM underneath the plot. We have tried to make these descriptions as user-friendly as possible.
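The overlay itself can be sketched with two plotly violin traces. Here `api_vals` is the list from the query sketch above, and the TSV path and `fd_mean` column are examples; this illustrates the approach rather than mriqception's exact internals:

```python
import pandas as pd
import plotly.graph_objects as go

# Your group-level IQMs from MRIQC (path and column are examples).
user = pd.read_csv("./test_data/group_bold.tsv", sep="\t")

fig = go.Figure()
# Normative distribution from the web API, drawn as a violin.
fig.add_trace(go.Violin(y=api_vals, name="web API", points=False))
# Your sample overlaid as a second violin with individual points shown.
fig.add_trace(go.Violin(y=user["fd_mean"], name="your data",
                        points="all", pointpos=0))
fig.update_layout(yaxis_title="fd_mean (mm)")
fig.show()
```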
Like MRIQC, mriqception is descriptive rather than prescriptive.
Importantly, mriqception does not tell you whether your IQMs are "good" or not. It simply shows you how the IQMs from your sample compare to other users' data, as an additional decision-making tool for your QC process and as a quick way to see how image quality in your dataset stacks up against other datasets. What you do with that information is up to you!
- Open the Jupyter notebook:

  ```
  $ jupyter notebook Presentation_Notebook.ipynb
  ```
- Change the filepath in line XX from `./test_data/group_{modality}.tsv` to the location of your MRIQC group TSV file.
- Select one or more acquisition parameters by which to filter the web API query: TR, TE, and field strength (Tesla) are currently supported.
- Select whether you want your plot to include outliers in the API data. The lower outlier threshold is calculated as `Q1(data) - 1.5*IQR(data)` and the upper threshold as `Q3(data) + 1.5*IQR(data)` (see the sketch after this list). The default is to include outliers (`outliers = True`); you can exclude them by setting `outliers = False` in the Jupyter notebook. The plots are interactive, so you can zoom in to rescale and examine your data more closely if outliers in the API data dominate the scale of the plot.
- Select the IQM you want to examine from the dropdown menu.
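The outlier thresholds above are standard Tukey fences. A minimal sketch, with `api_vals` standing in for the values returned by the web API query (illustrative, not mriqception's exact code):

```python
import numpy as np

def tukey_fences(values, k=1.5):
    """Lower/upper outlier thresholds: Q1 - k*IQR and Q3 + k*IQR."""
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    return q1 - k * iqr, q3 + k * iqr

lower, upper = tukey_fences(api_vals)
trimmed = [v for v in api_vals if lower <= v <= upper]
```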
- You must have run MRIQC on your data at least at the group level, generating group .tsv files for each modality (T1w, T2w, BOLD) you want to look at. These are named something like `group_t1w.tsv` and/or `group_bold.tsv`, and should be located in `<PATH TO YOUR BIDS DIRECTORY>/derivatives/mriqc/`. Note that this project was developed against output from MRIQC v0.15.2rc1; if MRIQC changes the names of the IQMs it returns in the TSV, you may need to change the variable names in `tools/figs.py` (see the sketch after this list for a quick way to check the column names).
- You must have the plotly and pandas libraries installed:

  ```
  $ pip install pandas==0.25.0 plotly==4.0.0
  ```
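If you want to check that the IQM column names in your group TSV match what `tools/figs.py` expects, a quick pandas snippet does it (the path below is an example; point it at your own derivatives tree):

```python
import pandas as pd

# Example path; adjust to <PATH TO YOUR BIDS DIRECTORY>/derivatives/mriqc/
group = pd.read_csv("bids/derivatives/mriqc/group_bold.tsv", sep="\t")
print(sorted(group.columns))  # IQM column names, e.g. fd_mean, snr, tsnr
```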
L to R: Sofía Fernández-Lozano, Ayelet Gertsovski, Helena Gellersen, Chris Foulon, Estée Rubien-Thomas, Catherine Walsh, Stephanie DeCross, Saren Seeley, Damion Demeter, Elizabeth Beard