diff --git a/docs/imgs/code_org.png b/docs/imgs/code_org.png
index a550f39e..267c208d 100644
Binary files a/docs/imgs/code_org.png and b/docs/imgs/code_org.png differ
diff --git a/docs/imgs/data_org.jpg b/docs/imgs/data_org.jpg
new file mode 100644
index 00000000..b8430f07
Binary files /dev/null and b/docs/imgs/data_org.jpg differ
diff --git a/docs/imgs/data_org.png b/docs/imgs/data_org.png
deleted file mode 100644
index 267c208d..00000000
Binary files a/docs/imgs/data_org.png and /dev/null differ
diff --git a/docs/imgs/digest.png b/docs/imgs/digest.png
new file mode 100644
index 00000000..c44b5fe4
Binary files /dev/null and b/docs/imgs/digest.png differ
diff --git a/docs/nipoppy/code_org.md b/docs/nipoppy/code_org.md
index 2523f815..765acbd0 100644
--- a/docs/nipoppy/code_org.md
+++ b/docs/nipoppy/code_org.md
@@ -8,7 +8,7 @@ The Nipoppy codebase is divided into data processing `workflows` and data availa
 
 **`workflow`**
 
-- MRI data organization (`dicom_org` and `bids_conv`)
+- MRI data organization ([`dicom_org`](./workflow/dicom_org.md) and [`bids_conv`](./workflow/bids_conv.md))
    - Custom script to organize raw DICOMs (i.e. scanner output) into a flat participant-level directory.
    - Convert DICOMs into BIDS using [Heudiconv](https://heudiconv.readthedocs.io/en/latest/)
 - MRI data processing (`proc_pipe`)
diff --git a/docs/nipoppy/configs.md b/docs/nipoppy/configs.md
index b1ce8918..f71c7b86 100644
--- a/docs/nipoppy/configs.md
+++ b/docs/nipoppy/configs.md
@@ -22,6 +22,10 @@ Nipoppy requires two global files for specifying local data/container paths and
     - Information about tabular data (`TABULAR`)
         - Version and path to the data dictionary (`data_dictionary`)
 
+!!! Note
+
+    Nipoppy uses the term "session" to refer to a session ID string with the "ses-" prefix. For example, `ses-01` is a session, and `01` is the session ID associated with this session.
+
 !!! Suggestion
 
     Although not mandatory, for consistency the preferred location would be: `<DATASET_ROOT>/proc/global_configs.json`.
@@ -73,9 +77,11 @@ Nipoppy requires two global files for specifying local data/container paths and
 
 ### Participant manifest: `manifest.csv`
 
 - This list serves as the **ground truth** for subject and visit (i.e. session) availability
-    - Create the `manifest.csv` in `<DATASET_ROOT>/tabular/` comprising following columns
+    - Create the `manifest.csv` in `<DATASET_ROOT>/tabular/` comprising the following columns (an illustrative example is shown below):
        - `participant_id`: ID assigned during recruitment (at times used interchangeably with `subject_id`)
-       - `visit`: label to denote participant visit for data acquisition (e.g. `"baseline"`, `"m12"`, `"m24"` or `"V01"`, `"V02"` etc.)
+       - `visit`: label to denote participant visit for data acquisition
+           - ***Note***: we recommend that visits describe a timeline if possible, for example `BL`, `M12`, `M24` (for Baseline, Month 12, and Month 24 respectively).
+           - Alternatively, visit labels should at least be ordinal, ideally with the `V` prefix (e.g., `V01`, `V02`)
       - `session`: alternative naming for visit - typically used for imaging data to comply with the [BIDS standard](https://bids-specification.readthedocs.io/en/stable/02-common-principles.html)
       - `datatype`: a list of acquired imaging datatypes as defined by the [BIDS standard](https://bids-specification.readthedocs.io/en/stable/02-common-principles.html)
    - New participants are appended upon recruitment as new rows
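+
+For illustration, a minimal `manifest.csv` could look like the following (hypothetical values; here the session labels simply mirror the visit labels):
+
+| participant_id | visit | session | datatype |
+| -------------- | ----- | ------- | -------- |
+| MNI_001 | BL | ses-BL | ["anat","dwi"] |
+| MNI_001 | M12 | ses-M12 | ["anat"] |
+| MNI_002 | BL | ses-BL | ["anat"] |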
diff --git a/docs/nipoppy/data_org.md b/docs/nipoppy/data_org.md
index 986949b9..61f87a6e 100644
--- a/docs/nipoppy/data_org.md
+++ b/docs/nipoppy/data_org.md
@@ -21,4 +21,4 @@ Directories:
 - `backups`: data backup space (tars)
 - `releases`: data releases (symlinks)
 
-![data_org](../imgs/data_org.png)
\ No newline at end of file
+![data_org](../imgs/data_org.jpg)
\ No newline at end of file
diff --git a/docs/nipoppy/glossary.md b/docs/nipoppy/glossary.md
new file mode 100644
index 00000000..cc3a3e9d
--- /dev/null
+++ b/docs/nipoppy/glossary.md
@@ -0,0 +1,52 @@
+## Glossary
+
+This page lists some definitions for important/recurring terms used in the Nipoppy framework.
+
+### `participant_id`
+
+**Appears in**: `manifest.csv`, `doughnut.csv`
+
+: Unique identifier for the participant (i.e., subject ID), as provided by the study.
+
+### `datatype`
+
+**Appears in**: `manifest.csv`
+
+: A BIDS-compliant "data type" value (see the [BIDS specification website](https://bids-specification.readthedocs.io/en/stable/common-principles.html#definitions) for a comprehensive list). The most common data types for magnetic resonance imaging (MRI) data are `"anat"`, `"func"`, and `"dwi"`.
+
+### `visit`
+
+**Appears in**: `manifest.csv`
+
+: An identifier for a data collection event, not restricted to imaging data.
+
+See also: [`session` vs `visit`](#session-vs-visit)
+
+### `session`
+
+**Appears in**: `manifest.csv`, `doughnut.csv`
+
+: A BIDS-compliant session identifier. Consists of the `"ses-"` prefix followed by the session ID (e.g., `ses-01` for session ID `01`).
+
+#### [`session`](#session) vs [`visit`](#visit)
+
+Nipoppy uses `session` for imaging data, following the convention established by BIDS. The term `visit`, on the other hand, is used to refer to any data collection event (not necessarily imaging-related). In most cases, `session` and `visit` will be identical (or `session`s will be a subset of `visit`s). However, having two descriptors becomes particularly useful when imaging and non-imaging assessments do not use the same naming conventions.
+
+### `participant_dicom_dir`
+
+**Appears in**: `doughnut.csv`
+
+: The name of the directory in which the raw DICOM data (before the DICOM organization step) are found. Usually, this is the same as [`participant_id`](#participant_id), but depending on the study it could be different.
+
+### `dicom_id`
+
+**Appears in**: `doughnut.csv`
+
+: The [`participant_id`](#participant_id), stripped of any non-alphanumeric characters. For studies that do not use non-alphanumeric characters in their participant IDs, this is exactly the same as [`participant_id`](#participant_id).
+
+### `bids_id`
+
+**Appears in**: `doughnut.csv`
+
+: A BIDS-compliant participant identifier. Obtained by adding the `"sub-"` prefix to the [`dicom_id`](#dicom_id), which itself is derived from the [`participant_id`](#participant_id). A participant's raw BIDS data and derived imaging data are stored in directories named after their `bids_id`.
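+
+For illustration, a minimal sketch of how these identifiers relate (not actual Nipoppy code; the function name is hypothetical):
+
+```python
+import re
+
+def get_ids(participant_id):
+    """Derive dicom_id and bids_id from a participant_id."""
+    # dicom_id: the participant_id stripped of non-alphanumeric characters
+    dicom_id = re.sub(r"[^a-zA-Z0-9]", "", participant_id)
+    # bids_id: the "sub-" prefix added to the dicom_id
+    return dicom_id, f"sub-{dicom_id}"
+
+print(get_ids("MNI_001"))  # ('MNI001', 'sub-MNI001')
+```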
+
diff --git a/docs/nipoppy/installation.md b/docs/nipoppy/installation.md
index 0b5b535e..f93d746c 100644
--- a/docs/nipoppy/installation.md
+++ b/docs/nipoppy/installation.md
@@ -9,20 +9,21 @@ The Nipoppy workflow comprises a Nipoppy codebase that operates on a Nipoppy dat
 ### Nipoppy code+env installation
 1. Change directory to where you want to clone this repo, e.g.: `cd /home//projects//code/`
 2. Create a new [venv](https://realpython.com/python-virtual-environments-a-primer/): `python3 -m venv nipoppy_env`
-    * Alternatively (if using [Anaconda/Miniconda](https://www.anaconda.com/)), create a `conda` environment: `conda create --name nipoppy_env python=3.9`
-3. Activate your env: `source nipoppy_env/bin/activate`
-    * If using Anaconda/Miniconda: `conda activate nipoppy_env`
+    * Alternatively (if using [Anaconda/Miniconda](https://www.anaconda.com/)), create a `conda` environment: `conda create --name nipoppy_env python=3.9`
+3. Activate your env: `source nipoppy_env/bin/activate`
+    * If using Anaconda/Miniconda: `conda activate nipoppy_env`
 4. Clone this repo: `git clone https://github.com/neurodatascience/nipoppy.git`
-5. Change directory to `nipoppy`
-6. Install python dependencies: `pip install -e .`
+5. Change directory to `nipoppy`
+6. Install python dependencies: `pip install -e .`
 
 ### Nipoppy dataset directory setup
-Run `tree.py` to create the Nipoppy dataset directory tree:
+Run [`nipoppy/tree.py`](https://github.com/neurodatascience/nipoppy/blob/main/nipoppy/tree.py) to create the Nipoppy dataset directory tree:
 
 ```bash
-python tree.py --nipoppy_root <DATASET_ROOT>
+python nipoppy/tree.py --nipoppy_root <DATASET_ROOT>
 ```
-Where
+Where:
+
 - `DATASET_ROOT`: root (starting point) of the Nipoppy structured dataset
 
 !!! Suggestion
diff --git a/docs/nipoppy/overview.md b/docs/nipoppy/overview.md
index 62eb452b..f7804f14 100644
--- a/docs/nipoppy/overview.md
+++ b/docs/nipoppy/overview.md
@@ -1,6 +1,4 @@
-## What is Nipoppy (formerly mr_proc)?
-
-[*Process long and prosper*](https://en.wikipedia.org/wiki/Vulcan_salute)
+## What is Nipoppy?
 
 [Nipoppy](https://github.com/neurodatascience/nipoppy) is a lightweight framework for analyzing (neuro)imaging and clinical data. It is designed to help users do the following:
diff --git a/docs/nipoppy/trackers.md b/docs/nipoppy/trackers.md
new file mode 100644
index 00000000..1eba3cb7
--- /dev/null
+++ b/docs/nipoppy/trackers.md
@@ -0,0 +1,66 @@
+## Track data availability status
+
+---
+
+Trackers check the availability of files created during the dataset processing workflow (specifically the BIDS raw data and imaging pipeline derivatives) and assign an availability status (`SUCCESS`, `FAIL`, `INCOMPLETE`, or `UNAVAILABLE`).
+
+---
+
+### Key directories and files
+
+- `<DATASET_ROOT>/bids`
+- `<DATASET_ROOT>/derivatives`
+- `<DATASET_ROOT>/derivatives/bagel.csv`
+
+### Running the tracker script
+
+The tracker uses the [`manifest.csv`](./configs.md#participant-manifest-manifestcsv) and [`doughnut.csv`](./workflow/dicom_org.md#procedure) files to determine the participant-session pairs to check. Each available tracker has an associated configuration file (typically called `<pipeline>_tracker.py`), where lists of expected paths for files produced by the pipeline are defined.
+
+For each participant-session pair being tracked, the tracker outputs a `"pipeline_complete"` status. Depending on the configuration for that particular pipeline, the tracker might also output phase and/or stage statuses (e.g., `"PHASE__func"`), which typically refer to sub-pipelines within the full pipeline that may or may not have been run, depending on the input data and/or processing parameters.
+
+The tracker script updates the tabular `<DATASET_ROOT>/derivatives/bagel.csv` file (see [Understanding the `bagel.csv` output](#understanding-the-bagelcsv-output) for more information).
+
+> Sample command:
+```bash
+python nipoppy/trackers/run_tracker.py \
+    --global_config <global_config_file> \
+    --dash_schema nipoppy/trackers/bagel_schema.json \
+    --pipelines fmriprep mriqc tractoflow heudiconv
+```
+
+Notes:
+- Currently available image processing pipelines are: `fmriprep`, `mriqc`, and `tractoflow`. See [Adding a tracker](#adding-a-tracker) for the steps to add a new tracker.
+- Use `--pipelines heudiconv` to track BIDS data availability.
+- An optional `--session_id` parameter can be specified to track only a specific session. By default, the trackers are run for all sessions.
+- Other optional arguments include `--run_id` and `--acq_label`, to help generate expected file paths for BIDS Apps.
+
+### Understanding the `bagel.csv` output
+
+A JSON schema for the `bagel.csv` file produced by the tracker script is available [here](https://github.com/neurobagel/digest/blob/main/schemas/bagel_schema.json).
+
+Here is an example of a `bagel.csv` file:
+
+| bids_id | participant_id | session | has_mri_data | pipeline_name | pipeline_version | pipeline_starttime | pipeline_complete |
+| ------- | -------------- | ------- | ------------ | ------------- | ---------------- | ------------------ | ----------------- |
+| sub-MNI001 | MNI001 | 1 | TRUE | freesurfer | 6.0.1 | 2022-05-24 13:43 | SUCCESS |
+| sub-MNI001 | MNI001 | 2 | TRUE | freesurfer | 6.0.1 | 2022-05-24 13:46 | SUCCESS |
+| sub-MNI001 | MNI001 | 3 | TRUE | freesurfer | 6.0.1 | UNAVAILABLE | INCOMPLETE |
+
+The imaging derivatives bagel has one row for each participant-session-pipeline combination. The pipeline status columns are `"pipeline_complete"` and any column whose name begins with `"PHASE__"` or `"STAGE__"`. The possible values for these columns are:
+- `"SUCCESS"`: All expected pipeline output files (as configured by the pipeline tracker) are present.
+- `"FAIL"`: At least one expected pipeline output is missing.
+- `"INCOMPLETE"`: The pipeline has not been run for the subject-session pair (output directory missing).
+- `"UNAVAILABLE"`: The MRI modality relevant for the pipeline is not available for the subject-session pair (determined by the `datatype` column in the dataset's manifest file).
+
+### Adding a tracker
+
+1. Create a new file in `nipoppy/trackers` called `<pipeline>_tracker.py`.
+2. Define a config dictionary `tracker_configs`, with a mandatory key `"pipeline_complete"` whose value is a function that takes as input the path to the subject result directory, as well as the session and run IDs, and outputs one of `"SUCCESS"`, `"FAIL"`, `"INCOMPLETE"`, or `"UNAVAILABLE"`. See the built-in [fMRIPrep tracker](https://github.com/neurodatascience/nipoppy/blob/main/nipoppy/trackers/fmriprep_tracker.py) for an example, or the sketch after this list.
+3. Optionally add additional stages and phases to track. Again, refer to the [fMRIPrep tracker](https://github.com/neurodatascience/nipoppy/blob/main/nipoppy/trackers/fmriprep_tracker.py) or to any other pre-defined tracker configuration for an example.
+4. Modify `nipoppy/trackers/run_tracker.py` to add the new tracker as an option.
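+
+For orientation, here is a minimal sketch of such a tracker configuration file (illustrative only: the structure follows the description above, but the expected output path and function names are hypothetical; see the built-in trackers for the real conventions):
+
+```python
+from pathlib import Path
+
+def check_my_pipeline_complete(subject_dir, session_id, run_id):
+    """Return an availability status based on expected output files."""
+    subject_dir = Path(subject_dir)
+    if not subject_dir.exists():
+        return "INCOMPLETE"  # pipeline never ran for this subject-session pair
+    # Hypothetical expected output file for this sketch
+    expected = subject_dir / f"ses-{session_id}" / "anat" / "desc-preproc_T1w.nii.gz"
+    return "SUCCESS" if expected.exists() else "FAIL"
+
+tracker_configs = {
+    # Mandatory key: overall pipeline completion check
+    "pipeline_complete": check_my_pipeline_complete,
+    # Optional phase/stage checks use the same callable signature, e.g.:
+    # "PHASE__func": check_my_pipeline_func_phase,
+}
+```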
+
+### Visualizing availability status with the Neurobagel [`digest`](https://digest.neurobagel.org/)
+
+The `bagel.csv` file written by the tracker can be uploaded to [https://digest.neurobagel.org/](https://digest.neurobagel.org/) (as an "imaging CSV file") for interactive visualizations of processing status.
+
+![digest](../imgs/digest.png)
diff --git a/docs/nipoppy/workflow/bids_conv.md b/docs/nipoppy/workflow/bids_conv.md
index d71cec77..1e959bca 100644
--- a/docs/nipoppy/workflow/bids_conv.md
+++ b/docs/nipoppy/workflow/bids_conv.md
@@ -17,17 +17,17 @@ Convert DICOMs to BIDS using [Heudiconv](https://heudiconv.readthedocs.io/en/lat
 ### Procedure
 
 1. Ensure you have the appropriate HeuDiConv container listed in your `global_configs.json`
-2. Use [run_bids_conv.py](https://github.com/neurodatascience/nipoppy/blob/main/nipoppy/workflow/bids_conv/run_bids_conv.py) to run HeuDiConv `stage_1` and `stage_2`.
+2. Use [nipoppy/workflow/bids_conv/run_bids_conv.py](https://github.com/neurodatascience/nipoppy/blob/main/nipoppy/workflow/bids_conv/run_bids_conv.py) to run HeuDiConv `stage_1` and `stage_2`.
    - Run `stage_1` to generate a list of available protocols from the DICOM header. These protocols are listed in `<DATASET_ROOT>/bids/.heudiconv/<participant_id>/info/dicominfo_ses-<session_id>.tsv`
 
 > Sample cmd:
 ```bash
-python run_bids_conv.py \
+python nipoppy/workflow/bids_conv/run_bids_conv.py \
     --global_config <global_config_file> \
     --session_id <session_id> \
     --stage 1
 ```
-
+
 !!! note
     If participants have multiple sessions (or visits), these need to be converted separately and combined post-hoc to avoid Heudiconv errors.
@@ -43,7 +43,7 @@ python run_bids_conv.py \
 
 > Sample cmd:
 ```bash
-python run_bids_conv.py \
+python nipoppy/workflow/bids_conv/run_bids_conv.py \
     --global_config <global_config_file> \
     --session_id <session_id> \
     --stage 2
 ```
@@ -52,4 +52,4 @@ python run_bids_conv.py \
 
 !!! note
 
-    Once `heuristic.py` is finalized, only `stage_2` needs to be run peridodically unless new scan protocol is added.
+    Once `heuristic.py` is finalized, only `stage_2` needs to be run periodically unless a new scan protocol is added.
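+
+For orientation, a `heuristic.py` file follows HeuDiConv's standard heuristic structure. A minimal sketch is shown below (the protocol names are hypothetical; the real mapping should be based on the protocols listed in the `stage_1` output):
+
+```python
+def create_key(template, outtype=("nii.gz",), annotation_classes=None):
+    if template is None or not template:
+        raise ValueError("Template must be a valid format string")
+    return template, outtype, annotation_classes
+
+def infotodict(seqinfo):
+    """Map scan protocols (from the DICOM headers) to BIDS-named outputs."""
+    t1w = create_key("sub-{subject}/{session}/anat/sub-{subject}_{session}_T1w")
+    bold = create_key("sub-{subject}/{session}/func/sub-{subject}_{session}_task-rest_bold")
+    info = {t1w: [], bold: []}
+    for s in seqinfo:
+        if "MPRAGE" in s.protocol_name:  # hypothetical T1w protocol name
+            info[t1w].append(s.series_id)
+        elif "rsfMRI" in s.protocol_name:  # hypothetical resting-state protocol name
+            info[bold].append(s.series_id)
+    return info
+```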
diff --git a/docs/nipoppy/workflow/dicom_org.md b/docs/nipoppy/workflow/dicom_org.md
index 14d48b2f..b3f6d0e6 100644
--- a/docs/nipoppy/workflow/dicom_org.md
+++ b/docs/nipoppy/workflow/dicom_org.md
@@ -2,7 +2,7 @@
 
 ---
 
-This is a dataset specific process and needs to be customized based on local scanner DICOM dumps and file naming. This organization should produce, for a given session, participant specific dicom dirs. Each of these participant-dir contains a flat list of dicoms for the participant for all available imaging modalities and scan protocols. The manifest is used to determine which new subject-session pairs need to be processed, and a `doughnut.csv` file is used to track the status for the DICOM reorganization and BIDS conversion steps.
+This is a dataset-specific process and needs to be customized based on local scanner DICOM dumps and file naming. This organization should produce, for a given session, participant-specific DICOM directories. Each participant directory contains a flat list of DICOMs for the participant for all available imaging modalities and scan protocols. The manifest is used to determine which new subject-session pairs need to be processed, and a `doughnut.csv` file is used to track the status of the DICOM reorganization and BIDS conversion steps.
 
 ---
 
 ### Key directories and files
@@ -15,7 +15,7 @@ This is a dataset specific process and needs to be customized based on local sca
 
 ### Procedure
 
-1. Run [`workflow/make_doughnut.py`](https://github.com/neurodatascience/nipoppy/blob/main/nipoppy/workflow/make_doughnut.py) to update `doughnut.csv` based on the manifest. It will add new rows for any subject-session pair not already in the file.
+1. Run [`nipoppy/workflow/make_doughnut.py`](https://github.com/neurodatascience/nipoppy/blob/main/nipoppy/workflow/make_doughnut.py) to update `doughnut.csv` based on the manifest. It will add new rows for any subject-session pair not already in the file.
    - To create the `doughnut.csv` for the first time, use the `--empty` argument. If processing has been done without updating `doughnut.csv`, use `--regenerate` to update it based on new files in the dataset.
 
 !!! note
@@ -42,18 +42,18 @@ This is a dataset specific process and needs to be customized based on local sca
 
 !!! note
 
-    It is **okay** for the participant directory to have messy internal subdir tree with DICOMs from multiple modalities. (See [data org schematic](data_org.md) for details). The run script will search and validate all available DICOM files automatically.
+    It is **okay** for the participant directory to have a messy internal subdirectory tree with DICOMs from multiple modalities (see the [data org schematic](../../imgs/data_org.jpg) for details). The run script will search and validate all available DICOM files automatically.
 
-4. Run [`run_dicom_org.py`](https://github.com/neurodatascience/nipoppy/blob/main/nipoppy/workflow/dicom_org/run_dicom_org.py) to:
+4. Run [`nipoppy/workflow/dicom_org/run_dicom_org.py`](https://github.com/neurodatascience/nipoppy/blob/main/nipoppy/workflow/dicom_org/run_dicom_org.py) to:
    - Search: Find all the DICOMs inside the participant directory.
-   - Validate: Excludes certain individual dicom files that are invalid or contain scanner-derived data not compatible with BIDS conversion.
-   - Symlink (default) or copy: Creates symlinks from `raw_dicom/` to the `<DATASET_ROOT>/dicom`, where all participant specific dicoms are in a flat list. The symlinks are relative so that they are preserved in containers.
+   - Validate: Excludes certain individual DICOM files that are invalid or contain scanner-derived data not compatible with BIDS conversion. Enabled by default; disable by passing `--skip_dcm_check`.
+   - Symlink (default) or copy: Creates symlinks from `raw_dicom/` to `<DATASET_ROOT>/dicom`, where all participant-specific DICOMs are in a flat list. The symlinks are relative so that they are preserved in containers. Disable symlinking by passing `--no_symlink`.
    - Update status: if successful, set the `organized` column to `True` in `doughnut.csv`.
 
 > Sample cmd:
 ```bash
-python run_dicom_org.py \
+python nipoppy/workflow/dicom_org/run_dicom_org.py \
     --global_config <global_config_file> \
     --session_id <session_id>
 ```
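+
+For illustration, after a successful run the updated `doughnut.csv` entry could look like this (hypothetical values; only a subset of columns shown):
+
+| participant_id | session | participant_dicom_dir | dicom_id | bids_id | organized |
+| -------------- | ------- | --------------------- | -------- | ------- | --------- |
+| MNI_001 | ses-01 | MNI_001 | MNI001 | sub-MNI001 | True |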
diff --git a/docs/nipoppy/workflow/proc_pipe/fmriprep.md b/docs/nipoppy/workflow/proc_pipe/fmriprep.md
index 68b2697b..f8cd07a5 100644
--- a/docs/nipoppy/workflow/proc_pipe/fmriprep.md
+++ b/docs/nipoppy/workflow/proc_pipe/fmriprep.md
@@ -1,8 +1,8 @@
-### Objective
+## Objective
 
 ---
 
-Run [fMRIPrep](https://fmriprep.org/en/stable/) pipeline on BIDS formatted dataset. Note that a standard fMRIPrep run also include FreeSurfer processing.
+Run the [fMRIPrep](https://fmriprep.org/en/stable/) pipeline on a BIDS-formatted dataset. Note that a standard fMRIPrep run also includes FreeSurfer processing.
 
 ---
 
@@ -16,7 +16,7 @@ Run [fMRIPrep](https://fmriprep.org/en/stable/) pipeline on BIDS formatted datas
 ### Procedure
 
 - Ensure you have the appropriate fMRIPrep container listed in your `global_configs.json`
-- Use [run_fmriprep.py](https://github.com/neurodatascience/nipoppy/blob/main/nipoppy/workflow/proc_pipe/fmriprep/run_fmriprep.py) script to run fmriprep pipeline.
+- Use the [nipoppy/workflow/proc_pipe/fmriprep/run_fmriprep.py](https://github.com/neurodatascience/nipoppy/blob/main/nipoppy/workflow/proc_pipe/fmriprep/run_fmriprep.py) script to run the fMRIPrep pipeline.
    - You can run the "anatomical only" workflow by adding the `--anat_only` flag
 - (Optional) Copy+Rename [sample_bids_filter.json](https://github.com/neurodatascience/nipoppy/blob/main/nipoppy/workflow/proc_pipe/fmriprep/sample_bids_filter.json) to `bids_filter.json` in the code repo itself. Then edit `bids_filter.json` to filter certain modalities / acquisitions. This is common when you have multiple T1w acquisitions (e.g. Neuromelanin, SPIR etc.) for a given modality.
@@ -35,7 +35,7 @@ Run [fMRIPrep](https://fmriprep.org/en/stable/) pipeline on BIDS formatted datas
 
 > Sample cmd:
 ```bash
-python run_fmriprep.py \
+python nipoppy/workflow/proc_pipe/fmriprep/run_fmriprep.py \
     --global_config <global_config_file> \
     --participant_id MNI01 \
     --session_id 01
 ```
 
 !!! note
 
-    Unlike DICOM and BIDS run scripts, `run_fmriprep.py` can only process 1 participant at a time due to heavy compute requirements of fMRIPrep. For parallel processing on a cluster, sample HPC job scripts (slurm and sge) are provided in [hpc](https://github.com/neurodatascience/nipoppy/tree/main/workflow/proc_pipe/fmriprep/scripts) subdir.
+    Unlike the DICOM and BIDS run scripts, `run_fmriprep.py` can only process 1 participant at a time due to the heavy compute requirements of fMRIPrep. For parallel processing on a cluster, sample HPC job scripts (Slurm and SGE) are provided in the [hpc](https://github.com/neurodatascience/nipoppy/tree/main/workflow/proc_pipe/fmriprep/scripts) subdirectory.
 
 !!! note
@@ -57,7 +57,7 @@ python run_fmriprep.py \
 
 ### fMRIPrep tasks
 
-    - Main MR processing tasks run by fmriprep (see [fMRIPrep](https://fmriprep.org/en/stable/) for details):
+    - Main MR processing tasks run by fMRIPrep (see the [fMRIPrep documentation](https://fmriprep.org/en/stable/) for details):
        - Preprocessing
           - Bias correction / Intensity normalization (N4)
           - Brain extraction (ANTs)
diff --git a/docs/nipoppy/workflow/proc_pipe/mriqc.md b/docs/nipoppy/workflow/proc_pipe/mriqc.md
index 1ed6c1fa..e394a9e6 100644
--- a/docs/nipoppy/workflow/proc_pipe/mriqc.md
+++ b/docs/nipoppy/workflow/proc_pipe/mriqc.md
@@ -1,4 +1,4 @@
-### MRIQC image processing pipeline
+## MRIQC image processing pipeline
 
 ---
 
@@ -8,46 +8,18 @@ MRIQC processes the participants and produces image quality metrics from T1w, T2
 
 ### [MRIQC](https://mriqc.readthedocs.io/en/latest/)
 
-- Use [run_mriqc.py](https://github.com/neurodatascience/nipoppy/blob/main/nipoppy/workflow/proc_pipe/mriqc/run_mriqc.py) to run MRIQC pipeline directly or wrap the script in an SGE/Slurm script to run on cluster
+- Use [nipoppy/workflow/proc_pipe/mriqc/run_mriqc.py](https://github.com/neurodatascience/nipoppy/blob/main/nipoppy/workflow/proc_pipe/mriqc/run_mriqc.py) to run the MRIQC pipeline directly, or wrap the script in an SGE/Slurm script to run on a cluster
 
 ```bash
-python run_mriqc.py --global_config CONFIG.JSON --subject_id 001 --output_dir OUTPUT_DIR_PATH
+python nipoppy/workflow/proc_pipe/mriqc/run_mriqc.py \
+    --global_config <global_config_file> \
+    --participant_id <participant_id> \
+    --session_id <session_id> \
+    --modalities <modalities>
 ```
-- Mandatory: Pass in the absolute path to the configuration containing the MRIQC container and data directory to `global_config`
-- Mandatory: Pass in the subject id to `participant_id`
-- Mandatory: Pass in the subject id to `session_id`
-- Mandatory: Pass in the absolute path to the output directory to `output_dir`
-
-!!! note
-    An example config is located [here](https://github.com/neurodatascience/nipoppy/blob/main/nipoppy/sample_global_configs.json)
-
-> Sample cmd:
-```bash
-python run_mriqc.py \
-    --global_config GLOBAL_CONFIG \
-    --participant_id SUBJECT_ID \
-    --output_dir OUTPUT_DIR \
-    --session_id SESSION_ID
-```
-
-!!! note
-    A run for a participant is considered successful when the participant's log file reads `Participant level finished successfully`
-
-### Evaluate MRIQC Results
-- Use [mriqc_tracker.py](https://github.com/neurodatascience/nipoppy/blob/main/nipoppy/trackers/mriqc_tracker.py) to determine how many subjects successfully passed through the MRIQC pipeline
-    - Mandatory: Pass in the subject directory as an argument
-- After a successful run of the script, a dictionary called tracker_configs is returned contained whether the subject passed through the pipeline successfully
-
-!!! note
-    Multiple sessions can be evaluated, but each session will require a new job running this script
-
-> Sample cmd:
-```pycon
->>> results = {"pipeline_complete': mriqc_tracker.eval_mriqc(subject_dir, session_id)}
->>> results
-    SUCCESS
->>> results = {"MRIQC_BOLD': mriqc_tracker.check_bold(subject_dir, session_id)}
->>> results
-    FAIL
-```
+The required arguments are:
+- `--global_config`: path to the configuration file containing the MRIQC container and data directory
+- `--participant_id`: participant/subject ID
+- `--session_id`: session ID
+- `--modalities`: modality/modalities to check (valid values: `T1w`, `T2w`, `bold`, `dwi`)
diff --git a/mkdocs.yml b/mkdocs.yml
index 952339fd..48687e8e 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -45,9 +45,11 @@ nav:
   - Workflow:
     - "DICOM organization": "nipoppy/workflow/dicom_org.md"
     - "BIDS conversion": "nipoppy/workflow/bids_conv.md"
-    - Pipeline specific instructions:
+    - Pipeline-specific instructions:
      - fmriprep: "nipoppy/workflow/proc_pipe/fmriprep.md"
      - mriqc: "nipoppy/workflow/proc_pipe/mriqc.md"
+  - Trackers: "nipoppy/trackers.md"
+  - Glossary: "nipoppy/glossary.md"
 - Contributing:
    - Pull requests: "contributing/pull_requests.md"
    - Our team: "contributing/team.md"