diff --git a/docs/bids.md b/docs/bids.md index 6e8f693..d472ba4 100644 --- a/docs/bids.md +++ b/docs/bids.md @@ -2,7 +2,7 @@ ## CuBIDS (Curation of BIDS) -Used to make sure the data is BIDS valid (utilizing the [bids-validator package 1.7.2](https://cubids.readthedocs.io/en/latest/installation.html#:~:text=Now%20that%20we,%24), which will already be installed on MSI) and to ensure all of the acquisition parameters of the data are what you expect them to be. If you are running CuBIDS on data that has more than ten subjects, then use an [srun ](slurm-params.md#srun)or an [sbatch](slurm-params.md#sbatch). +Used to make sure the data is BIDS valid (utilizing the [bids-validator package 1.7.2](https://cubids.readthedocs.io/en/latest/installation.html#:~:text=Now%20that%20we,%24), which will already be installed on MSI) and to ensure all of the acquisition parameters of the data are what you expect them to be. If you are running CuBIDS on data that has more than ten subjects, then use an [srun](slurm-params.md#srun) or an [sbatch](slurm-params.md#sbatch). 14. Load CuBIDS environment: @@ -10,9 +10,9 @@ Used to make sure the data is BIDS valid (utilizing the [bids-validator package `source /home/faird/shared/code/external/envs/miniconda3/load_miniconda3.sh` - - This ensures that you have all of the proper packages installed to run cuBIDS. + - This ensures that you have all of the proper packages installed to run CuBIDS. - - Activate cuBIDS environment + - Activate CuBIDS environment `conda activate cubids` diff --git a/docs/filemapper.md b/docs/filemapper.md index 9937599..438f3ba 100644 --- a/docs/filemapper.md +++ b/docs/filemapper.md @@ -1,6 +1,6 @@ # File-mapper -File mapper maps processed outputs into the BIDS format for the abcd-hcp and infant-abcd-bids pipelines. It is designed to help users copy, move, or symlink files from one directory to another using a JSON file as input. 
The JSON keys are the source of the files and the JSON values are the desired destination path. This program has been used to move important files from a large dataset and would be applicable in any BIDS datasets or other renaming/mapping utility. Another added benefit of file-mapper is that it trims down the amount of files if you delete the original source directory of file-mapper after running it. +File-mapper maps processed outputs into the BIDS format for the abcd-hcp-pipeline and DCAN-infant pipelines. It is designed to help users copy, move, or symlink files from one directory to another using a JSON file as input. The JSON keys are the source paths of the files and the JSON values are the desired destination paths. This program has been used to move important files from a large dataset and is applicable to any BIDS dataset or other renaming/mapping task. Another benefit of file-mapper is that it reduces the total number of files, provided you delete the original source directory after running it. 1. File-mapper usage: @@ -10,7 +10,7 @@ File mapper maps processed outputs into the BIDS format for the abcd-hcp and inf * Run with the following command: ``` - python3 ./file_mapper_script.py -a copy -sp [full output directory of a single subject down to /file] -dp [output dir] -t SUBJECT=[part after “sub-”],SESSION=[part after “ses-”],PIPELINE=[folder name (e.g. abcd-bids)] + python3 ./file_mapper_script.py -a copy -sp [full output directory of a single subject down to /file] -dp [output dir] -t SUBJECT=[part after “sub-”],SESSION=[part after “ses-”],PIPELINE=[folder name (e.g. ABCD-BIDS)] ``` * `PIPELINE` refers to the directory your outputs are in which are inside the derivatives folder. diff --git a/docs/index.md b/docs/index.md index c60d8fb..c5f494e 100644 --- a/docs/index.md +++ b/docs/index.md @@ -21,11 +21,11 @@ This handbook outlines key tools and resources used by DCAN Labs.
Topics covered - BIDS: - Anything to convert or validate BIDS data - Before Processing: - - Any steps the lab does before running processing (ABCD-BIDS, fMRIprep, Nibabies, etc) + - Any steps the lab does before running processing (ABCD-BIDS, fMRIPrep, NiBabies, etc) - Processing: - - Running data processing pipelines on MSI (ABCD-BIDS, fMRIprep, Nibabies, etc) + - Running data processing pipelines on MSI (ABCD-BIDS, fMRIPrep, NiBabies, etc) - After Processing: - - Any steps the lab does after running processing (ABCD-BIDS, fMRIprep, Nibabies, etc) + - Any steps the lab does after running processing (ABCD-BIDS, fMRIPrep, NiBabies, etc) - Troubleshooting: - Notes on troubleshooting the labs codebases - Quality Control: diff --git a/docs/infant-doc.md b/docs/infant-doc.md index 521b9f8..6f882b0 100644 --- a/docs/infant-doc.md +++ b/docs/infant-doc.md @@ -190,7 +190,7 @@ Default option: use auto-detection ``` -Distortion correction is performed on BOLD images during fMRIVolume using the script `[DistortionCorrectionAndEPIToT1wReg_FLIRTBBRAndFreeSurferBBRbased.sh](https://github.com/DCAN-Labs/dcan-infant-pipeline/blob/master/fMRIVolume/scripts/DistortionCorrectionAndEPIToT1wReg_FLIRTBBRAndFreeSurferBBRbased.sh)`. BOLD data is typically collected from anterior to posterior, causing the BOLD images to be geometrically distorted in the same direction. Distortion correction is typically performed by using either BOLD data acquired in the reverse direction or field maps also acquired in the reverse direction. Not all data has field maps, but if they do, they are located under the `fmaps` directory in a subject’s raw NIFTIs. +Distortion correction is performed on BOLD images during fMRIVolume using the script `[DistortionCorrectionAndEPIToT1wReg_FLIRTBBRAndFreeSurferBBRbased.sh](https://github.com/DCAN-Labs/dcan-infant-pipeline/blob/master/fMRIVolume/scripts/DistortionCorrectionAndEPIToT1wReg_FLIRTBBRAndFreeSurferBBRbased.sh)`. 
BOLD data is typically collected from anterior to posterior, causing the BOLD images to be geometrically distorted in the same direction. Distortion correction is typically performed by using either BOLD data acquired in the reverse direction or field maps also acquired in the reverse direction. Not all data has field maps, but if they do, they are located under the `fmaps` directory in a subject’s raw NIfTIs. The default option is for the pipeline to autodetect which method to use. If these are present, the pipeline will either use TOPUP or FIELDMAP methods for DC. If fieldmaps are not present, the pipeline will use the T2_DC method. The pipeline will only skip DC if the flag option is set to NONE. diff --git a/docs/pipelines.md b/docs/pipelines.md index 09211ff..855c419 100644 --- a/docs/pipelines.md +++ b/docs/pipelines.md @@ -7,13 +7,13 @@ This page has the recommended flags and example commands for the major pipelines Infants - Nibabies, DCAN Infant Pipeline, XCPD + NiBabies, DCAN Infant Pipeline, XCPD Children (4 years and older) and Adults - Fmriprep, abcd-bids pipeline, XCPD + fMRIPrep, ABCD-BIDS pipeline, XCPD @@ -21,7 +21,7 @@ This page has the recommended flags and example commands for the major pipelines Note: the cutoff age for using an adult pipeline depends on the scope of the project -## 1. fMRIprep +## 1. fMRIPrep Read: [fMRIPrep ReadTheDocs site](https://fmriprep.org/en/stable/) @@ -77,7 +77,7 @@ ${singularity} run --cleanenv \ Read: [NiBabies ReadTheDocs site](https://nibabies.readthedocs.io/en/latest/) -Nibabies is a robust pre-processing MRI and fMRI workflow that is also a part of the NiPreps family. NiBabies is designed and optimized for human infants between 0-2 years old. For more information on Nibabies code, see [here](https://github.com/nipreps/nibabies).
For in-depth usage information, one can utilize [this google doc](https://docs.google.com/document/d/1PW8m1tWWqqgKCAJ9XLpJ5tycPB5gqFodrnvNOIavTAA/edit) or see the [Read the Docs here](https://nibabies.readthedocs.io/en/latest/installation.html). +NiBabies is a robust pre-processing MRI and fMRI workflow that is also a part of the NiPreps family. NiBabies is designed and optimized for human infants between 0 and 2 years old. For more information on the NiBabies code, see [here](https://github.com/nipreps/nibabies). For in-depth usage information, one can utilize [this Google Doc](https://docs.google.com/document/d/1PW8m1tWWqqgKCAJ9XLpJ5tycPB5gqFodrnvNOIavTAA/edit) or see the [Read the Docs here](https://nibabies.readthedocs.io/en/latest/installation.html). 62. Preferred flags: @@ -88,7 +88,7 @@ Nibabies is a robust pre-processing MRI and fMRI workflow that is also a part of - `--session-id \`: when running a subject with multiple sessions, need to specify which session is being processed as well as the age - - `--derivatives /derivatives \` : Nibabies will use a segmentation from the segmentation pipeline (pre-postBIBSnet). This flag is used to clarify that the precomputed segmentation directory is being utilized. + - `--derivatives /derivatives \` : NiBabies will use a segmentation from the segmentation pipeline (pre-postBIBSnet). This flag is used to clarify that the precomputed segmentation directory is being utilized. - `--cifti-output 91k \` : Possible choices: 91k, 170k. Output preprocessed BOLD as a CIFTI-dense time series. Optionally, the number of grayordinate can be specified (default is 91k, which equates to 2mm resolution). Default: False @@ -231,7 +231,7 @@ The XCP-D workflow takes fMRIPRep, NiBabies, DCAN and HCP outputs in the form of * `-f 0.3` : framewise displacement threshold for censoring.** Here,** **0.3 is preferred versus a stricter threshold to avoid excluding too much data from the regression model.
Stricter thresholding can still be applied when running your analyses on the XCP-D output. ** - * `--cifti` : postprocess cifti instead of nifti; this is set default for dcan and hcp input + * `--cifti` : postprocess CIFTI instead of NIfTI; this is set as the default for DCAN and HCP input * **[for version 0.2.0 and newer, and “develop” / “unstable” builds dated 10-21-2022 or newer]** `--warp-surfaces-native2std` : resample surfaces to fsLR space, 32k density, and apply the transform from native T1w to MNI152NLin6Asym space output by fMRIPrep / NiBabies @@ -245,7 +245,7 @@ The XCP-D workflow takes fMRIPRep, NiBabies, DCAN and HCP outputs in the form of * `--omp-threads 3` : maximum number of threads per-process - * `--despike` : despike the nifti/cifti before postprocessing + * `--despike` : despike the NIfTI/CIFTI before postprocessing * `-w /work` : used to specify a working directory within the container’s filesystem, named _/work_. diff --git a/docs/precision.md b/docs/precision.md index e143635..0e8b8cf 100644 --- a/docs/precision.md +++ b/docs/precision.md @@ -4,7 +4,7 @@ Read:_ [Rapid Precision Functional Mapping of Individuals Using Multi-Echo fMRI **Parcellated functional connectivity (FC)** -**_Time x FC reliability curve._** The script at <location> generates a reliability curve from parcellated BOLD timeseries files (.ptseries.nii) in an XCP-D derivatives directory. It is a Slurm sbatch script which requires a list of positional arguments to run. +**_Time x FC reliability curve._** The script at <location> generates a reliability curve from parcellated BOLD timeseries files (`.ptseries.nii`) in an XCP-D derivatives directory. It is a Slurm sbatch script which requires a list of positional arguments to run.
The general form of the run command is diff --git a/docs/preprocessing.md b/docs/preprocessing.md index 96bfa50..7ef36c4 100644 --- a/docs/preprocessing.md +++ b/docs/preprocessing.md @@ -2,9 +2,9 @@ For this pipeline, the data first needs to be converted and properly orientated before being ran. -1. Get DICOMS from s3 bucket (`s3://zlab-nhp`) +1. Get DICOMs from the S3 bucket (`s3://zlab-nhp`) -1. Convert DICOMS to BIDS and apply NORDIC +1. Convert DICOMs to BIDS and apply NORDIC - Use the [Dcm2bids3 NORDIC wrapper](nordic.md) -- needs a pair of 10.5T-specific config files to run, and use `--keep-non-nordic` when calling `nordicsbatch.sh` in the post-op command - Confirm the number of noise volumes per run-- for the Z-Lab 10.5T data, this is usually 5 diff --git a/docs/processing-sop.md b/docs/processing-sop.md index 18b7cdd..9d2958d 100644 --- a/docs/processing-sop.md +++ b/docs/processing-sop.md @@ -13,7 +13,7 @@ Below are steps that should be followed in order to ensure your processing effor 4. Once your data is on the MSI, determine if has been (properly) converted to BIDS. - Even if the person that provided the data to you says it has been successfully converted to BIDS, you should run [CuBIDS](bids.md) on the dataset. - If you're starting with DICOMs, see [here](dcm2bids.md) for BIDS conversion tips. - - If you have NIFTI files that are not BIDS-compliant, you will more than likely have to write a script to finish the conversion. + - If you have NIfTI files that are not BIDS-compliant, you will likely have to write a script to finish the conversion. 5. Create a working directory in the project folder for the share you intend to work on. This is where you will put your job wrappers, logs, and status updates. Make sure to name the folder intelligently based on the study and codebase you are running. - Make sure to check the `groupquota` to ensure there's room for your process.
```groupquota -g share_name``` diff --git a/docs/slurm-params.md b/docs/slurm-params.md index 4e82ab2..20fd94a 100644 --- a/docs/slurm-params.md +++ b/docs/slurm-params.md @@ -45,7 +45,7 @@ The job script needs to stay the same until the job starts to produce the correc 1. The input and output directory paths are bound to the container’s filesystem using the `-B` syntax. There is also a path to a `.sif` file that will utilize the jobs resources in order to produce the desired outputs using what's in the input directory. This .sif file is a[ singularity image ](containers.md)that is being run on the specified input files. The input path is described above as `/home/faird/shared/projects/teaching/practicals/experiments/dcm2bids_output2/derivatives/nibabies/`. The output path is `/home/faird/shared/projects/teaching/practicals/experiments/dcm2bids_output2/derivatives/XCPD/` and the path to the singularity image is `/home/faird/shared/code/external/pipelines/ABCD-XCP/xcp-d_unstable04152022a.sif` * Note: these input, output and singularity image paths need to exist prior to running the sbatch. 2. The pipeline specific flags `--cifti` `-m` `--participant-label` and `-w`. - * `--cifti` will postprocess cifti files instead of niftis + * `--cifti` will postprocess CIFTI files instead of NIfTIs * `-m` will combine all the runs into one file * `--participant-label` is a space delimited list of participant identifiers or a single identifier * `-w` is the path to where intermediate results should be stored. In the above sbatch, it is specified in the line that reads `-B /home/feczk001/shared/projects/Jacob_test/work/:/work \` and is called on again later in this line `-w /work \` \ No newline at end of file diff --git a/docs/slurm.md b/docs/slurm.md index a234696..e4f268b 100644 --- a/docs/slurm.md +++ b/docs/slurm.md @@ -130,9 +130,9 @@ Also available are various commands for job accounting, job management, and envi 4. 
Change job account (each PI in the lab has their own Slurm group account with its own allocation of resources and queue priority. It is sometimes useful to change accounts to distribute resource requests for large processing jobs, or when an account has low queue priority due to heavy usage) : `scontrol update JobId=#### Account=new_group` - An example command has _Job 234293_ originally submitted under default account _miran045_, and to change it to *feczk001* one would use: `scontrol update JobId=234293 Account=feczk001` -5. To change a slurm job partition use: `scontrol update JobId=#### Partition=new_partition` +5. To change a Slurm job partition, use: `scontrol update JobId=#### Partition=new_partition` - An example command has _Job 234293_ originally submitted with the partition _msismall_, and to change it to *msigpu* one would use: `scontrol update JobId=234293 Partition=msigpu` -6. To change the amount of time a slurm job runs for, use: `scontrol update JobId=#### EndTime=HH:MM:SS` +6. To change the amount of time a Slurm job runs for, use: `scontrol update JobId=#### EndTime=HH:MM:SS` - To find time information, first use `scontrol show JobId=####` - An example command has _Job 234293_ originally submitted at the following time for 96 hours: _StartTime=2022-08-29T13:04:45_, and to change it to 48 hours one would use: `scontrol update JobId=234293 EndTime=2022-08-31T13:04:45` \ No newline at end of file diff --git a/docs/status.md b/docs/status.md index d917bda..a948e56 100644 --- a/docs/status.md +++ b/docs/status.md @@ -1,6 +1,6 @@ # Processing Status -Produce status html on the output data to see how many processing jobs succeeded and failed (only available for abcd-hcp and infant-abcd-bids pipelines). +Produce a status HTML report on the output data to see how many processing jobs succeeded and failed (only available for the abcd-hcp-pipeline and infant-abcd-bids pipelines). 1.
If your dataset is more than 10 subjects, start an interactive session first: `srun -N 1 --cpus-per-task=1 --mem-per-cpu=5gb -A feczk001 -t 6:00:00 -p interactive --x11 --pty bash` diff --git a/docs/troubleshooting.md b/docs/troubleshooting.md index 1b63d55..1f8f139 100644 @@ -3,7 +3,7 @@ For any processing failures, we triage what happened. Most troubleshooting processes will be needed for infant data, as their processing jobs are more prone to errors. When encountering a processing failure, first check to see if it is already documented in the following pipeline dependent links. It is also necessary and efficient to post your error and run command in #questions or the relevant channel on Slack. If you are unsure what your “relevant channel” is on Slack, ask your supervisor or just default to posting in #questions. -Image viewing is a necessary process to conclude if a pipeline job finished successfully, but it also may be necessary when a processing error has been encountered. There are a few different options for viewing anatomical images. It may be most efficient to first download the images locally and then use ITKsnap to view them. However, when working on MSI, applications such as `fslview_deprecated` or `fsleyes` can be used. For viewing functional images, `wb_view` is ideal (make sure to `module load fsl` or `module load workbench` before using these applications). For more information on nifti and cifti files (image file types), refer to the video sessions in the subfolders of [this google drive folder](https://drive.google.com/drive/u/0/folders/1yc3w2zNYVZQvTcCgxKk_j6ecZLoyWiCM). +Image viewing is necessary to confirm that a pipeline job finished successfully, and it may also be needed when a processing error has been encountered. There are a few different options for viewing anatomical images.
It may be most efficient to first download the images locally and then use ITK-SNAP to view them. However, when working on MSI, applications such as `fslview_deprecated` or `fsleyes` can be used. For viewing functional images, `wb_view` is ideal (make sure to `module load fsl` or `module load workbench` before using these applications). For more information on NIfTI and CIFTI files (image file types), refer to the video sessions in the subfolders of [this Google Drive folder](https://drive.google.com/drive/u/0/folders/1yc3w2zNYVZQvTcCgxKk_j6ecZLoyWiCM). ## DCAN Infant Pipeline (infant-abcd-bids-pipeline) @@ -14,7 +14,7 @@ Slurm logs are the stdout and stderr files from a Slurm job. The first thing to MSI outputs these into a single .out file by default, in the directory you called the submission script from. -Type `scontrol show jobid -dd <job id num> | grep Std` to see the paths for StdErr, StdIn (usually /dev/null), and StdOut for any slurm job currently in the queue. +Type `scontrol show jobid -dd <job id num> | grep Std` to see the paths for StdErr, StdIn (usually /dev/null), and StdOut for any Slurm job currently in the queue. Common errors include the following: @@ -29,12 +29,12 @@ Jobs (e.g. Slurm on MSI) will not start if you don’t have write access to wher The lab's scripts print the string `RUNNING DOCKER IMAGE` just before calling the pipeline. Before starting each stage, the pipeline prints a line that says `running `followed by the name of the stage. -If the job succeeded (i.e., the pipeline successfully ran all the way through all of the stages), the Slurm logs will have a message that contains `BEGINNING SUCCESS CLEANUP`. If the job failed, the slurm logs will have a message that contains `BEGINNING FAIL CLEANUP`. The most common failures are described here. +If the job succeeded (i.e., the pipeline successfully ran all the way through all of the stages), the Slurm logs will have a message that contains `BEGINNING SUCCESS CLEANUP`.
If the job failed, the Slurm logs will have a message that contains `BEGINNING FAIL CLEANUP`. The most common failures are described here. #### Job Timed Out -If the slurm logs say the job caught an exit code of 140 or 240, the job timed out. Slurm jobs in the exacloud partition time out after 36 hours. (When using the lab's scripts, jobs are allowed to run for just 34 hours so that they have a good chance of copying all the job's data back to lustre1.) +If the Slurm logs say the job caught an exit code of 140 or 240, the job timed out. Slurm jobs in the exacloud partition time out after 36 hours. (When using the lab's scripts, jobs are allowed to run for just 34 hours so that they have a good chance of copying all the job's data back to lustre1.) Look higher in the output log to find the last stage that was started. Copy the name of the last stage exactly (case matters). Resubmit the job, starting with that stage. @@ -170,7 +170,7 @@ In the Infant section above, points 3-4 can be generalized for ABCD troubleshoot ![Example of wb_view](img/wb-view3.png) -## NiBabies and fMRIprep +## NiBabies and fMRIPrep For a comprehensive document on troubleshooting nibabies and fmriprep errors, [see here](https://docs.google.com/document/u/0/d/16qSEPV1_FHOHBq2eJOuZLqISv-0zCbpOJQ7HesEQCv4/edit).