Merge pull request #168 from DCAN-Labs/more_edits

More edits

rosemccollum authored Dec 21, 2023
2 parents 9ec99b3 + 1dc4b4d commit 6c6373c
Showing 12 changed files with 29 additions and 29 deletions.
6 changes: 3 additions & 3 deletions docs/bids.md
@@ -2,17 +2,17 @@

## CuBIDS (Curation of BIDS)

- Used to make sure the data is BIDS valid (utilizing the [bids-validator package 1.7.2](https://cubids.readthedocs.io/en/latest/installation.html#:~:text=Now%20that%20we,%24), which will already be installed on MSI) and to ensure all of the acquisition parameters of the data are what you expect them to be. If you are running CuBIDS on data that has more than ten subjects, then use an [srun ](slurm-params.md#srun)or an [sbatch](slurm-params.md#sbatch).
+ Used to make sure the data is BIDS valid (utilizing the [bids-validator package 1.7.2](https://cubids.readthedocs.io/en/latest/installation.html#:~:text=Now%20that%20we,%24), which will already be installed on MSI) and to ensure all of the acquisition parameters of the data are what you expect them to be. If you are running CuBIDS on data that has more than ten subjects, then use an [srun](slurm-params.md#srun) or an [sbatch](slurm-params.md#sbatch).

14. Load CuBIDS environment:

- Load lab-wide miniconda3 environment:

`source /home/faird/shared/code/external/envs/miniconda3/load_miniconda3.sh`

- - This ensures that you have all of the proper packages installed to run cuBIDS.
+ - This ensures that you have all of the proper packages installed to run CuBIDS.

- - Activate cuBIDS environment
+ - Activate CuBIDS environment

`conda activate cubids`
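
For a dataset over the ten-subject threshold, a minimal sketch of validating inside an `srun` session is shown below. The `cubids validate <bids_dir> <output_prefix>` call is an assumption about the CuBIDS command-line interface (older releases expose it as `cubids-validate`), so check `cubids --help` on MSI first.

```
# Interactive session for a larger dataset (resource values are illustrative)
srun -N 1 --cpus-per-task=1 --mem-per-cpu=5gb -t 2:00:00 -p interactive --pty bash

source /home/faird/shared/code/external/envs/miniconda3/load_miniconda3.sh
conda activate cubids

# Assumed CLI and arguments; writes validation summaries under the given prefix
cubids validate /path/to/bids_dataset /path/to/outputs/v0
```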

4 changes: 2 additions & 2 deletions docs/filemapper.md
@@ -1,6 +1,6 @@
# File-mapper

- File mapper maps processed outputs into the BIDS format for the abcd-hcp and infant-abcd-bids pipelines. It is designed to help users copy, move, or symlink files from one directory to another using a JSON file as input. The JSON keys are the source of the files and the JSON values are the desired destination path. This program has been used to move important files from a large dataset and would be applicable in any BIDS datasets or other renaming/mapping utility. Another added benefit of file-mapper is that it trims down the amount of files if you delete the original source directory of file-mapper after running it.
+ File mapper maps processed outputs into the BIDS format for the abcd-hcp-pipeline and DCAN-infant pipelines. It is designed to help users copy, move, or symlink files from one directory to another using a JSON file as input. The JSON keys are the source of the files and the JSON values are the desired destination path. This program has been used to move important files from a large dataset and would be applicable in any BIDS datasets or other renaming/mapping utility. Another added benefit of file-mapper is that it trims down the amount of files if you delete the original source directory of file-mapper after running it.

1. File-mapper usage:

@@ -10,7 +10,7 @@ File mapper maps processed outputs into the BIDS format for the abcd-hcp and inf

* Run with the following command:
```
- python3 ./file_mapper_script.py <selected json file> -a copy -sp [full output directory of a single subject down to /file] -dp [output dir] -t SUBJECT=[part after “sub-”],SESSION=[part after “ses-”],PIPELINE=[folder name (e.g. abcd-bids)]
+ python3 ./file_mapper_script.py <selected json file> -a copy -sp [full output directory of a single subject down to /file] -dp [output dir] -t SUBJECT=[part after “sub-”],SESSION=[part after “ses-”],PIPELINE=[folder name (e.g. ABCD-BIDS)]
```
* `PIPELINE` refers to the directory your outputs are in which are inside the derivatives folder.
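
For concreteness, a hypothetical instantiation of the command above; the JSON file name, paths, and labels are illustrative placeholders rather than values from this repository.

```
# Hypothetical example: JSON name, paths, and labels are all placeholders
python3 ./file_mapper_script.py ABCD-BIDS_mappings.json -a copy \
  -sp /path/to/pipeline_output/sub-01/ses-A/files \
  -dp /path/to/bids_dataset/derivatives \
  -t SUBJECT=01,SESSION=A,PIPELINE=ABCD-BIDS
```
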
6 changes: 3 additions & 3 deletions docs/index.md
@@ -21,11 +21,11 @@ This handbook outlines key tools and resources used by DCAN Labs. Topics covered
- BIDS:
- Anything to convert or validate BIDS data
- Before Processing:
- - Any steps the lab does before running processing (ABCD-BIDS, fMRIprep, Nibabies, etc)
+ - Any steps the lab does before running processing (ABCD-BIDS, fMRIPrep, NiBabies, etc)
- Processing:
- - Running data processing pipelines on MSI (ABCD-BIDS, fMRIprep, Nibabies, etc)
+ - Running data processing pipelines on MSI (ABCD-BIDS, fMRIPrep, NiBabies, etc)
- After Processing:
- - Any steps the lab does after running processing (ABCD-BIDS, fMRIprep, Nibabies, etc)
+ - Any steps the lab does after running processing (ABCD-BIDS, fMRIPrep, NiBabies, etc)
- Troubleshooting:
- Notes on troubleshooting the labs codebases
- Quality Control:
2 changes: 1 addition & 1 deletion docs/infant-doc.md
@@ -190,7 +190,7 @@ Default option: use auto-detection
```


- Distortion correction is performed on BOLD images during fMRIVolume using the script `[DistortionCorrectionAndEPIToT1wReg_FLIRTBBRAndFreeSurferBBRbased.sh](https://github.com/DCAN-Labs/dcan-infant-pipeline/blob/master/fMRIVolume/scripts/DistortionCorrectionAndEPIToT1wReg_FLIRTBBRAndFreeSurferBBRbased.sh)`. BOLD data is typically collected from anterior to posterior, causing the BOLD images to be geometrically distorted in the same direction. Distortion correction is typically performed by using either BOLD data acquired in the reverse direction or field maps also acquired in the reverse direction. Not all data has field maps, but if they do, they are located under the `fmaps` directory in a subject’s raw NIFTIs.
+ Distortion correction is performed on BOLD images during fMRIVolume using the script `[DistortionCorrectionAndEPIToT1wReg_FLIRTBBRAndFreeSurferBBRbased.sh](https://github.com/DCAN-Labs/dcan-infant-pipeline/blob/master/fMRIVolume/scripts/DistortionCorrectionAndEPIToT1wReg_FLIRTBBRAndFreeSurferBBRbased.sh)`. BOLD data is typically collected from anterior to posterior, causing the BOLD images to be geometrically distorted in the same direction. Distortion correction is typically performed by using either BOLD data acquired in the reverse direction or field maps also acquired in the reverse direction. Not all data has field maps, but if they do, they are located under the `fmaps` directory in a subject’s raw NIfTIs.

The default option is for the pipeline to autodetect which method to use. If these are present, the pipeline will either use TOPUP or FIELDMAP methods for DC. If fieldmaps are not present, the pipeline will use the T2_DC method. The pipeline will only skip DC if the flag option is set to NONE.
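
To force a particular method rather than auto-detection, the sketch below assumes the pipeline exposes a `--dcmethod` flag with choices TOPUP, FIELDMAP, T2_DC, and NONE; this diff does not name the flag, so confirm it against the dcan-infant-pipeline run script.

```
# Assumed flag and choices; omit --dcmethod entirely to keep auto-detection.
# Image name and bind paths are placeholders.
singularity run --cleanenv \
  -B /path/to/bids:/bids_input -B /path/to/output:/output \
  dcan-infant-pipeline.sif \
  /bids_input /output --dcmethod TOPUP   # or FIELDMAP, T2_DC, NONE
```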

14 changes: 7 additions & 7 deletions docs/pipelines.md
@@ -7,21 +7,21 @@ This page has the recommended flags and example commands for the major pipelines
<td>
<strong>Infants</strong>
</td>
- <td>Nibabies, DCAN Infant Pipeline, XCPD
+ <td>nibabies, DCAN Infant Pipeline, XCPD
</td>
</tr>
<tr>
<td><strong>Children (4 years and older) and Adults</strong>
</td>
- <td>Fmriprep, abcd-bids pipeline, XCPD
+ <td>fMRIPrep, ABCD-BIDS pipeline, XCPD
</td>
</tr>
</table>

Note: the cutoff age for using an adult pipeline depends on the scope of the project


- ## 1. fMRIprep
+ ## 1. fMRIPrep


Read: [fMRIPrep ReadTheDocs site](https://fmriprep.org/en/stable/)
@@ -77,7 +77,7 @@ ${singularity} run --cleanenv \
Read: [NiBabies ReadTheDocs site](https://nibabies.readthedocs.io/en/latest/)


- Nibabies is a robust pre-processing MRI and fMRI workflow that is also a part of the NiPreps family. NiBabies is designed and optimized for human infants between 0-2 years old. For more information on Nibabies code, see [here](https://github.com/nipreps/nibabies). For in-depth usage information, one can utilize [this google doc](https://docs.google.com/document/d/1PW8m1tWWqqgKCAJ9XLpJ5tycPB5gqFodrnvNOIavTAA/edit) or see the [Read the Docs here](https://nibabies.readthedocs.io/en/latest/installation.html).
+ NiBabies is a robust pre-processing MRI and fMRI workflow that is also a part of the NiPreps family. NiBabies is designed and optimized for human infants between 0-2 years old. For more information on NiBabies code, see [here](https://github.com/nipreps/nibabies). For in-depth usage information, one can utilize [this google doc](https://docs.google.com/document/d/1PW8m1tWWqqgKCAJ9XLpJ5tycPB5gqFodrnvNOIavTAA/edit) or see the [Read the Docs here](https://nibabies.readthedocs.io/en/latest/installation.html).


62. Preferred flags:
@@ -88,7 +88,7 @@ Nibabies is a robust pre-processing MRI and fMRI workflow that is also a part of

- `--session-id \`: when running a subject with multiple sessions, need to specify which session is being processed as well as the age

- - `--derivatives /derivatives \` : Nibabies will use a segmentation from the segmentation pipeline (pre-postBIBSnet). This flag is used to clarify that the precomputed segmentation directory is being utilized.
+ - `--derivatives /derivatives \` : NiBabies will use a segmentation from the segmentation pipeline (pre-postBIBSnet). This flag is used to clarify that the precomputed segmentation directory is being utilized.

- `--cifti-output 91k \` : Possible choices: 91k, 170k. Output preprocessed BOLD as a CIFTI-dense time series. Optionally, the number of grayordinate can be specified (default is 91k, which equates to 2mm resolution). Default: False
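
Tying the flags above together, a minimal sketch of a NiBabies call; the image name, bind paths, and labels are placeholders, and `--age-months` is an assumed companion flag for the age requirement mentioned alongside `--session-id`.

```
# Sketch only: image, paths, and labels are placeholders
singularity run --cleanenv \
  -B /path/to/bids:/bids_dir \
  -B /path/to/output:/output_dir \
  -B /path/to/precomputed_segmentations:/derivatives \
  nibabies.sif \
  /bids_dir /output_dir participant \
  --participant-label 01 \
  --session-id baseline \
  --age-months 6 \
  --derivatives /derivatives \
  --cifti-output 91k
```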

@@ -231,7 +231,7 @@ The XCP-D workflow takes fMRIPRep, NiBabies, DCAN and HCP outputs in the form of

* `-f 0.3` : framewise displacement threshold for censoring.** Here,** **0.3 is preferred versus a stricter threshold to avoid excluding too much data from the regression model. Stricter thresholding can still be applied when running your analyses on the XCP-D output. **

- * `--cifti` : postprocess cifti instead of nifti; this is set default for dcan and hcp input
+ * `--cifti` : postprocess cifti instead of NIfTI; this is set default for dcan and hcp input

* **[for version 0.2.0 and newer, and “develop” / “unstable” builds dated 10-21-2022 or newer]** `--warp-surfaces-native2std` : resample surfaces to fsLR space, 32k density, and apply the transform from native T1w to MNI152NLin6Asym space output by fMRIPrep / NiBabies

@@ -245,7 +245,7 @@ The XCP-D workflow takes fMRIPRep, NiBabies, DCAN and HCP outputs in the form of

* `--omp-threads 3` : maximum number of threads per-process

- * `--despike` : despike the nifti/cifti before postprocessing
+ * `--despike` : despike the NIfTI/cifti before postprocessing

* `-w /work` : used to specify a working directory within the container’s filesystem, named _/work_.
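
Taken together, a hedged sketch of an XCP-D call using the flags above; the image name and bind paths are placeholders, and the positional `fmri_dir output_dir participant` shape is assumed from the standard BIDS-Apps convention.

```
# Sketch only: image and paths are placeholders; flags are from the list above
singularity run --cleanenv \
  -B /path/to/fmriprep_or_nibabies_derivatives:/fmri_dir \
  -B /path/to/xcpd_output:/out \
  -B /path/to/work:/work \
  xcp-d.sif \
  /fmri_dir /out participant \
  -f 0.3 --cifti --warp-surfaces-native2std \
  --omp-threads 3 --despike -w /work
```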

2 changes: 1 addition & 1 deletion docs/precision.md
@@ -4,7 +4,7 @@ Read:_ [Rapid Precision Functional Mapping of Individuals Using Multi-Echo fMRI
**Parcellated functional connectivity (FC)**


- **_Time x FC reliability curve._** The script at &lt;location> generates a reliability curve from parcellated BOLD timeseries files (.ptseries.nii) in an XCP-D derivatives directory. It is a Slurm sbatch script which requires a list of positional arguments to run.
+ **_Time x FC reliability curve._** The script at &lt;location> generates a reliability curve from parcellated BOLD timeseries files (`.ptseries.nii`) in an XCP-D derivatives directory. It is a Slurm sbatch script which requires a list of positional arguments to run.


The general form of the run command is
4 changes: 2 additions & 2 deletions docs/preprocessing.md
@@ -2,9 +2,9 @@

For this pipeline, the data first needs to be converted and properly orientated before being ran.

- 1. Get DICOMS from s3 bucket (`s3://zlab-nhp`)
+ 1. Get DICOMs from s3 bucket (`s3://zlab-nhp`)

- 1. Convert DICOMS to BIDS and apply NORDIC
+ 1. Convert DICOMs to BIDS and apply NORDIC
- Use the [Dcm2bids3 NORDIC wrapper](nordic.md) -- needs a pair of 10.5T-specific config files to run, and use `--keep-non-nordic` when calling `nordicsbatch.sh` in the post-op command
- Confirm the number of noise volumes per run-- for the Z-Lab 10.5T data, this is usually 5
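
A hedged sketch of step 1 with `s3cmd` follows; the bucket's internal layout and the destination path are placeholders, since only the bucket name is given above.

```
# List the bucket first, then pull a study's DICOMs; paths are placeholders
s3cmd ls s3://zlab-nhp/
s3cmd sync s3://zlab-nhp/<study>/ /path/to/project/<study>/dicoms/
```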

2 changes: 1 addition & 1 deletion docs/processing-sop.md
@@ -13,7 +13,7 @@ Below are steps that should be followed in order to ensure your processing effor
4. Once your data is on the MSI, determine if has been (properly) converted to BIDS.
- Even if the person that provided the data to you says it has been successfully converted to BIDS, you should run [CuBIDS](bids.md) on the dataset.
- If you're starting with DICOMs, see [here](dcm2bids.md) for BIDS conversion tips.
- - If you have NIFTI files that are not BIDS-compliant, you will more than likely have to write a script to finish the conversion.
+ - If you have NIfTI files that are not BIDS-compliant, you will more than likely have to write a script to finish the conversion.
5. Create a working directory in the project folder for the share you intend to work on. This is where you will put your job wrappers, logs, and status updates. Make sure to name the folder intelligently based on the study and codebase you are running.
- Make sure to check the `groupquota` to ensure there's room for your process.
```groupquota -g share_name```
2 changes: 1 addition & 1 deletion docs/slurm-params.md
@@ -45,7 +45,7 @@ The job script needs to stay the same until the job starts to produce the correc
1. The input and output directory paths are bound to the container’s filesystem using the `-B` syntax. There is also a path to a `.sif` file that will utilize the jobs resources in order to produce the desired outputs using what's in the input directory. This .sif file is a[ singularity image ](containers.md)that is being run on the specified input files. The input path is described above as `/home/faird/shared/projects/teaching/practicals/experiments/dcm2bids_output2/derivatives/nibabies/`. The output path is `/home/faird/shared/projects/teaching/practicals/experiments/dcm2bids_output2/derivatives/XCPD/` and the path to the singularity image is `/home/faird/shared/code/external/pipelines/ABCD-XCP/xcp-d_unstable04152022a.sif`
* Note: these input, output and singularity image paths need to exist prior to running the sbatch.
2. The pipeline specific flags `--cifti` `-m` `--participant-label` and `-w`.
- * `--cifti` will postprocess cifti files instead of niftis
+ * `--cifti` will postprocess CIFTI files instead of NIfTIs
* `-m` will combine all the runs into one file
* `--participant-label` is a space delimited list of participant identifiers or a single identifier
* `-w` is the path to where intermediate results should be stored. In the above sbatch, it is specified in the line that reads `-B /home/feczk001/shared/projects/Jacob_test/work/:/work \` and is called on again later in this line `-w /work \`
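
Pulling the quoted paths and flags into one place, a sketch of the bind-and-flag pattern this passage describes; only the three directories and the `.sif` path come from the text above, while the command shape and participant label are illustrative.

```
# Sketch of the described pattern; participant label is illustrative
singularity run --cleanenv \
  -B /home/faird/shared/projects/teaching/practicals/experiments/dcm2bids_output2/derivatives/nibabies/:/input \
  -B /home/faird/shared/projects/teaching/practicals/experiments/dcm2bids_output2/derivatives/XCPD/:/output \
  -B /home/feczk001/shared/projects/Jacob_test/work/:/work \
  /home/faird/shared/code/external/pipelines/ABCD-XCP/xcp-d_unstable04152022a.sif \
  /input /output participant \
  --cifti -m --participant-label 01 -w /work
```
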
4 changes: 2 additions & 2 deletions docs/slurm.md
@@ -130,9 +130,9 @@ Also available are various commands for job accounting, job management, and envi
4. Change job account (each PI in the lab has their own Slurm group account with its own allocation of resources and queue priority. It is sometimes useful to change accounts to distribute resource requests for large processing jobs, or when an account has low queue priority due to heavy usage) : `scontrol update JobId=#### Account=new_group`
- An example command has _Job 234293_ originally submitted under default account _miran045_, and to change it to *feczk001* one would use: `scontrol update JobId=234293 Account=feczk001`

- 5. To change a slurm job partition use: `scontrol update JobId=#### Partition=new_partition`
+ 5. To change a SLURM job partition use: `scontrol update JobId=#### Partition=new_partition`
- An example command has _Job 234293_ originally submitted with the partition _msismall_, and to change it to *msigpu* one would use: `scontrol update JobId=234293 Partition=msigpu`

- 6. To change the amount of time a slurm job runs for, use: `scontrol update JobId=#### EndTime=HH:MM:SS`
+ 6. To change the amount of time a SLURM job runs for, use: `scontrol update JobId=#### EndTime=HH:MM:SS`
- To find time information, first use `scontrol show JobId=####`
- An example command has _Job 234293_ originally submitted at the following time for 96 hours: _StartTime=2022-08-29T13:04:45_, and to change it to 48 hours one would use: `scontrol update JobId=234293 EndTime=2022-08-31T13:04:45`
2 changes: 1 addition & 1 deletion docs/status.md
@@ -1,6 +1,6 @@
# Processing Status

- Produce status html on the output data to see how many processing jobs succeeded and failed (only available for abcd-hcp and infant-abcd-bids pipelines).
+ Produce status html on the output data to see how many processing jobs succeeded and failed (only available for abcd-hcp-pipeline and infant-abcd-bids pipelines).

1. If your dataset is more than 10 subjects, start an interactive session first: `srun -N 1 --cpus-per-task=1 --mem-per-cpu=5gb -A feczk001 -t 6:00:00 -p interactive --x11 --pty bash`
