
Parallel preprocessing on LISA


On the LISA cluster, you can run the preprocessing in (at least) 2 ways:

  1. Slice-by-slice alignment in parallel, but the runs processed in series.
  2. Both the slice-by-slice alignment and the runs in parallel.

For option 2 (recommended, because it is faster), you need to use the bids_preproc_parallel_runs.py script located in NHP-BIDS/code/subcode. You can do this as part of another job file (e.g., to execute multiple steps at once) or interactively, to avoid having to wait in the queue twice. The script itself is fast and brief, but it creates jobs that go into the queue and are subject to SLURM scheduling rules. The script takes the following input arguments (an example invocation is shown below the argument list):

Arguments/flags

  • --csv The original csv-file defining multiple runs.
  • --subses A string marker used to create subfolders. It is strongly recommended to use <sub><sess>, e.g., Eddy20191212.
  • --no-warp Include this flag to prevent bids_warp2nmt_workflow.py from being run as well. If you omit it (recommended), the warp will also be done.
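
As an example, the snippet below launches the script for a single session. It is a minimal sketch: the csv path and the --subses value are placeholders, and calling the script through Python's subprocess is simply equivalent to typing the same command in a terminal on a login node or putting it in a job file.

```python
import subprocess

# Launch the split-and-submit script for one session.
# The csv path and <sub><sess> marker below are placeholders; adjust them
# to your own run-definition file and subject/session.
subprocess.run(
    [
        "python", "code/subcode/bids_preproc_parallel_runs.py",
        "--csv", "code/csv/eddy_20191212.csv",  # placeholder csv defining the runs
        "--subses", "Eddy20191212",             # <sub><sess> marker for subfolders
        # add "--no-warp" here if you do NOT want bids_warp2nmt_workflow.py to run
    ],
    check=True,
)
```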

Procedure

  • The script will read the original csv-file and parse it.
  • For each unique run in the csv-file, it creates a new csv-file that specifies only that run.
  • A companion job-file is also created for each single-run csv-file and made executable (note that 10 h of wall time is reserved for each job, which should be more than enough).
  • The script then submits all jobs to SLURM so they can run in parallel (see the sketch below). Depending on the wait time on the cluster, this might get your entire preprocessing done in <6h.
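
To make the procedure above concrete, here is a minimal, hypothetical sketch of the split-and-submit pattern. It is not the actual bids_preproc_parallel_runs.py: the csv column name ("run"), the output locations, and the preprocessing command in the job file are assumptions made for illustration only.

```python
import csv
import stat
import subprocess
from pathlib import Path


def split_and_submit(csv_file, subses, no_warp=False):
    """Split a multi-run csv into per-run csvs, write a job file per run,
    and submit each job to SLURM via sbatch.

    File layout, column names, and the preprocessing command below are
    illustrative assumptions, not the actual implementation.
    """
    outdir = Path("code/csv") / subses              # assumed location for per-run files
    outdir.mkdir(parents=True, exist_ok=True)

    with open(csv_file, newline="") as f:
        rows = list(csv.DictReader(f))
    fieldnames = rows[0].keys()

    for row in rows:                                # one job per run in the original csv
        run_id = row["run"]                         # assumed column name
        run_csv = outdir / f"run-{run_id}.csv"
        with open(run_csv, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=fieldnames)
            writer.writeheader()
            writer.writerow(row)

        # Build the per-run command; the preprocessing script name is a placeholder.
        cmd = f"python code/preprocessing_workflow.py --csv {run_csv}"
        if not no_warp:
            # Also warp to NMT space (flag usage assumed for illustration).
            cmd += f"\npython code/bids_warp2nmt_workflow.py --csv {run_csv}"

        # Companion job file; 10 h of wall time is reserved, as described above.
        job = outdir / f"run-{run_id}.sh"
        job.write_text("#!/bin/bash\n#SBATCH --time=10:00:00\n" + cmd + "\n")
        job.chmod(job.stat().st_mode | stat.S_IEXEC)

        subprocess.run(["sbatch", str(job)], check=True)   # hand the job to SLURM
```

Submitting one sbatch job per run lets SLURM start each run as soon as a node becomes available, which is where the parallel speed-up comes from.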