SLURM job scheduler #62
MPI is for clusters of computers working together, with each computer processing a subset of the problem. It also allows for efficient transfer of data between the nodes. This might not be entirely necessary for our use case; perhaps the batch scheduler, Slurm, is good enough for us. The file can be run and the users
Partition:
Using the plugin
I believe Slurm is still worth looking into if you are hoping for any load balancing due to a high volume of jobs. There are some resources saying you can launch an interactive job (see the sketch below). Singularity also has a built-in SPANK plugin that can run Slurm jobs inside of Singularity containers. I'm not sure how long the average job is for ORCA users, but this is an option! It also allows users to check how long their job has been running, limit its size and running time, etc. Also, does running it interactively with our current system mean they must keep the program open? @sjackman
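For reference, here is a minimal sketch of launching an interactive job and checking how long a job has been running, assuming a standard Slurm setup (the time and memory limits and the job ID are placeholders):

```bash
# Launch an interactive shell on a compute node (limits are placeholders):
srun --pty --time=02:00:00 --mem=4G bash

# Show the user's jobs with elapsed time (%M) and time limit (%l):
squeue -u "$USER" -o "%.10i %.9P %.20j %.10M %.10l %R"

# Accounting summary once a job has finished (12345 is a placeholder job ID):
sacct -j 12345 --format=JobID,JobName,Elapsed,MaxRSS,State
```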
The most commonly used job scheduler with Singularity for bioinformatics analysis jobs is Slurm. Here are its benefits:
Implementation -- singularity_job.slurm
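A singularity_job.slurm file could look roughly like this sketch; the partition, resource limits, image name (orca.sif), bind path, and command are all placeholders rather than the project's actual configuration:

```bash
#!/bin/bash
#SBATCH --job-name=orca-singularity   # placeholder job name
#SBATCH --partition=all               # partition name is cluster-specific
#SBATCH --cpus-per-task=4
#SBATCH --mem=8G
#SBATCH --time=04:00:00
#SBATCH --output=%x_%j.out            # job name and job ID in the log file name

# Run a command inside the container; the image and bind path are examples.
singularity exec --bind /projects orca.sif bash -c 'echo "running inside the container"'
```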
Submit the job
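Submitting it would then be the usual sbatch call, assuming the file sketched above:

```bash
sbatch singularity_job.slurm   # queue the batch script
squeue -u "$USER"              # check that it is pending or running
```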
Etc
"we use symbolic links to refer to our network storage systems. As a result, you will need to bind some additional directories to the ones you know of and use directly to ensure that the symbolic link destinations are also bound into the container." hpc nih
Changing TMPDIR from /tmp (8GB per node) to /lscratch (set your own limit)
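On the NIH HPC systems that usage pattern looks roughly like the following sketch (the 20 GB allocation is a placeholder size; --gres=lscratch and /lscratch/$SLURM_JOB_ID follow their documented convention):

```bash
#!/bin/bash
#SBATCH --gres=lscratch:20   # request 20 GB of node-local scratch (placeholder size)

# Point TMPDIR at the per-job scratch directory instead of the shared /tmp.
export TMPDIR=/lscratch/$SLURM_JOB_ID
```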