Home
The purpose of this wiki is to collect information on using the cluster. This information still needs confirmation, and more may be collected in the future.
- 4 head nodes: sug-login1 ~ sug-login4, each with 4 cores and 8 GB memory
- 2 head nodes (more memory): sug-app1 ~ sug-app2, each with 4 cores and 16 GB memory; these can only be accessed through a login node (see the example below)
- an unknown number of compute nodes: each with 24 cores and ~200 GB memory
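For example, reaching one of the app nodes takes two hops, since they are only visible from the login nodes (a minimal sketch; `username` is a placeholder for your account name):

```bash
# First hop: connect to a login node
ssh username@sug-login1
# Second hop, run from inside sug-login1: reach an app node
ssh sug-app1
```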
There are different queues on the cluster, but I think we are only allowed to run on the analysis queue. You may check the queue status using `qstat -Q`.
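For instance, the following standard Torque `qstat` options list the queues and show the limits on the analysis queue:

```bash
# List all queues with their job counts and enabled/started state
qstat -Q
# Show the resource limits (walltime, memory, ...) of the analysis queue
qstat -q analysis
```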
For most operations, the `qsub` command is used to submit a script that will be run on the cluster. Some useful Torque parameters (you may include these options in the header of your script):
```bash
#!/bin/bash
#PBS -N job_name        ### specify the job name
#PBS -l nodes=1:ppn=1   ### specify the number of nodes and cores requested (default is 1?)
#PBS -l mem=2048mb      ### specify the amount of memory (default is ?)
#PBS -q analysis        ### specify the queue
#PBS -V                 ### export your current environment variables to the job
#PBS -d directory       ### change the working directory
#PBS -e std.err         ### redirect stderr to this file
#PBS -o std.out         ### redirect stdout to this file
#PBS -A project_code    ### specify the project code
#PBS -j oe              ### join standard output and standard error
#PBS -t 1-10            ### define an array job (in this case 10 jobs will run); use $PBS_ARRAYID to access the array index
```
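Putting a few of these options together, a minimal job script might look like the sketch below (the job name and resource values are just examples):

```bash
#!/bin/bash
#PBS -N demo_job
#PBS -q analysis
#PBS -l nodes=1:ppn=1
#PBS -l mem=2048mb
#PBS -j oe

# Without -d, Torque starts the job in your home directory,
# so change to the directory the job was submitted from
cd "$PBS_O_WORKDIR"

echo "Running on $(hostname)"
```

Submit it with `qsub demo_job.sh`. For an array job, add `#PBS -t 1-10` to the header and use `$PBS_ARRAYID` inside the script to select the input for each task.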
If you want to check stdout and stderr in real time, use `qsub -k oe` when submitting the job.
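With `-k oe`, Torque keeps the output and error files in your home directory while the job runs instead of copying them back only at the end, so you can watch them grow (a sketch; the script name and job ID are hypothetical):

```bash
qsub -k oe demo_job.sh
# The files appear in $HOME as <job_name>.o<jobid> / <job_name>.e<jobid>
tail -f ~/demo_job.o12345
```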