` and
A powerful feature is that you can "search" through your command
history, either using the `history` command, or using
`Ctrl-R`:
-$ history
+```
+$ history
1 echo hello
# hit Ctrl-R, type 'echo'
(reverse-i-search)`echo': echo hello
-
+```
### Stopping commands
@@ -85,9 +89,10 @@ They can be thought of as placeholders for things we need to remember.
For example, to print the path to your home directory, we can use the
shell variable named `HOME`:
-$ echo $HOME
+```
+$ echo $HOME
/user/home/gent/vsc400/vsc40000
-
+```
This prints the value of this variable.
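+
+You can also use a variable inside a double-quoted string, for example
+(assuming the same home directory as above):
+```
+$ echo "My home directory is $HOME"
+My home directory is /user/home/gent/vsc400/vsc40000
+```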
@@ -101,29 +106,37 @@ For a full overview of defined environment variables in your current
session, you can use the `env` command. You can sort this
output with `sort` to make it easier to search in:
-$ env | sort
+```
+$ env | sort
...
HOME=/user/home/gent/vsc400/vsc40000
-...
+...
+```
You can also use the `grep` command to search for a piece of
text. The following command will output all VSC-specific variable names
and their values:
-$ env | sort | grep VSC
+```
+$ env | sort | grep VSC
+```
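+
+The output will look something like this (a sketch; the exact variables
+and paths depend on the cluster and your account):
+```
+VSC_DATA=/user/data/gent/vsc400/vsc40000
+VSC_HOME=/user/home/gent/vsc400/vsc40000
+VSC_SCRATCH=/user/scratch/gent/vsc400/vsc40000
+...
+```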
But we can also define our own. This is done with the
`export` command (note: variable names are always all-caps by
convention):
-$ export MYVARIABLE="value"
+```
+$ export MYVARIABLE="value"
+```
It is important you don't include spaces around the `=`
sign. Also note the lack of a `$` sign in front of the
variable name.
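+
+For example, a quick sketch of what goes wrong when you do add spaces
+(the exact error message may vary between shells):
+```
+$ export MYVARIABLE = "value"
+bash: export: `=': not a valid identifier
+```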
If we then do
-$ echo $MYVARIABLE
+```
+$ echo $MYVARIABLE
+```
this will output `value`. Note that the quotes are not
included; they were only used when defining the variable to escape
@@ -135,16 +148,20 @@ You can change what your prompt looks like by redefining the
special-purpose variable `$PS1`.
For example, to include the current location in your prompt:
-$ export PS1='\w $'
+```
+$ export PS1='\w $'
~ $ cd test
-~/test $
+~/test $
+```
Note that `~` is a short representation of your home
directory.
To make this persistent across sessions, you can define this custom value
for `$PS1` in your `.profile` startup script:
-$ echo 'export PS1="\w $ " ' >> ~/.profile
+```
+$ echo 'export PS1="\w $ " ' >> ~/.profile
+```
### Using non-defined variables
@@ -153,11 +170,13 @@ Contrary to what you may expect, this does *not* result in error
messages, but the variable is considered to be *empty* instead.
This may lead to surprising results, for example:
-$ export WORKDIR=/tmp/test
-$ pwd
+```
+$ export WORKDIR=/tmp/test
+$ cd $WROKDIR   # note the typo in the variable name
+$ pwd
+/user/home/gent/vsc400/vsc40000
+$ echo $HOME
/user/home/gent/vsc400/vsc40000
-$ echo $HOME
-/user/home/gent/vsc400/vsc40000
+```
To understand what's going on here, see the section on `cd` below.
@@ -189,17 +208,20 @@ Basic information about the system you are logged into can be obtained
in a variety of ways.
We limit ourselves to determining the hostname:
-$ hostname
+```
+$ hostname
gligar01.gligar.os
-$ echo $HOSTNAME
+$ echo $HOSTNAME
gligar01.gligar.os
-
+```
And querying some basic information about the Linux kernel:
-$ uname -a
+```
+$ uname -a
Linux gligar01.gligar.os 2.6.32-573.8.1.el6.ug.x86_64 #1 SMP Mon Nov 16 15:12:09
- CET 2015 x86_64 x86_64 x86_64 GNU/Linux
+ CET 2015 x86_64 x86_64 x86_64 GNU/Linux
+```
## Exercises
diff --git a/mkdocs/docs/HPC/linux-tutorial/hpc_infrastructure.md b/mkdocs/docs/HPC/linux-tutorial/hpc_infrastructure.md
index 764e42208f9..2de27c6f5db 100644
--- a/mkdocs/docs/HPC/linux-tutorial/hpc_infrastructure.md
+++ b/mkdocs/docs/HPC/linux-tutorial/hpc_infrastructure.md
@@ -20,9 +20,10 @@ Space is limited on the cluster's storage. To check your quota, see section
To figure out where your quota is being spent, the `du` (**d**isk **u**sage)
command can come in useful:
-$ du -sh test
+```
+$ du -sh test
59M test
-
+```
Do *not* (frequently) run `du` on directories where large amounts of
data are stored, since that will:
@@ -68,7 +69,7 @@ Hint: `python -c "print(sum(range(1, 101)))"`
- How many modules are available for Python version 3.6.4?
- How many modules get loaded when you load the `Python/3.6.4-intel-2018a` module?
- Which `cluster` modules are available?
-
+
- What's the full path to your personal home/data/scratch directories?
- Determine how large your personal directories are.
- What's the difference between the size reported by `du -sh $HOME` and by `ls -ld $HOME`?
diff --git a/mkdocs/docs/HPC/linux-tutorial/manipulating_files_and_directories.md b/mkdocs/docs/HPC/linux-tutorial/manipulating_files_and_directories.md
index 627bf9e9ef7..32ed9395d67 100644
--- a/mkdocs/docs/HPC/linux-tutorial/manipulating_files_and_directories.md
+++ b/mkdocs/docs/HPC/linux-tutorial/manipulating_files_and_directories.md
@@ -10,21 +10,22 @@ commands short to type.
To print the contents of an entire file, you can use `cat`; to only see
the first or last N lines, you can use `head` or `tail`:
-$ cat one.txt
+```
+$ cat one.txt
1
2
3
4
5
-$ head -2 one.txt
+$ head -2 one.txt
1
2
-$ tail -2 one.txt
+$ tail -2 one.txt
4
5
-
+```
To check the contents of long text files, you can use the `less` or
`more` commands which support scrolling with "<up>", "<down>",
@@ -32,17 +33,20 @@ To check the contents of long text files, you can use the `less` or
## Copying files: "cp"
-$ cp source target
-
+```
+$ cp source target
+```
This is the `cp` command, which copies a file from source to target. To
copy a directory, we use the `-r` option:
-$ cp -r sourceDirectory target
-
+```
+$ cp -r sourceDirectory target
+```
A last more complicated example:
-$ cp -a sourceDirectory target
-
+```
+$ cp -a sourceDirectory target
+```
Here we used the same `cp` command, but instead we gave it the `-a`
option which tells cp to copy all the files and keep timestamps and
@@ -50,26 +54,29 @@ permissions.
## Creating directories: "mkdir"
-$ mkdir directory
-
+```
+$ mkdir directory
+```
which will create a directory with the given name inside the current
directory.
## Renaming/moving files: "mv"
-$ mv source target
-
+```
+$ mv source target
+```
`mv` will move the source path to the destination path. It works for both
directories and files.
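+
+For example, renaming a file is simply a move within the same directory
+(the new name below is just an illustration):
+```
+$ mv one.txt one_renamed.txt
+```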
## Removing files: "rm"
-Note: there are NO backups, there is no 'trash bin'. If you
-remove files/directories, they are gone.
-$ rm filename
-
+Note: there are NO backups, there is no 'trash bin'. If you
+remove files/directories, they are gone.
+```
+$ rm filename
+```
`rm` will remove a file or directory. (`rm -rf directory` will remove
a directory and every file inside it). WARNING: removed files will be
lost forever, there are no backups, so beware when using this command!
@@ -80,8 +87,9 @@ You can remove directories using `rm -r directory`, however, this is
error-prone and can ruin your day if you make a mistake in typing. To
prevent this type of error, you can remove the contents of a directory
using `rm` and then finally remove the directory with:
-$ rmdir directory
-
+```
+$ rmdir directory
+```
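+
+For example, a minimal sketch of this two-step removal (assuming a
+directory `test` that only contains `.txt` files):
+```
+$ rm test/*.txt
+$ rmdir test
+```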
## Changing permissions: "chmod"
[//]: # (#sec:chmod)
@@ -114,11 +122,12 @@ Any time you run `ls -l` you'll see a familiar line of `-rwx------` or
similar combination of the letters `r`, `w`, `x` and `-` (dashes). These
are the permissions for the file or directory. (See also the
[previous section on permissions](navigating.md#permissions))
-$ ls -l
+```
+$ ls -l
total 1
-rw-r--r--. 1 vsc40000 mygroup 4283648 Apr 12 15:13 articleTable.csv
drwxr-x---. 2 vsc40000 mygroup 40 Apr 12 15:00 Project_GoldenDragon
-
+```
Here, we see that `articleTable.csv` is a file (the line begins with
`-`) that has read and write permission for the user `vsc40000` (`rw-`), and read
@@ -136,12 +145,13 @@ other users have no permissions to look in the directory at all (`---`).
Maybe we have a colleague who wants to be able to add files to the
directory. We use `chmod` to change the modifiers to the directory to
let people in the group write to the directory:
-$ chmod g+w Project_GoldenDragon
-$ ls -l
+```
+$ chmod g+w Project_GoldenDragon
+$ ls -l
total 1
-rw-r--r--. 1 vsc40000 mygroup 4283648 Apr 12 15:13 articleTable.csv
drwxrwx---. 2 vsc40000 mygroup 40 Apr 12 15:00 Project_GoldenDragon
-
+```
The syntax used here is `g+w`, which means the group was given write
permission. To revoke it again, we use `g-w`. The other roles are `u`
@@ -162,10 +172,11 @@ However, this means that all users in `mygroup` can add or remove files.
This could be problematic if you only wanted one person to be allowed to
help you administer the files in the project. We need a new group. To do
this in the HPC environment, we need to use access control lists (ACLs):
-$ setfacl -m u:otheruser:w Project_GoldenDragon
-$ ls -l Project_GoldenDragon
+```
+$ setfacl -m u:otheruser:w Project_GoldenDragon
+$ ls -l Project_GoldenDragon
drwxr-x---+ 2 vsc40000 mygroup 40 Apr 12 15:00 Project_GoldenDragon
-
+```
This will give the **u**ser `otheruser` permissions to **w**rite to
`Project_GoldenDragon`.
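+
+To inspect the resulting ACL entries, you can use the `getfacl` command;
+a sketch of what the output may look like (details will vary):
+```
+$ getfacl Project_GoldenDragon
+# file: Project_GoldenDragon
+# owner: vsc40000
+# group: mygroup
+user::rwx
+user:otheruser:-w-
+group::r-x
+mask::rwx
+other::---
+```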
@@ -186,30 +197,34 @@ used frequently. This means they will use less space and thus you get
more out of your quota. Some types of files (e.g., CSV files with a lot
of numbers) compress as much as 9:1. The most commonly used compression
format on Linux is gzip. To compress a file using gzip, we use:
-$ ls -lh myfile
+```
+$ ls -lh myfile
-rw-r--r--. 1 vsc40000 vsc40000 4.1M Dec 2 11:14 myfile
-$ gzip myfile
-$ ls -lh myfile.gz
+$ gzip myfile
+$ ls -lh myfile.gz
-rw-r--r--. 1 vsc40000 vsc40000 1.1M Dec 2 11:14 myfile.gz
-
+```
Note: if you gzip a file, the original file will be removed. If you gunzip
a file, the compressed file will be removed. To keep both, we send the
data to `stdout` and redirect it to the target file:
-$ gzip -c myfile > myfile.gz
-$ gunzip -c myfile.gz > myfile
-
+```
+$ gzip -c myfile > myfile.gz
+$ gunzip -c myfile.gz > myfile
+```
### "zip" and "unzip"
Windows and macOS seem to favour the zip file format, so it's also
important to know how to unpack those. We do this using unzip:
-$ unzip myfile.zip
-
+```
+$ unzip myfile.zip
+```
If we would like to make our own zip archive, we use zip:
-$ zip myfiles.zip myfile1 myfile2 myfile3
-
+```
+$ zip myfiles.zip myfile1 myfile2 myfile3
+```
## Working with tarballs: "tar"
@@ -218,37 +233,42 @@ bigger file.
You will normally want to unpack these files more often than you make
them. To unpack a `.tar` file you use:
-$ tar -xf tarfile.tar
-
+```
+$ tar -xf tarfile.tar
+```
Often, you will find `gzip` compressed `.tar` files on the web. These
are called tarballs. You can recognize them by the filename ending in
`.tar.gz`. You can uncompress these using `gunzip` and then unpack
them using `tar`. But `tar` also knows how to open them directly using the `-z`
option:
-$ tar -zxf tarfile.tar.gz
-$ tar -zxf tarfile.tgz
-
+```
+$ tar -zxf tarfile.tar.gz
+$ tar -zxf tarfile.tgz
+```
### Order of arguments
Note: Archive programs like `zip`, `tar`, and `jar` use arguments in the
"opposite direction" of copy commands.
-# cp, ln: <source(s)> <target>
-$ cp source1 source2 source3 target
-$ ln -s source target
+```
+# cp, ln: <source(s)> <target>
+$ cp source1 source2 source3 target
+$ ln -s source target
# zip, tar: <target> <source(s)>
-$ zip zipfile.zip source1 source2 source3
-$ tar -cf tarfile.tar source1 source2 source3
-
+$ zip zipfile.zip source1 source2 source3
+$ tar -cf tarfile.tar source1 source2 source3
+```
If you use `tar` with the source files first then the first file will be
overwritten. You can control the order of arguments of `tar` if it helps
you remember:
-$ tar -c source1 source2 source3 -f tarfile.tar
+```
+$ tar -c source1 source2 source3 -f tarfile.tar
+```
## Exercises
diff --git a/mkdocs/docs/HPC/linux-tutorial/navigating.md b/mkdocs/docs/HPC/linux-tutorial/navigating.md
index 030f7b5da54..5bbfb7ba326 100644
--- a/mkdocs/docs/HPC/linux-tutorial/navigating.md
+++ b/mkdocs/docs/HPC/linux-tutorial/navigating.md
@@ -7,12 +7,13 @@ important skill.
## Current directory: "pwd" and "$PWD"
To print the current directory, use `pwd` or `$PWD`:
-$ cd $HOME
-$ pwd
+```
+$ cd $HOME
+$ pwd
/user/home/gent/vsc400/vsc40000
-$ echo "The current directory is: $PWD"
+$ echo "The current directory is: $PWD"
The current directory is: /user/home/gent/vsc400/vsc40000
-
+```
## Listing files and directories: "ls"
@@ -20,78 +21,99 @@ A very basic and commonly used command is `ls`, which can be
used to list files and directories.
In its basic usage, it just prints the names of files and directories in
-the current directory. For example: $ ls
-afile.txt some_directory
+the current directory. For example:
+```
+$ ls
+afile.txt some_directory
+```
When provided an argument, it can be used to list the contents of a
-directory: $ ls some_directory
-one.txt two.txt
+directory:
+```
+$ ls some_directory
+one.txt two.txt
+```
A couple of commonly used options include:
- detailed listing using `ls -l`:
-: $ ls -l
- total 4224
- -rw-rw-r-- 1 vsc40000 vsc40000 2157404 Apr 12 13:17 afile.txt
- drwxrwxr-x 2 vsc40000 vsc40000 512 Apr 12 12:51 some_directory
+ ```
+ $ ls -l
+ total 4224
+ -rw-rw-r-- 1 vsc40000 vsc40000 2157404 Apr 12 13:17 afile.txt
+ drwxrwxr-x 2 vsc40000 vsc40000 512 Apr 12 12:51 some_directory
+ ```
- To print the size information in human-readable form, use the `-h` flag:
-: $ ls -lh
- total 4.1M
- -rw-rw-r-- 1 vsc40000 vsc40000 2.1M Apr 12 13:16 afile.txt
- drwxrwxr-x 2 vsc40000 vsc40000 512 Apr 12 12:51 some_directory
+ ```
+ $ ls -lh
+ total 4.1M
+ -rw-rw-r-- 1 vsc40000 vsc40000 2.1M Apr 12 13:16 afile.txt
+ drwxrwxr-x 2 vsc40000 vsc40000 512 Apr 12 12:51 some_directory
+ ```
- also listing hidden files using the `-a` flag:
-: $ ls -lah
- total 3.9M
- drwxrwxr-x 3 vsc40000 vsc40000 512 Apr 12 13:11 .
- drwx------ 188 vsc40000 vsc40000 128K Apr 12 12:41 ..
- -rw-rw-r-- 1 vsc40000 vsc40000 1.8M Apr 12 13:12 afile.txt
- -rw-rw-r-- 1 vsc40000 vsc40000 0 Apr 12 13:11 .hidden_file.txt
- drwxrwxr-x 2 vsc40000 vsc40000 512 Apr 12 12:51 some_directory
+ ```
+ $ ls -lah
+ total 3.9M
+ drwxrwxr-x 3 vsc40000 vsc40000 512 Apr 12 13:11 .
+ drwx------ 188 vsc40000 vsc40000 128K Apr 12 12:41 ..
+ -rw-rw-r-- 1 vsc40000 vsc40000 1.8M Apr 12 13:12 afile.txt
+ -rw-rw-r-- 1 vsc40000 vsc40000 0 Apr 12 13:11 .hidden_file.txt
+ drwxrwxr-x 2 vsc40000 vsc40000 512 Apr 12 12:51 some_directory
+ ```
- ordering files by the most recent change using `-rt`:
-: $ ls -lrth
- total 4.0M
- drwxrwxr-x 2 vsc40000 vsc40000 512 Apr 12 12:51 some_directory
- -rw-rw-r-- 1 vsc40000 vsc40000 2.0M Apr 12 13:15 afile.txt
+ ```
+ $ ls -lrth
+ total 4.0M
+ drwxrwxr-x 2 vsc40000 vsc40000 512 Apr 12 12:51 some_directory
+ -rw-rw-r-- 1 vsc40000 vsc40000 2.0M Apr 12 13:15 afile.txt
+ ```
If you try to use `ls` on a file that doesn't exist, you
will get a clear error message:
-$ ls nosuchfile
+
+```
+$ ls nosuchfile
ls: cannot access nosuchfile: No such file or directory
-
+```
## Changing directory: "cd"
To change to a different directory, you can use the `cd`
command:
-$ cd some_directory
+```
+$ cd some_directory
+```
To change back to the previous directory you were in, there's a
shortcut: `cd -`
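+
+For example (note that `cd -` also prints the directory it switches back to):
+```
+$ cd some_directory
+$ cd -
+/user/home/gent/vsc400/vsc40000
+```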
Using `cd` without an argument returns you to
your home directory:
-$ cd
-$ pwd
-/user/home/gent/vsc400/vsc40000
+```
+$ cd
+$ pwd
+/user/home/gent/vsc400/vsc40000
+```
## Inspecting file type: "file"
The `file` command can be used to inspect what type of file
you're dealing with:
-$ file afile.txt
+```
+$ file afile.txt
afile.txt: ASCII text
-$ file some_directory
+$ file some_directory
some_directory: directory
-
+```
## Absolute vs relative file paths
@@ -118,9 +140,11 @@ There are two special relative paths worth mentioning:
You can also use `..` when constructing relative paths, for
example:
-$ cd $HOME/some_directory
-$ ls ../afile.txt
-../afile.txt
+```
+$ cd $HOME/some_directory
+$ ls ../afile.txt
+../afile.txt
+```
## Permissions
@@ -130,8 +154,10 @@ Each file and directory has particular *permissions* set on it, which
can be queried using `ls -l`.
For example:
-$ ls -l afile.txt
--rw-rw-r-- 1 vsc40000 agroup 2929176 Apr 12 13:29 afile.txt
+```
+$ ls -l afile.txt
+-rw-rw-r-- 1 vsc40000 agroup 2929176 Apr 12 13:29 afile.txt
+```
The `-rw-rw-r--` specifies both the type of file
(`-` for files, `d` for directories (see first
@@ -164,19 +190,23 @@ later in this manual.
matching given criteria.
For example, to look for the file named `one.txt`:
-$ cd $HOME
-$ find . -name one.txt
-./some_directory/one.txt
+```
+$ cd $HOME
+$ find . -name one.txt
+./some_directory/one.txt
+```
To look for files using incomplete names, you can use a wildcard
`*`; note that you need to escape the `*` by adding
double quotes, to prevent Bash from *expanding* it into
`afile.txt`:
-$ find . -name "*.txt"
+```
+$ find . -name "*.txt"
./.hidden_file.txt
./afile.txt
./some_directory/one.txt
-./some_directory/two.txt
+./some_directory/two.txt
+```
A more advanced use of the `find` command is to use the
`-exec` flag to perform actions on the found file(s), rather
diff --git a/mkdocs/docs/HPC/linux-tutorial/uploading_files.md b/mkdocs/docs/HPC/linux-tutorial/uploading_files.md
index 5df09b24f32..59948c9b063 100644
--- a/mkdocs/docs/HPC/linux-tutorial/uploading_files.md
+++ b/mkdocs/docs/HPC/linux-tutorial/uploading_files.md
@@ -24,8 +24,9 @@ sbatch: error: instead of expected UNIX line breaks (\n).
To fix this problem, you should run the ``dos2unix`` command on the file:
-$ dos2unix filename
-
+```
+$ dos2unix filename
+```
## Symlinks for data/scratch
[//]: # (sec:symlink-for-data)
@@ -40,15 +41,16 @@ This will create 4 symbolic links {% if OS == windows %}
(they're like "shortcuts" on your desktop)
{% endif %} pointing to the respective storages:
-$ cd $HOME
-$ ln -s $VSC_SCRATCH scratch
-$ ln -s $VSC_DATA data
-$ ls -l scratch data
+```
+$ cd $HOME
+$ ln -s $VSC_SCRATCH scratch
+$ ln -s $VSC_DATA data
+$ ls -l scratch data
lrwxrwxrwx 1 vsc40000 vsc40000 31 Mar 27 2009 data ->
/user/data/gent/vsc400/vsc40000
lrwxrwxrwx 1 vsc40000 vsc40000 34 Jun 5 2012 scratch ->
/user/scratch/gent/vsc400/vsc40000
-
+```
@@ -83,7 +85,9 @@ Installing `rsync` is the easiest on Linux: it comes pre-installed with
a lot of distributions.
For example, to copy a folder with lots of CSV files:
-$ rsync -rzv testfolder vsc40000@login.hpc.ugent.be:data/
+```
+$ rsync -rzv testfolder vsc40000@login.hpc.ugent.be:data/
+```
will copy the folder `testfolder` and its contents to `$VSC_DATA` on the
{{ hpc }}, assuming the `data` symlink is present in your home directory, see
@@ -98,7 +102,9 @@ To copy large files using `rsync`, you can use the `-P` flag: it enables
both showing of progress and resuming partially downloaded files.
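+
+For example, a sketch that simply adds `-P` to the earlier upload command:
+```
+$ rsync -rzvP testfolder vsc40000@login.hpc.ugent.be:data/
+```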
To copy files to your local computer, you can also use `rsync`:
-$ rsync -rzv vsc40000@login.hpc.ugent.be:data/bioset local_folder
+```
+$ rsync -rzv vsc40000@login.hpc.ugent.be:data/bioset local_folder
+```
This will copy the folder `bioset` and its contents on `$VSC_DATA`
to a local folder named `local_folder`.
diff --git a/mkdocs/docs/HPC/multi_core_jobs.md b/mkdocs/docs/HPC/multi_core_jobs.md
index 9c527db4eae..00834138cbd 100644
--- a/mkdocs/docs/HPC/multi_core_jobs.md
+++ b/mkdocs/docs/HPC/multi_core_jobs.md
@@ -28,79 +28,15 @@ approaches to parallel programming. In addition there are many problem
specific libraries that incorporate parallel capabilities. The next
three sections explore some common approaches: (raw) threads, OpenMP and
MPI.
-
-
-
- Parallel programming approaches
- |
-
-
-
- Tool
- |
-
- Available languages binding
- |
-
- Limitations
- |
-
-
-
- Raw threads pthreads, boost:: threading, ...
- |
-
- Threading libraries are available for all common programming languages
- |
-
- Threading libraries are available for all common programming languages & Threads are limited to shared memory systems. They are more often used on single node systems rather than for {{ hpc }}. Thread management is hard.
- |
-
-
-
- OpenMP
- |
-
- Fortran/C/C++
- |
-
- Limited to shared memory systems, but large shared memory systems for HPC are not uncommon (e.g., SGI UV). Loops and task can be parallelized by simple insertion of compiler directives. Under the hood threads are used. Hybrid approaches exist which use OpenMP to parallelize the work load on each node and MPI (see below) for communication between nodes.
- |
-
-
-
- Lightweight threads with clever scheduling, Intel TBB, Intel Cilk Plus
- |
-
- C/C++
- |
-
- Limited to shared memory systems, but may be combined with MPI. Thread management is taken care of by a very clever scheduler enabling the programmer to focus on parallelization itself. Hybrid approaches exist which use TBB and/or Cilk Plus to parallelise the work load on each node and MPI (see below) for communication between nodes.
- |
-
-
-
- MPI
- |
-
- Fortran/C/C++, Python
- |
-
- Applies to both distributed and shared memory systems. Cooperation between different nodes or cores is managed by explicit calls to library routines handling communication routines.
- |
-
-
-
- Global Arrays library
- |
-
- C/C++, Python
- |
-
- Mimics a global address space on distributed memory systems, by distributing arrays over many nodes and one sided communication. This library is used a lot for chemical structure calculation codes and was used in one of the first applications that broke the PetaFlop barrier.
- |
-
-
+
+| **Tool** | **Available language bindings** | **Limitations** |
+|----------|---------------------------------|-----------------|
+| Raw threads (pthreads, boost::threading, ...) | Threading libraries are available for all common programming languages | Threads are limited to shared memory systems. They are more often used on single node systems rather than for {{ hpc }}. Thread management is hard. |
+| OpenMP | Fortran/C/C++ | Limited to shared memory systems, but large shared memory systems for HPC are not uncommon (e.g., SGI UV). Loops and tasks can be parallelized by simple insertion of compiler directives. Under the hood threads are used. Hybrid approaches exist which use OpenMP to parallelize the workload on each node and MPI (see below) for communication between nodes. |
+| Lightweight threads with clever scheduling, Intel TBB, Intel Cilk Plus | C/C++ | Limited to shared memory systems, but may be combined with MPI. Thread management is taken care of by a very clever scheduler enabling the programmer to focus on parallelization itself. Hybrid approaches exist which use TBB and/or Cilk Plus to parallelise the workload on each node and MPI (see below) for communication between nodes. |
+| MPI | Fortran/C/C++, Python | Applies to both distributed and shared memory systems. Cooperation between different nodes or cores is managed by explicit calls to library routines handling communication routines. |
+| Global Arrays library | C/C++, Python | Mimics a global address space on distributed memory systems, by distributing arrays over many nodes and one-sided communication. This library is used a lot for chemical structure calculation codes and was used in one of the first applications that broke the PetaFlop barrier. |
+
!!! tip
    You can request more nodes/cores by adding the following line to your run script.
@@ -150,28 +86,30 @@ runs a simple function that only prints "Hello from thread".
Go to the example directory:
-$ cd ~/{{ exampledir }}
-
+```
+cd ~/{{ exampledir }}
+```
!!! note
If the example directory is not yet present, copy it to your home directory:
- $ cp -r {{ examplesdir }} ~/
+ ```
+ cp -r {{ examplesdir }} ~/
+ ```
Study the example first:
--- T_hello.c --
-
-```C
+```C title="T_hello.c"
{% include "./examples/Multi_core_jobs_Parallel_Computing/T_hello.c" %}
```
And compile it (whilst including the thread library) and run and test it
on the login-node:
-$ module load GCC
-$ gcc -o T_hello T_hello.c -lpthread
-$ ./T_hello
+```
+$ module load GCC
+$ gcc -o T_hello T_hello.c -lpthread
+$ ./T_hello
spawning thread 0
spawning thread 1
spawning thread 2
@@ -182,13 +120,14 @@ spawning thread 3
spawning thread 4
Hello from thread 3!
Hello from thread 4!
-
+```
Now, run it on the cluster and check the output:
-$ qsub T_hello.pbs
+```
+$ qsub T_hello.pbs
{{ jobid }}
-$ more T_hello.pbs.o{{ jobid }}
+$ more T_hello.pbs.o{{ jobid }}
spawning thread 0
spawning thread 1
spawning thread 2
@@ -199,7 +138,7 @@ spawning thread 3
spawning thread 4
Hello from thread 3!
Hello from thread 4!
-
+```
!!! tip
    If you plan to engage in parallel programming using threads, this book
@@ -256,18 +195,17 @@ Parallelising for loops is really simple (see code below). By default,
loop iteration counters in OpenMP loop constructs (in this case the i
variable) in the for loop are set to private variables.
--- omp1.c --
-
-```C
+```C title="omp1.c"
{% include "./examples/Multi_core_jobs_Parallel_Computing/omp1.c" %}
```
And compile it (whilst including the "*openmp*" library) and run and
test it on the login-node:
-$ module load GCC
-$ gcc -fopenmp -o omp1 omp1.c
-$ ./omp1
+```
+$ module load GCC
+$ gcc -fopenmp -o omp1 omp1.c
+$ ./omp1
Thread 6 performed 125 iterations of the loop.
Thread 7 performed 125 iterations of the loop.
Thread 5 performed 125 iterations of the loop.
@@ -276,12 +214,13 @@ Thread 0 performed 125 iterations of the loop.
Thread 2 performed 125 iterations of the loop.
Thread 3 performed 125 iterations of the loop.
Thread 1 performed 125 iterations of the loop.
-
+```
Now run it in the cluster and check the result again.
-$ qsub omp1.pbs
-$ cat omp1.pbs.o*
+```
+$ qsub omp1.pbs
+$ cat omp1.pbs.o*
Thread 1 performed 125 iterations of the loop.
Thread 4 performed 125 iterations of the loop.
Thread 3 performed 125 iterations of the loop.
@@ -290,7 +229,7 @@ Thread 5 performed 125 iterations of the loop.
Thread 7 performed 125 iterations of the loop.
Thread 2 performed 125 iterations of the loop.
Thread 6 performed 125 iterations of the loop.
-
+```
### Critical Code
@@ -301,18 +240,17 @@ you do things like updating a global variable with local results from
each thread, and you don't have to worry about things like other threads
writing to that global variable at the same time (a collision).
--- omp2.c --
-
-```C
+```C title="omp2.c"
{% include "./examples/Multi_core_jobs_Parallel_Computing/omp2.c" %}
```
And compile it (whilst including the "*openmp*" library) and run and
test it on the login-node:
-$ module load GCC
-$ gcc -fopenmp -o omp2 omp2.c
-$ ./omp2
+```
+$ module load GCC
+$ gcc -fopenmp -o omp2 omp2.c
+$ ./omp2
Thread 3 is adding its iterations (12500) to sum (0), total is now 12500.
Thread 7 is adding its iterations (12500) to sum (12500), total is now 25000.
Thread 5 is adding its iterations (12500) to sum (25000), total is now 37500.
@@ -322,12 +260,13 @@ Thread 4 is adding its iterations (12500) to sum (62500), total is now 75000.
Thread 1 is adding its iterations (12500) to sum (75000), total is now 87500.
Thread 0 is adding its iterations (12500) to sum (87500), total is now 100000.
Total # loop iterations is 100000
-
+```
Now run it in the cluster and check the result again.
-$ qsub omp2.pbs
-$ cat omp2.pbs.o*
+```
+$ qsub omp2.pbs
+$ cat omp2.pbs.o*
Thread 2 is adding its iterations (12500) to sum (0), total is now 12500.
Thread 0 is adding its iterations (12500) to sum (12500), total is now 25000.
Thread 1 is adding its iterations (12500) to sum (25000), total is now 37500.
@@ -337,7 +276,7 @@ Thread 3 is adding its iterations (12500) to sum (62500), total is now 75000.
Thread 5 is adding its iterations (12500) to sum (75000), total is now 87500.
Thread 6 is adding its iterations (12500) to sum (87500), total is now 100000.
Total # loop iterations is 100000
-
+```
### Reduction
@@ -349,27 +288,27 @@ example above, where we used the "critical code" directive to accomplish
this. The map-reduce paradigm is so common that OpenMP has a specific
directive that allows you to more easily implement this.
--- omp3.c --
-
-```C
+```C title="omp3.c"
{% include "./examples/Multi_core_jobs_Parallel_Computing/omp3.c" %}
```
And compile it (whilst including the "*openmp*" library) and run and
test it on the login-node:
-$ module load GCC
-$ gcc -fopenmp -o omp3 omp3.c
-$ ./omp3
+```
+$ module load GCC
+$ gcc -fopenmp -o omp3 omp3.c
+$ ./omp3
Total # loop iterations is 100000
-
+```
Now run it in the cluster and check the result again.
-$ qsub omp3.pbs
-$ cat omp3.pbs.o*
+```
+$ qsub omp3.pbs
+$ cat omp3.pbs.o*
Total # loop iterations is 100000
-
+```
### Other OpenMP directives
@@ -439,38 +378,36 @@ return the results to the main process, and print the messages.
Study the MPI-programme and the PBS-file:
--- mpi_hello.c --
-
-```C
+```C title="mpi_hello.c"
{% include "./examples/Multi_core_jobs_Parallel_Computing/mpi_hello.c" %}
```
--- mpi_hello.pbs --
-
-```bash
+```bash title="mpi_hello.pbs"
{% include "./examples/Multi_core_jobs_Parallel_Computing/mpi_hello.pbs" %}
```
and compile it:
-$ module load intel
-$ mpiicc -o mpi_hello mpi_hello.c
-
+```
+$ module load intel
+$ mpiicc -o mpi_hello mpi_hello.c
+```
mpiicc is a wrapper of the Intel C++ compiler icc to compile MPI
programs (see [the chapter on compilation](./compiling_your_software.md) for details).
Run the parallel program:
-$ qsub mpi_hello.pbs
-$ ls -l
+```
+$ qsub mpi_hello.pbs
+$ ls -l
total 1024
-rwxrwxr-x 1 {{ userid }} 8746 Sep 16 14:19 mpi_hello*
-rw-r--r-- 1 {{ userid }} 1626 Sep 16 14:18 mpi_hello.c
-rw------- 1 {{ userid }} 0 Sep 16 14:22 mpi_hello.o{{ jobid }}
-rw------- 1 {{ userid }} 697 Sep 16 14:22 mpi_hello.o{{ jobid }}
-rw-r--r-- 1 {{ userid }} 304 Sep 16 14:22 mpi_hello.pbs
-$ cat mpi_hello.o{{ jobid }}
+$ cat mpi_hello.o{{ jobid }}
0: We have 16 processors
0: Hello 1! Processor 1 reporting for duty
0: Hello 2! Processor 2 reporting for duty
@@ -487,7 +424,7 @@ total 1024
0: Hello 13! Processor 13 reporting for duty
0: Hello 14! Processor 14 reporting for duty
0: Hello 15! Processor 15 reporting for duty
-
+```
The runtime environment for the MPI implementation used (often called
mpirun or mpiexec) spawns multiple copies of the program, with the total
diff --git a/mkdocs/docs/HPC/multi_job_submission.md b/mkdocs/docs/HPC/multi_job_submission.md
index 5177d79fa33..d336959cb85 100644
--- a/mkdocs/docs/HPC/multi_job_submission.md
+++ b/mkdocs/docs/HPC/multi_job_submission.md
@@ -48,30 +48,32 @@ scenario that can be reduced to a **MapReduce** approach.[^1]
## The worker Framework: Parameter Sweeps
First go to the right directory:
-$ cd ~/examples/Multi-job-submission/par_sweep
+
+```
+cd ~/examples/Multi-job-submission/par_sweep
+```
Suppose the user wishes to run the "*weather*" program,
which takes three parameters: a temperature, a pressure and a volume. A
typical call of the program looks like:
-$ ./weather -t 20 -p 1.05 -v 4.3
-T: 20 P: 1.05 V: 4.3
+
+```
+$ ./weather -t 20 -p 1.05 -v 4.3
+T: 20 P: 1.05 V: 4.3
+```
For the purpose of this exercise, the weather program is just a simple
bash script, which prints the 3 variables to the standard output and
waits a bit:
-par_sweep/weather
-
-```shell
+```shell title="par_sweep/weather"
{% include "examples/Multi-job-submission/par_sweep/weather" %}
```
A job script that would run this as a job for the first parameters (p01)
would then look like:
-par_sweep/weather_p01.pbs
-
-```shell
+```shell title="par_sweep/weather_p01.pbs"
{% include "examples/Multi-job-submission/par_sweep/weather_p01.pbs" %}
```
@@ -80,7 +82,10 @@ particular instance of the parameters, i.e., temperature = 20, pressure
= 1.05, and volume = 4.3.
To submit the job, the user would use:
-$ qsub weather_p01.pbs
+
+```
+$ qsub weather_p01.pbs
+```
However, the user wants to run this program for many parameter
instances, e.g., he wants to run the program on 100 instances of
temperature, pressure and volume. The 100 parameter instances can be
@@ -88,14 +93,17 @@ stored in a comma separated value file (.csv) that can be generated
using a spreadsheet program such as Microsoft Excel or RDBMS or just by
hand using any text editor (do **not** use a word processor such as Microsoft
Word). The first few lines of the file "*data.csv*" would look like:
-$ more data.csv
+
+```
+$ more data.csv
temperature, pressure, volume
293, 1.0e5, 107
294, 1.0e5, 106
295, 1.0e5, 105
296, 1.0e5, 104
297, 1.0e5, 103
-...
+...
+```
It has to contain the names of the variables on the first line, followed
by 100 parameter instances in the current example.
@@ -103,9 +111,7 @@ by 100 parameter instances in the current example.
In order to make our PBS generic, the PBS file can be modified as
follows:
-par_sweep/weather.pbs
-
-```shell
+```shell title="par_sweep/weather.pbs"
{% include "examples/Multi-job-submission/par_sweep/weather.pbs" %}
```
@@ -128,10 +134,13 @@ minutes, i.e., 4 hours to be on the safe side.
The job can now be submitted as follows (to check which `worker` module
to use, see subsection [Using explicit version numbers](running_batch_jobs.md#using-explicit-version-numbers)):
-$ module load worker/1.6.12-foss-2021b
-$ wsub -batch weather.pbs -data data.csv
+
+```
+$ module load worker/1.6.12-foss-2021b
+$ wsub -batch weather.pbs -data data.csv
total number of work items: 41
-{{jobid}}
+{{jobid}}
+```
Note that the PBS file is the value of the -batch option. The weather
program will now be run for all 100 parameter instances -- 8
@@ -140,17 +149,26 @@ a parameter instance is called a work item in Worker parlance.
!!! warning
When you attempt to submit a worker job on a non-default cluster, you might encounter an `Illegal instruction` error. In such cases, the solution is to use a different `module swap` command. For example, to submit a worker job to the [`donphan` debug cluster](interactive_debug.md) from the login nodes, use:
- $ module swap env/slurm/donphan
-
+
+ ```
+ module swap env/slurm/donphan
+ ```
+
instead of
- $ module swap cluster/donphan
+
+ ```
+ module swap cluster/donphan
+ ```
We recommend using a `module swap cluster` command after submitting the jobs. Additional information about this as well as more comprehensive details concerning the 'Illegal instruction' error can be accessed [here](troubleshooting.md#multi-job-submissions-on-a-non-default-cluster).
## The Worker framework: Job arrays
[//]: # (sec:worker-framework-job-arrays)
First go to the right directory:
-$ cd ~/examples/Multi-job-submission/job_array
+
+```
+cd ~/examples/Multi-job-submission/job_array
+```
As a simple example, assume you have a serial program called *myprog*
that you want to run on various input files *input\[1-100\]*.
@@ -187,7 +205,10 @@ The details are
script/program to specialise for that job
The job could have been submitted using:
-$ qsub -t 1-100 my_prog.pbs
+
+```
+qsub -t 1-100 my_prog.pbs
+```
The effect was that rather than 1 job, the user would actually submit
100 jobs to the queue system. This was a popular feature of TORQUE, but
@@ -200,9 +221,7 @@ arrays" in its own way.
A typical job script for use with job arrays would look like this:
-job_array/job_array.pbs
-
-```shell
+```shell title="job_array/job_array.pbs"
{% include "examples/Multi-job-submission/job_array/job_array.pbs" %}
```
@@ -213,14 +232,17 @@ with those parameters.
Input for the program is stored in files with names such as input_1.dat,
input_2.dat, ..., input_100.dat in the ./input subdirectory.
-$ ls ./input
+
+```
+$ ls ./input
...
-$ more ./input/input_99.dat
+$ more ./input/input_99.dat
This is input file #99
Parameter #1 = 99
Parameter #2 = 25.67
Parameter #3 = Batch
-Parameter #4 = 0x562867
+Parameter #4 = 0x562867
+```
For the sole purpose of this exercise, we have provided a short
"test_set" program, which reads the "input" files and just copies them
@@ -229,18 +251,14 @@ file. The corresponding output computed by our "*test_set*" program will
be written to the *"./output"* directory in output_1.dat, output_2.dat,
..., output_100.dat files.
-job_array/test_set
-
-```shell
+```shell title="job_array/test_set"
{% include "examples/Multi-job-submission/job_array/test_set" %}
```
Using the "worker framework", a feature akin to job arrays can be used
with minimal modifications to the job script:
-job_array/test_set.pbs
-
-```shell
+```shell title="job_array/test_set.pbs"
{% include "examples/Multi-job-submission/job_array/test_set.pbs" %}
```
@@ -253,10 +271,13 @@ Note that
walltime=04:00:00).
The job is now submitted as follows:
-$ module load worker/1.6.12-foss-2021b
-$ wsub -t 1-100 -batch test_set.pbs
+
+```
+$ module load worker/1.6.12-foss-2021b
+$ wsub -t 1-100 -batch test_set.pbs
total number of work items: 100
-{{jobid}}
+{{jobid}}
+```
The "*test_set*" program will now be run for all 100 input files -- 8
concurrently -- until all computations are done. Again, a computation
@@ -265,16 +286,18 @@ work item in Worker speak.
Note that in contrast to TORQUE job arrays, a worker job array only
submits a single job.
-$ qstat
+
+```
+$ qstat
Job id Name User Time Use S Queue
--------------- ------------- --------- ---- ----- - -----
{{jobid}} test_set.pbs {{userid}} 0 Q
+```
+
And you can now check the generated output files:
-$ more ./output/output_99.dat
+
+```
+$ more ./output/output_99.dat
This is output file #99
Calculations done, no results
-
+```
## MapReduce: prologues and epilogue
@@ -299,33 +322,36 @@ is executed just once after the work on all work items has finished.
Technically, the master, i.e., the process that is responsible for
dispatching work and logging progress, executes the prologue and
epilogue.
-$ cd ~/examples/Multi-job-submission/map_reduce
+
+```
+cd ~/examples/Multi-job-submission/map_reduce
+```
The script "pre.sh" prepares the data by creating 100 different
input-files, and the script "post.sh" aggregates (concatenates) the
data.
First study the scripts:
-map_reduce/pre.sh
-```shell
+```shell title="map_reduce/pre.sh"
{% include "examples/Multi-job-submission/map_reduce/pre.sh" %}
```
-map_reduce/post.sh
-
-```shell
+```shell title="map_reduce/post.sh"
{% include "examples/Multi-job-submission/map_reduce/post.sh" %}
```
Then one can submit a MapReduce style job as follows:
-$ wsub -prolog pre.sh -batch test_set.pbs -epilog post.sh -t 1-100
+
+```
+$ wsub -prolog pre.sh -batch test_set.pbs -epilog post.sh -t 1-100
total number of work items: 100
{{jobid}}
-$ cat all_output.txt
+$ cat all_output.txt
...
-$ rm -r -f ./output/
+$ rm -r -f ./output/
+```
Note that the time taken for executing the prologue and the epilogue
should be added to the job's total walltime.
@@ -356,11 +382,17 @@ from the job's name and the job's ID, i.e., it has the form
`<jobname>.log<jobid>`. For the running example, this could be
`run.pbs.log{{jobid}}`, assuming the job's ID is {{jobid}}. To keep an eye on the
progress, one can use:
-$ tail -f run.pbs.log{{jobid}}
+
+```
+tail -f run.pbs.log{{jobid}}
+```
Alternatively, `wsummarize`, a Worker command that summarises a log
file, can be used:
-$ watch -n 60 wsummarize run.pbs.log{{jobid}}
+
+```
+watch -n 60 wsummarize run.pbs.log{{jobid}}
+```
This will summarise the log file every 60 seconds.
@@ -398,13 +430,19 @@ processed. Worker makes it very easy to resume such a job without having
to figure out which work items did complete successfully, and which
remain to be computed. Suppose the job that did not complete all its
work items had ID "445948".
-$ wresume -jobid {{jobid}}
+
+```
+wresume -jobid {{jobid}}
+```
This will submit a new job that will start to work on the work items
that were not done yet. Note that it is possible to change almost all
job parameters when resuming, specifically the requested resources such
as the number of cores and the walltime.
-$ wresume -l walltime=1:30:00 -jobid {{jobid}}}
+
+```
+wresume -l walltime=1:30:00 -jobid {{jobid}}
+```
Work items may fail to complete successfully for a variety of reasons,
e.g., a data file that is missing, a (minor) programming error, etc.
@@ -413,7 +451,10 @@ done, so resuming a job will only execute work items that did not
terminate either successfully, or reporting a failure. It is also
possible to retry work items that failed (preferably after the glitch
that caused them to fail was fixed).
-$ wresume -jobid {{jobid}} -retry
+
+```
+wresume -jobid {{jobid}} -retry
+```
By default, a job's prologue is not executed when it is resumed, while
its epilogue is. "wresume" has options to modify this default behaviour.
@@ -423,7 +464,9 @@ its epilogue is. "wresume" has options to modify this default behaviour.
This how-to introduces only Worker's basic features. The wsub command
has some usage information that is printed when the -help option is
specified:
-$ wsub -help
+
+```
+$ wsub -help
### usage: wsub -batch <batch-file>
# [-data <data-files>]
# [-prolog <prolog-file>]
@@ -453,7 +496,7 @@ specified:
# -t <array-req> : qsub's PBS array request options, e.g., 1-10
# <pbs-qsub-options> : options passed on to the queue submission
# command
-
+```
## Troubleshooting
diff --git a/mkdocs/docs/HPC/mympirun.md b/mkdocs/docs/HPC/mympirun.md
index 98fd91cd0c4..93a55fc44e3 100644
--- a/mkdocs/docs/HPC/mympirun.md
+++ b/mkdocs/docs/HPC/mympirun.md
@@ -12,8 +12,9 @@ README](https://github.com/hpcugent/vsc-mympirun/blob/master/README.md).
Before using `mympirun`, we first need to load its module:
-$ module load vsc-mympirun
-
+```
+module load vsc-mympirun
+```
As an exception, we don't specify a version here. The reason is that we
want to ensure that the latest version of the `mympirun` script is
@@ -47,14 +48,15 @@ The `--hybrid` option requires a positive number. This number specifies
the number of processes started on each available physical *node*. It
will ignore the number of available *cores* per node.
-$ echo $PBS_NUM_NODES
+```
+$ echo $PBS_NUM_NODES
2
-$ mympirun --hybrid 2 ./mpihello
+$ mympirun --hybrid 2 ./mpihello
Hello world from processor node3400.doduo.os, rank 1 out of 4 processors
Hello world from processor node3401.doduo.os, rank 3 out of 4 processors
Hello world from processor node3401.doduo.os, rank 2 out of 4 processors
Hello world from processor node3400.doduo.os, rank 0 out of 4 processors
-
+```
### Other options
@@ -74,6 +76,7 @@ You can do a so-called "dry run", which doesn't have any side-effects,
but just prints the command that `mympirun` would execute. You enable
this with the `--dry-run` flag:
-$ mympirun --dry-run ./mpi_hello
+```
+$ mympirun --dry-run ./mpi_hello
mpirun ... -genv I_MPI_FABRICS shm:dapl ... -np 16 ... ./mpi_hello
-
+```
diff --git a/mkdocs/docs/HPC/openFOAM.md b/mkdocs/docs/HPC/openFOAM.md
index 8f83201d6a4..04ed7a29c77 100644
--- a/mkdocs/docs/HPC/openFOAM.md
+++ b/mkdocs/docs/HPC/openFOAM.md
@@ -71,7 +71,8 @@ First of all, you need to pick and load one of the available `OpenFOAM`
modules. To get an overview of the available modules, run
'`module avail OpenFOAM`'. For example:
-$ module avail OpenFOAM
+```
+$ module avail OpenFOAM
------------------ /apps/gent/CO7/sandybridge/modules/all ------------------
OpenFOAM/v1712-foss-2017b OpenFOAM/4.1-intel-2017a
OpenFOAM/v1712-intel-2017b OpenFOAM/5.0-intel-2017a
@@ -81,7 +82,7 @@ modules. To get an overview of the available modules, run
OpenFOAM/2.4.0-intel-2017a OpenFOAM/5.0-20180108-intel-2018a
OpenFOAM/3.0.1-intel-2016b OpenFOAM/6-intel-2018a (D)
OpenFOAM/4.0-intel-2016b
-
+```
To pick a module, take into account the differences between the
different OpenFOAM versions w.r.t. features and API (see also [Different OpenFOAM releases](./#different-openfoam-releases)). If
@@ -94,8 +95,9 @@ that includes `intel-{{ current_year}}a`.
To prepare your environment for using OpenFOAM, load the `OpenFOAM`
module you have picked; for example:
-$ module load OpenFOAM/4.1-intel-2017a
-
+```
+module load OpenFOAM/11-foss-2023a
+```
### Sourcing the `$FOAM_BASH` script
@@ -107,8 +109,9 @@ location to this script. Assuming you are using `bash` in your shell
session or job script, you should always run the following command after
loading an `OpenFOAM` module:
-$ source $FOAM_BASH
-
+```
+source $FOAM_BASH
+```
### Defining utility functions used in tutorial cases
@@ -117,8 +120,9 @@ If you would like to use the `getApplication`, `runApplication`,
are used in OpenFOAM tutorials, you also need to `source` the
`RunFunctions` script:
-$ source $WM_PROJECT_DIR/bin/tools/RunFunctions
-
+```
+source $WM_PROJECT_DIR/bin/tools/RunFunctions
+```
Note that this needs to be done **after** sourcing `$FOAM_BASH` to make sure
`$WM_PROJECT_DIR` is defined.
@@ -129,8 +133,9 @@ If you are seeing `Floating Point Exception` errors, you can undefine
the `$FOAM_SIGFPE` environment variable that is defined by the
`$FOAM_BASH` script as follows:
-$ unset $FOAM_SIGFPE
-
+```
+unset FOAM_SIGFPE
+```
Note that this only prevents OpenFOAM from propagating floating point
exceptions, which then results in terminating the simulation. However,
@@ -218,8 +223,9 @@ processes used in a parallel OpenFOAM execution, the
`$MYMPIRUN_VARIABLESPREFIX` environment variable must be defined as
follows, prior to running the OpenFOAM simulation with `mympirun`:
-$ export MYMPIRUN_VARIABLESPREFIX=WM_PROJECT,FOAM,MPI
-
+```
+export MYMPIRUN_VARIABLESPREFIX=WM_PROJECT,FOAM,MPI
+```
Whenever you are instructed to use a command like `mpirun -np ...`,
use `mympirun ...` instead; `mympirun` will automatically detect the
@@ -236,8 +242,9 @@ make sure that the number of subdomains matches the number of processor
cores that will be used by `mympirun`. If not, you may run into an error
message like:
-number of processor directories = 4 is not equal to the number of processors = 16
-
+```
+number of processor directories = 4 is not equal to the number of processors = 16
+```
In this case, the case was decomposed into 4 subdomains, while the
OpenFOAM simulation was started with 16 processes through `mympirun`. To
@@ -264,8 +271,9 @@ by minimising the number of processor boundaries.
To visualise the processor domains, use the following command:
-$ mympirun foamToVTK -parallel -constant -time 0 -excludePatches '(".*.")'
-
+```
+mympirun foamToVTK -parallel -constant -time 0 -excludePatches '(".*.")'
+```
and then load the VTK files generated in the `VTK` folder into ParaView.
@@ -292,7 +300,7 @@ specify in `system/controlDict` (see also
of plane) rather than the entire domain;
- if you do not plan to change the parameters of the OpenFOAM
- simulation while it is running, **set runTimeModifiable to false** to avoid that OpenFOAM re-reads each
+ simulation while it is running, set **runTimeModifiable** to **false** to avoid that OpenFOAM re-reads each
of the `system/*Dict` files at every time step;
- if the results per individual time step are large, consider setting
@@ -322,7 +330,6 @@ See .
Example job script for `damBreak` OpenFOAM tutorial (see also
):
--- OpenFOAM_damBreak.sh --
-```bash
+```bash title="OpenFOAM_damBreak.sh"
{% include "./examples/OpenFOAM/OpenFOAM_damBreak.sh" %}
```
diff --git a/mkdocs/docs/HPC/program_examples.md b/mkdocs/docs/HPC/program_examples.md
index 96fbb42dccd..34138392886 100644
--- a/mkdocs/docs/HPC/program_examples.md
+++ b/mkdocs/docs/HPC/program_examples.md
@@ -2,11 +2,16 @@
# Program examples { #ch:program-examples}
If you have **not done so already**, copy our examples to your home directory by running the following command:
- cp -r {{ examplesdir }} ~/
-`~`(tilde) refers to your home directory, the directory you arrive by default when you login.
+```
+cp -r {{ examplesdir }} ~/
+```
+
+`~` (tilde) refers to your home directory, the directory you arrive in by default when you log in.
Go to our examples:
-cd ~/{{exampledir}}
+```
+cd ~/{{exampledir}}
+```
Here, we have just put together a number of examples for your
convenience. We made an effort to put comments inside the source files,
@@ -36,27 +41,26 @@ so the source code files are (should be) self-explanatory.
The above 2 OMP directories contain the following examples:
-| C Files | Fortran Files | Description |
-|-------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------|
-| omp_hello.c | omp_hello.f | Hello world |
-| omp_workshare1.c | omp_workshare1.f | Loop work-sharing |
-| omp_workshare2.c | omp_workshare2.f | Sections work-sharing |
-| omp_reduction.c | omp_reduction.f | Combined parallel loop reduction |
-| omp_orphan.c | omp_orphan.f | Orphaned parallel loop reduction |
-| omp_mm.c | omp_mm.f | Matrix multiply |
-| omp_getEnvInfo.c | omp_getEnvInfo.f | Get and print environment information |
-| omp_bug1.c
omp_bug1fix.c
omp_bug2.c
omp_bug3.c
omp_bug4.c
omp_bug4fix
omp_bug5.c
omp_bug5fix.c
omp_bug6.c | omp_bug1.f
omp_bug1fix.f
omp_bug2.f
omp_bug3.f
omp_bug4.f
omp_bug4fix
omp_bug5.f
omp_bug5fix.f
omp_bug6.f
| Programs with bugs and their solution |
+| C Files | Fortran Files | Description |
+|------------------|------------------|---------------------------------------|
+| omp_hello.c | omp_hello.f | Hello world |
+| omp_workshare1.c | omp_workshare1.f | Loop work-sharing |
+| omp_workshare2.c | omp_workshare2.f | Sections work-sharing |
+| omp_reduction.c | omp_reduction.f | Combined parallel loop reduction |
+| omp_orphan.c | omp_orphan.f | Orphaned parallel loop reduction |
+| omp_mm.c | omp_mm.f | Matrix multiply |
+| omp_getEnvInfo.c | omp_getEnvInfo.f | Get and print environment information |
+| omp_bug* | omp_bug* | Programs with bugs and their solution |
Compile by any of the following commands:
-
-
- C: |
- icc -openmp omp_hello.c -o hello\newline pgcc -mp omp_hello.c -o hello\newline gcc -fopenmp omp_hello.c -o hello |
-
-
- Fortran: |
- ifort -openmp omp_hello.f -o hello\newline pgf90 -mp omp_hello.f -o hello\newline gfortran -fopenmp omp_hello.f -o hello |
-
-
+
+| **Language** | **Commands** |
+|--------------|----------------------------------------|
+| **C:** | icc -openmp omp_hello.c -o hello |
+| | pgcc -mp omp_hello.c -o hello |
+| | gcc -fopenmp omp_hello.c -o hello |
+| **Fortran:** | ifort -openmp omp_hello.f -o hello |
+| | pgf90 -mp omp_hello.f -o hello |
+| | gfortran -fopenmp omp_hello.f -o hello |
Feel free to explore the examples.
diff --git a/mkdocs/docs/HPC/quick_reference_guide.md b/mkdocs/docs/HPC/quick_reference_guide.md
index 6141d038567..05de5dfeb77 100644
--- a/mkdocs/docs/HPC/quick_reference_guide.md
+++ b/mkdocs/docs/HPC/quick_reference_guide.md
@@ -3,282 +3,50 @@
Remember to substitute the usernames, login nodes, file names, ... for
your own.
-
-
-
- Login
- |
-
-
-
- Login
- |
-
- ssh {{userid}}@{{loginnode}}
- |
-
-
-
- Where am I?
- |
-
- hostname
- |
-
-
-
- Copy to {{hpc}}
- |
-
- scp foo.txt {{userid}}@{{loginnode}}:
- |
-
-
-
- Copy from {{hpc}}
- |
-
- scp {{userid}}@{{loginnode}}:foo.txt
- |
-
-
-
- Setup ftp session
- |
-
- sftp {{userid}}@{{loginnode}}
- |
-
-
+| **Login** | |
+|-------------------|-----------------------------------------|
+| Login | `ssh {{userid}}@{{loginnode}}` |
+| Where am I? | `hostname` |
+| Copy to {{hpc}} | `scp foo.txt {{userid}}@{{loginnode}}:` |
+| Copy from {{hpc}} | `scp {{userid}}@{{loginnode}}:foo.txt` |
+| Setup ftp session | `sftp {{userid}}@{{loginnode}}` |
-
-
-
- Modules
- |
-
-
-
- List all available modules
- |
-
- Module avail
- |
-
-
-
- List loaded modules
- |
-
- module list
- |
-
-
-
- Load module
- |
-
- module load example
- |
-
-
-
- Unload module
- |
-
- module unload example
- |
-
-
-
- Unload all modules
- |
-
- module purge
- |
-
-
-
- Help on use of module
- |
-
- module help
- |
-
-
+| **Modules** | |
+|----------------------------|-----------------------|
+| List all available modules | `module avail` |
+| List loaded modules | `module list` |
+| Load module | `module load example` |
+| Unload module | `module unload example` |
+| Unload all modules | `module purge` |
+| Help on use of module | `module help` |
-
-
-
- Jobs
- |
-
-
-
- Submit job with job script script.pbs
- |
-
- qsub script.pbs
- |
-
-
-
- Status of job with ID 12345
- |
-
- qstat 12345
- |
-
-{% if site != (gent or brussel) %}
-
-
- Possible start time of job with ID 12345 (not available everywhere)
- |
-
- showstart 12345
- |
-
-
-
- Check job with ID 12345 (not available everywhere)
- |
-
- checkjob 12345
- |
-
-{% endif %}
-
-
- Show compute node of job with ID 12345
- |
-
- qstat -n 12345
- |
-
-
-
- Delete job with ID 12345
- |
-
- qdel 12345
- |
-
-
-
- Status of all your jobs
- |
-
- qstat
- |
-
-
-
- Detailed status of your jobs + a list nodes they are running on
- |
-
- qstat -na
- |
-
-{% if site != (gent or brussel) %}
-
-
- Show all jobs on queue (not available everywhere)
- |
-
- showq
- |
-
-{% endif %}
-
-
- Submit Interactive job
- |
-
- qsub -I
- |
-
-
+| Command | Description |
+|-----------------------------------------------|---------------------------------------------------------|
+| `qsub script.pbs` | Submit job with job script `script.pbs` |
+| `qstat 12345` | Status of job with ID 12345 |
+{% if site != (gent or brussel) %} | `showstart 12345` | Possible start time of job with ID 12345 (not available everywhere) |
+| `checkjob 12345` | Check job with ID 12345 (not available everywhere) |
+{% endif %} | `qstat -n 12345` | Show compute node of job with ID 12345 |
+| `qdel 12345` | Delete job with ID 12345 |
+| `qstat` | Status of all your jobs |
+| `qstat -na` | Detailed status of your jobs + a list of nodes they are running on |
+{% if site != (gent or brussel) %} | `showq` | Show all jobs on queue (not available everywhere) |
+{% endif %} | `qsub -I` | Submit Interactive job |
-
-
-
- Disk quota
- |
-
-{% if site == gent %}
-
-
- Check your disk quota
- |
-
- see https://account.vscentrum.be
- |
-
-{% else %}
-
-
- Check your disk quota
- |
-
- mmlsquota
- |
-
-
-
- Check your disk quota nice
- |
-
- show_quota.py
- |
-
-{% endif %}
-
-
- Disk usage in current directory (.)
- |
-
- du -h
- |
-
-
-
-
-
- Worker Framework
- |
-
-
-
- Load worker module
- |
-
- module load worker/1.6.12-foss-2021b Don't forget to specify a version. To list available versions, use module avail worker/
- |
-
-
-
- Submit parameter sweep
- |
-
- wsub -batch weather.pbs -data data.csv
- |
-
-
-
- Submit job array
- |
-
- wsub -t 1-100 -batch test_set.pbs
- |
-
-
-
- Submit job array with prolog and epilog
- |
-
- wsub -prolog pre.sh -batch test_set.pbs -epilog post.sh -t 1-100
- |
-
-
+| **Disk quota** | |
+|-----------------------------------------------|-------------------------------------------------|
+{% if site == gent %} | Check your disk quota | see [https://account.vscentrum.be](https://account.vscentrum.be) |
+{% else %} | Check your disk quota | `mmlsquota` |
+| Check your disk quota (nicely formatted) | `show_quota.py` |
+{% endif %} | Disk usage in current directory (`.`) | `du -h` |
+
+
+
+| **Worker Framework** | |
+|-----------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------|
+| Load worker module | `module load worker/1.6.12-foss-2021b` Don't forget to specify a version. To list available versions, use `module avail worker/` |
+| Submit parameter sweep | `wsub -batch weather.pbs -data data.csv` |
+| Submit job array | `wsub -t 1-100 -batch test_set.pbs` |
+| Submit job array with prolog and epilog | `wsub -prolog pre.sh -batch test_set.pbs -epilog post.sh -t 1-100` |
diff --git a/mkdocs/docs/HPC/running_batch_jobs.md b/mkdocs/docs/HPC/running_batch_jobs.md
index 9eb61a2d09f..9c9eaf554bc 100644
--- a/mkdocs/docs/HPC/running_batch_jobs.md
+++ b/mkdocs/docs/HPC/running_batch_jobs.md
@@ -88,8 +88,9 @@ command, you can replace `module` with `ml`.
A large number of software packages are installed on the {{ hpc }} clusters. A
list of all currently available software can be obtained by typing:
-$ module available
-
+```
+module available
+```
It's also possible to execute `module av` or `module avail`, these are
shorter to type and will do the same thing.
@@ -133,8 +134,9 @@ same toolchain name and version can work together without conflicts.
To "activate" a software package, you load the corresponding module file
using the `module load` command:
-$ module load example
-
+```
+module load example
+```
This will load the most recent version of *example*.
@@ -145,8 +147,9 @@ lexicographical last after the `/`).
**However, you should specify a particular version to avoid surprises when newer versions are installed:**
-$ module load secondexample/2.7-intel-2016b
-
+```
+module load secondexample/2.7-intel-2016b
+```
The `ml` command is a shorthand for `module load`: `ml example/1.2.3` is
equivalent to `module load example/1.2.3`.
@@ -154,8 +157,9 @@ equivalent to `module load example/1.2.3`.
Modules need not be loaded one by one; the two `module load` commands
can be combined as follows:
-$ module load example/1.2.3 secondexample/2.7-intel-2016b
-
+```
+module load example/1.2.3 secondexample/2.7-intel-2016b
+```
This will load the two modules as well as their dependencies (unless
there are conflicts between both modules).
@@ -166,14 +170,15 @@ Obviously, you need to be able to keep track of the modules that are
currently loaded. Assuming you have run the `module load` commands
stated above, you will get the following:
-$ module list
+```
+$ module list
Currently Loaded Modulefiles:
1) example/1.2.3 6) imkl/11.3.3.210-iimpi-2016b
2) GCCcore/5.4.0 7) intel/2016b
3) icc/2016.3.210-GCC-5.4.0-2.26 8) examplelib/1.2-intel-2016b
4) ifort/2016.3.210-GCC-5.4.0-2.26 9) secondexample/2.7-intel-2016b
5) impi/5.1.3.181-iccifort-2016.3.210-GCC-5.4.0-2.26
-
+```
You can also just use the `ml` command without arguments to list loaded modules.
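+
+As a quick sketch, these two commands should give the same overview of loaded modules:
+
+```
+# 'ml' without arguments behaves like 'module list'
+module list
+ml
+```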
@@ -193,15 +198,16 @@ However, the dependencies of the package are NOT automatically unloaded;
you will have to unload the packages one by one. When the
`secondexample` module is unloaded, only the following modules remain:
-$ module unload secondexample
-$ module list
+```
+$ module unload secondexample
+$ module list
Currently Loaded Modulefiles:
1) example/1.2.3 5) impi/5.1.3.181-iccifort-2016.3.210-GCC-5.4.0-2.26
2) GCCcore/5.4.0 6) imkl/11.3.3.210-iimpi-2016b
3) icc/2016.3.210-GCC-5.4.0-2.26 7) intel/2016b
4) ifort/2016.3.210-GCC-5.4.0-2.26 8) examplelib/1.2-intel-2016b
-
+```
To unload the `secondexample` module, you can also use
`ml -secondexample`.
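+
+A minimal sketch of both ways to unload it:
+
+```
+# these two commands are equivalent
+module unload secondexample
+ml -secondexample
+```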
@@ -217,8 +223,9 @@ loaded will *not* result in an error.
In order to unload all modules at once, and hence be sure to start in a
clean state, you can use:
-$ module purge
-
+```
+module purge
+```
{% if site == gent -%}
This is always safe: the `cluster` module (the module that specifies
which cluster jobs will get submitted to) will not be unloaded (because
@@ -244,13 +251,15 @@ Consider the following example: the user decides to use the `example`
module and at that point in time, just a single version 1.2.3 is
installed on the cluster. The user loads the module using:
-$ module load example
-
+```
+module load example
+```
rather than
-$ module load example/1.2.3
-
+```
+module load example/1.2.3
+```
Everything works fine, up to the point where a new version of `example`
is installed, 4.5.6. From then on, the user's `load` command will load
@@ -259,28 +268,23 @@ unexpected problems. See for example [the following section on Module Conflicts]
Consider the following `example` modules:
-$ module avail example/
+```
+$ module avail example/
example/1.2.3
example/4.5.6
-
+```
Let's now generate a version conflict with the `example` module, and see
what happens.
-$ module av example/
+```
+$ module av example/
example/1.2.3 example/4.5.6
-$ module load example/1.2.3 example/4.5.6
+$ module load example/1.2.3 example/4.5.6
Lmod has detected the following error: A different version of the 'example' module is already loaded (see output of 'ml').
-$ module swap example/4.5.6
-
-
-
+$ module swap example/4.5.6
+```
Note: A `module swap` command combines the appropriate `module unload`
and `module load` commands.
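+
+As a rough sketch, the swap from the example above boils down to:
+
+```
+# roughly what 'module swap example/4.5.6' does behind the scenes
+module unload example
+module load example/4.5.6
+```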
@@ -289,7 +293,8 @@ and `module load` commands.
With the `module spider` command, you can search for modules:
-$ module spider example
+```
+$ module spider example
--------------------------------------------------------------------------------
example:
--------------------------------------------------------------------------------
@@ -305,11 +310,12 @@ With the `module spider` command, you can search for modules:
module spider example/1.2.3
--------------------------------------------------------------------------------
-
+```
It's also possible to get detailed information about a specific module:
-$ module spider example/1.2.3
+```
+$ module spider example/1.2.3
------------------------------------------------------------------------------------------
example: example/1.2.3
------------------------------------------------------------------------------------------
@@ -337,21 +343,23 @@ It's also possible to get detailed information about a specific module:
More information
================
- Homepage: https://example.com
-
+```
### Get detailed info
To get a list of all possible commands, type:
-$ module help
-
+```
+module help
+```
Or to get more information about one specific module package:
-$ module help example/1.2.3
+```
+$ module help example/1.2.3
----------- Module Specific Help for 'example/1.2.3' ---------------------------
This is just an example - Homepage: https://example.com/
-
+```
### Save and load collections of modules
@@ -364,52 +372,59 @@ In each `module` command shown below, you can replace `module` with
First, load all modules you want to include in the collections:
-$ module load example/1.2.3 secondexample/2.7-intel-2016b
-
+```
+module load example/1.2.3 secondexample/2.7-intel-2016b
+```
Now store it in a collection using `module save`. In this example, the
collection is named `my-collection`.
-$ module save my-collection
-
+```
+module save my-collection
+```
Later, for example in a jobscript or a new session, you can load all
these modules with `module restore`:
-$ module restore my-collection
-
+```
+module restore my-collection
+```
You can get a list of all your saved collections with the
`module savelist` command:
-$ module savelistr
+```
+$ module savelist
Named collection list (For LMOD_SYSTEM_NAME = "CO7-sandybridge"):
1) my-collection
-
+```
To get a list of all modules a collection will load, you can use the
`module describe` command:
-$ module describe my-collection
+```
+$ module describe my-collection
1) example/1.2.3 6) imkl/11.3.3.210-iimpi-2016b
2) GCCcore/5.4.0 7) intel/2016b
3) icc/2016.3.210-GCC-5.4.0-2.26 8) examplelib/1.2-intel-2016b
4) ifort/2016.3.210-GCC-5.4.0-2.26 9) secondexample/2.7-intel-2016b
5) impi/5.1.3.181-iccifort-2016.3.210-GCC-5.4.0-2.26
-
+```
To remove a collection, remove the corresponding file in
`$HOME/.lmod.d`:
-$ rm $HOME/.lmod.d/my-collection
-
+```
+rm $HOME/.lmod.d/my-collection
+```
### Getting module details
To see how a module would change the environment, you can use the
`module show` command:
-$ module show Python/2.7.12-intel-2016b
+```
+$ module show Python/2.7.12-intel-2016b
whatis("Description: Python is a programming language that lets youwork more quickly and integrate your systems more effectively. - Homepage: http://python.org/ ")
conflict("Python")
load("intel/2016b")
@@ -417,7 +432,7 @@ load("bzip2/1.0.6-intel-2016b")
...
prepend_path(...)
setenv("EBEXTSLISTPYTHON","setuptools-23.1.0,pip-8.1.2,nose-1.3.7,numpy-1.11.1,scipy-0.17.1,ytz-2016.4", ...)
-
+```
It's also possible to use the `ml show` command instead: they are
equivalent.
@@ -428,20 +443,6 @@ bunch of extensions: `numpy`, `scipy`, ...
You can also see the modules the `Python/2.7.12-intel-2016b` module
loads: `intel/2016b`, `bzip2/1.0.6-intel-2016b`, ...
-
-
-
If you're not sure what all of this means: don't worry, you don't have to know; just load the module and try to use the software.
## Getting system information about the HPC infrastructure
@@ -455,7 +456,8 @@ information about scheduled downtime, status of the system, ...
To check how many jobs are running in which queues, you can use the
`qstat -q` command:
-$ qstat -q
+```
+$ qstat -q
Queue Memory CPU Time Walltime Node Run Que Lm State
---------------- ------ -------- -------- ---- --- --- -- -----
default -- -- -- -- 0 0 -- E R
@@ -466,7 +468,7 @@ q1h -- -- 01:00:00 -- 0 1 -- E R
q24h -- -- 24:00:00 -- 0 0 -- E R
----- -----
337 82
-
+```
Here, there are 316 jobs running on the `long` queue, and 77 jobs
queued. We can also see that the `long` queue allows a maximum wall time
@@ -482,8 +484,9 @@ filled with jobs, completely filled with jobs, ....
You can also get this information in text form (per cluster separately)
with the `pbsmon` command:
-$ module swap cluster/donphan
-$ pbsmon
+```
+$ module swap cluster/donphan
+$ pbsmon
4001 4002 4003 4004 4005 4006 4007
_ j j j _ _ .
@@ -501,7 +504,7 @@ with the `pbsmon` command:
Node type:
ppn=36, mem=751GB
-
+```
`pbsmon` only outputs details of the cluster corresponding to the
currently loaded `cluster` module; see [the section on Specifying the cluster on which to run](./#specifying-the-cluster-on-which-to-run).
@@ -526,14 +529,16 @@ to your home directory, so that you have your **own personal** copy (editable an
over-writable) and that you can start using the examples. If you haven't
done so already, run these commands now:
-$ cd
-$ cp -r {{ examplesdir }} ~/
-
+```
+cd
+cp -r {{ examplesdir }} ~/
+```
First go to the directory with the first examples by entering the
command:
-$ cd ~/examples/Running-batch-jobs
-
+```
+cd ~/examples/Running-batch-jobs
+```
Each time you want to execute a program on the {{ hpc }} you'll need 2 things:
@@ -564,11 +569,12 @@ provided for you in the examples subdirectories.
List and check the contents with:
-$ ls -l
+```
+$ ls -l
total 512
-rw-r--r-- 1 {{ userid }} 193 Sep 11 10:34 fibo.pbs
-rw-r--r-- 1 {{ userid }} 609 Sep 11 10:25 fibo.pl
-
+```
In this directory you find a Perl script (named "fibo.pl") and a job
script (named "fibo.pbs").
@@ -584,7 +590,8 @@ login-node), so that you can see what the program does.
On the command line, you would run this using:
-$ ./fibo.pl
+```
+$ ./fibo.pl
[0] -> 0
[1] -> 1
[2] -> 1
@@ -615,9 +622,9 @@ On the command line, you would run this using:
[27] -> 196418
[28] -> 317811
[29] -> 514229
-
+```
-Remark: Recall that you have now executed the Perl script locally on one of
+Remark: Recall that you have now executed the Perl script locally on one of
the login-nodes of the {{ hpc }} cluster. Of course, this is not our final
intention; we want to run the script on any of the compute nodes. Also,
it is not considered good practice if you "abuse" the login-nodes
@@ -630,9 +637,7 @@ since these jobs require very little computing power.
The job script contains a description of the job by specifying the
command that need to be executed on the compute node:
--- fibo.pbs --
-
-```bash
+```bash title="fibo.pbs"
{% include "./examples/Running_batch_jobs/fibo.pbs" %}
```
@@ -644,15 +649,16 @@ specified on the command line.
This job script can now be submitted to the cluster's job system
for execution, using the qsub (Queue SUBmit) command:
-$ qsub fibo.pbs
+```
+$ qsub fibo.pbs
{{ jobid }}
-
+```
The qsub command returns a job identifier on the HPC cluster. The
important part is the number (e.g., "{{ jobid }} "); this is a unique identifier for
the job and can be used to monitor and manage your job.
-Remark: the modules that were loaded when you submitted the job will *not* be
+Remark: the modules that were loaded when you submitted the job will *not* be
loaded when the job is started. You should always specify the
`module load` statements that are required for your job in the job
script itself.
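+
+A minimal sketch of a job script that loads its own modules (the module name and resource request are only illustrative):
+
+```bash
+#!/bin/bash
+#PBS -l walltime=00:10:00
+# load the required software inside the job script itself ('example/1.2.3' is illustrative)
+module load example/1.2.3
+# go to the directory from which the job was submitted
+cd $PBS_O_WORKDIR
+./fibo.pl
+```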
@@ -669,19 +675,21 @@ monitor jobs in the queue.
After your job was started, and ended, check the contents of the
directory:
-$ ls -l
+```
+$ ls -l
total 768
-rw-r--r-- 1 {{ userid }} {{ userid }} 44 Feb 28 13:33 fibo.pbs
-rw------- 1 {{ userid }} {{ userid }} 0 Feb 28 13:33 fibo.pbs.e{{ jobid }}
-rw------- 1 {{ userid }} {{ userid }} 1010 Feb 28 13:33 fibo.pbs.o{{ jobid }}
-rwxrwxr-x 1 {{ userid }} {{ userid }} 302 Feb 28 13:32 fibo.pl
-
+```
Explore the contents of the 2 new files:
-$ more fibo.pbs.o{{ jobid }}
-$ more fibo.pbs.e{{ jobid }}
-
+```
+$ more fibo.pbs.o{{ jobid }}
+$ more fibo.pbs.e{{ jobid }}
+```
These files are used to store the standard output and error that would
otherwise be shown in the terminal window. By default, they have the
@@ -766,8 +774,9 @@ the environment so you get access to all modules installed on the `{{ otherclust
cluster, and to be able to submit jobs to the `{{ othercluster }}` scheduler so your jobs
will start on `{{ othercluster }}` instead of the default `{{ defaultcluster }}` cluster.
-$ module swap cluster/{{ othercluster }}
-
+```
+module swap cluster/{{ othercluster }}
+```
Note: the `{{ othercluster }}` modules may not work directly on the login nodes, because the
login nodes do not have the same architecture as the `{{ othercluster }}` cluster, they have
@@ -778,7 +787,8 @@ this.
To list the available cluster modules, you can use the
`module avail cluster/` command:
-$ module avail cluster/
+```
+$ module avail cluster/
--------------------------------------- /etc/modulefiles/vsc ----------------------------------------
cluster/accelgor (S) cluster/doduo (S,L) cluster/gallade (S) cluster/skitty (S)
cluster/default cluster/donphan (S) cluster/joltik (S)
@@ -789,8 +799,8 @@ To list the available cluster modules, you can use the
D: Default Module
If you need software that is not listed,
-request it via https://www.ugent.be/hpc/en/support/software-installation-request
-
+request it via https://www.ugent.be/hpc/en/support/software-installation-request
+```
As indicated in the output above, each `cluster` module is a so-called
sticky module, i.e., it will not be unloaded when `module purge` (see [the section on purging modules](./#purging-all-modules))
@@ -841,8 +851,9 @@ Using the job ID that `qsub` returned, there are various ways to monitor
the status of your job. In the following commands, replace `12345` with
the job ID `qsub` returned.
-$ qstat 12345
-
+```
+qstat 12345
+```
{% if site != (gent or brussel) %}
To show an estimated start time for your job (note that this may be very
@@ -867,25 +878,28 @@ error messages that may prevent your job from starting:
To show on which compute nodes your job is running, at least, when it is
running:
-$ qstat -n 12345
-
+```
+qstat -n 12345
+```
To remove a job from the queue so that it will not run, or to stop a job
that is already running, use:
-$ qdel 12345
-
+```
+qdel 12345
+```
When you have submitted several jobs (or you just forgot about the job
ID), you can retrieve the status of all your jobs that are submitted and
are not yet finished using:
-$ qstat
+```
+$ qstat
:
Job ID Name User Time Use S Queue
----------- ------- --------- -------- - -----
{{ jobid }} .... mpi {{ userid }} 0 Q short
-
+```
Here:
@@ -1068,8 +1082,9 @@ properly.
The **qsub** command takes several options to specify the requirements, of which
we list the most commonly used ones below.
-$ qsub -l walltime=2:30:00
-
+```
+qsub -l walltime=2:30:00 ...
+```
For the simplest cases, only the maximum estimated execution
time (called "walltime") is really important. Here, the job requests 2
@@ -1088,8 +1103,9 @@ before the walltime kills your main process, you have to kill the main
command yourself before the walltime runs out and then copy the file
back. See [the section on Running a command with a maximum time limit](../jobscript_examples/#running-a-command-with-a-maximum-time-limit) for how to do this.
-$ qsub -l mem=4gb
-
+```
+qsub -l mem=4gb ...
+```
The job requests 4 GB of RAM memory. As soon as the job tries to use
more memory, it will be "killed" (terminated) by the job scheduler.
@@ -1106,15 +1122,17 @@ per node" and "number of cores in a node" please consult
.
{% endif %}
-$ qsub -l nodes=5:ppn=2
-
+```
+qsub -l nodes=5:ppn=2 ...
+```
The job requests 5 compute nodes with two cores on each node (ppn stands
for "processors per node", where "processors" here actually means
"CPU cores").
-$ qsub -l nodes=1:westmere
-
+```
+qsub -l nodes=1:westmere
+```
The job requests just one node, but it should have an Intel Westmere
processor. A list with site-specific properties can be found in the next
@@ -1123,8 +1141,9 @@ website.
These options can either be specified on the command line, e.g.
-$ qsub -l nodes=1:ppn,mem=2gb fibo.pbs
-
+```
+qsub -l nodes=1:ppn,mem=2gb fibo.pbs
+```
or in the job script itself using the #PBS-directive, so "fibo.pbs"
could be modified to:
@@ -1193,13 +1212,14 @@ located by default in the directory where you issued the *qsub* command.
When you navigate to that directory and list its contents, you should
see them:
-$ ls -l
+```
+$ ls -l
total 1024
-rw-r--r-- 1 {{ userid }} 609 Sep 11 10:54 fibo.pl
-rw-r--r-- 1 {{ userid }} 68 Sep 11 10:53 fibo.pbs
-rw------- 1 {{ userid }} 52 Sep 11 11:03 fibo.pbs.e{{ jobid }}
-rw------- 1 {{ userid }} 1307 Sep 11 11:03 fibo.pbs.o{{ jobid }}
-
+```
In our case, our job has created both an output file ('fibo.pbs.o{{ jobid }}') and an error
file ('fibo.pbs.e{{ jobid }}') containing info written to *stdout* and *stderr*
@@ -1207,11 +1227,12 @@ respectively.
Inspect the generated output and error files:
-$ cat fibo.pbs.o{{ jobid }}
+```
+$ cat fibo.pbs.o{{ jobid }}
...
-$ cat fibo.pbs.e{{ jobid }}
+$ cat fibo.pbs.e{{ jobid }}
...
-
+```
## E-mail notifications
{% if site != gent %}
@@ -1259,15 +1280,17 @@ or
These options can also be specified on the command line. Try it and see
what happens:
-$ qsub -m abe fibo.pbs
-
+```
+qsub -m abe fibo.pbs
+```
The system will use the e-mail address that is connected to your VSC
account. You can also specify an alternate e-mail address with the `-M`
option:
-$ qsub -m b -M john.smith@example.com fibo.pbs
-
+```
+qsub -m b -M john.smith@example.com fibo.pbs
+```
will send an e-mail to john.smith@example.com when the job begins.
@@ -1279,9 +1302,10 @@ might be a problem as they might both be run at the same time.
So the following example might go wrong:
-$ qsub job1.sh
-$ qsub job2.sh
-
+```
+$ qsub job1.sh
+$ qsub job2.sh
+```
You can make jobs that depend on other jobs. This can be useful for
breaking up large jobs into smaller jobs that can be run in a pipeline.
@@ -1289,9 +1313,10 @@ The following example will submit 2 jobs, but the second job (`job2.sh`)
will be held (`H` status in `qstat`) until the first job successfully
completes. If the first job fails, the second will be cancelled.
-$ FIRST_ID=$ (qsub job1.sh)
-$ qsub -W depend=afterok:$FIRST_ID job2.sh
-
+```
+$ FIRST_ID=$(qsub job1.sh)
+$ qsub -W depend=afterok:$FIRST_ID job2.sh
+```
`afterok` means "After OK", or in other words, after the first job
successfully completed.
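+
+The same pattern extends to longer pipelines; a sketch using a hypothetical third script `job3.sh`:
+
+```
+# job3.sh is a hypothetical third script in the same pipeline
+$ FIRST_ID=$(qsub job1.sh)
+$ SECOND_ID=$(qsub -W depend=afterok:$FIRST_ID job2.sh)
+$ qsub -W depend=afterok:$SECOND_ID job3.sh
+```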
diff --git a/mkdocs/docs/HPC/running_interactive_jobs.md b/mkdocs/docs/HPC/running_interactive_jobs.md
index f33d546de70..8b97fa70d3b 100644
--- a/mkdocs/docs/HPC/running_interactive_jobs.md
+++ b/mkdocs/docs/HPC/running_interactive_jobs.md
@@ -21,8 +21,9 @@ the computing resources.
The syntax for *qsub* for submitting an interactive PBS job is:
-$ qsub -I <... pbs directives ...>
-
+```
+$ qsub -I <... pbs directives ...>
+```
## Interactive jobs, without X support
@@ -31,18 +32,20 @@ The syntax for *qsub* for submitting an interactive PBS job is:
First of all, in order to know on which computer you're working, enter:
-$ hostname -f
+```
+$ hostname -f
{{ loginhost }}
-
+```
This means that you're now working on the login node `{{ loginhost }}` of the cluster.
The most basic way to start an interactive job is the following:
-$ qsub -I
+```
+$ qsub -I
qsub: waiting for job {{ jobid }} to start
qsub: job {{ jobid }} ready
-
+```
There are two things of note here.
@@ -57,9 +60,10 @@ There are two things of note here.
In order to know on which compute-node you're working, enter again:
-$ hostname -f
+```
+$ hostname -f
{{ computenode }}
-
+```
Note that we are now working on the compute-node called "*{{ computenode }}*". This is the
compute node, which was assigned to us by the scheduler after issuing
@@ -87,10 +91,11 @@ Now, go to the directory of our second interactive example and run the
program "primes.py". This program will ask you for an upper limit
($> 1$) and will print all the primes between 1 and your upper limit:
-$ cd ~/{{ exampledir }}
-$ ./primes.py
+```
+$ cd ~/{{ exampledir }}
+$ ./primes.py
This program calculates all primes between 1 and your upper limit.
-Enter your upper limit (>1): 50
+Enter your upper limit (>1): 50
Start Time: 2013-09-11 15:49:06
[Prime#1] = 1
[Prime#2] = 2
@@ -110,12 +115,13 @@ Start Time: 2013-09-11 15:49:06
[Prime#16] = 47
End Time: 2013-09-11 15:49:06
Duration: 0 seconds.
-
+```
You can exit the interactive session with:
-$ exit
-
+```
+$ exit
+```
Note that you can now use this allocated node for 1 hour. After this
hour you will be automatically disconnected. You can change this "usage
@@ -125,8 +131,9 @@ watching the clock on the wall.)
You can work for 3 hours by:
-$ qsub -I -l walltime=03:00:00
-
+```
+qsub -I -l walltime=03:00:00
+```
If the walltime of the job is exceeded, the (interactive) job will be
killed and your connection to the compute node will be closed. So do
@@ -160,9 +167,7 @@ Download the latest version of the XQuartz package on:
and install the XQuartz.pkg
package.
-
![image](img/img0512.png)
-
The installer will take you through the installation procedure, just
continue clicking ++"Continue"++ on the various screens that will pop-up until your
@@ -171,9 +176,7 @@ installation was successful.
A reboot is required before XQuartz will correctly open graphical
applications.
-
![image](img/img0513.png)
-
{% endif %}
{% if OS == windows %}
##### Install Xming
@@ -191,9 +194,7 @@ The first task is to install the Xming software.
4. When selecting the components that need to be installed, make sure
to select "*XLaunch wizard*" and "*Normal PuTTY Link SSH client*".
-
![image](img/img0500.png)
-
5. We suggest to create a Desktop icon for Xming and XLaunch.
@@ -206,28 +207,20 @@ And now we can run Xming:
2. Select ++"Multiple Windows"++. This will open each application in a separate window.
-
![image](img/img0501.png)
-
3. Select ++"Start no client"++ to make XLaunch wait for other programs (such as PuTTY).
-
![image](img/img0502.png)
-
4. Select ++"Clipboard"++ to share the clipboard.
-
![image](img/img0503.png)
-
5. Finally ++"Save configuration"++ into a file. You can keep the default filename and save it
in your Xming installation directory.
-
![image](img/img0504.png)
-
6. Now Xming is running in the background ...
and you can launch a graphical application in your PuTTY terminal.
@@ -237,27 +230,27 @@ And now we can run Xming:
8. In order to test the X-server, run "*xclock*". "*xclock*" is the
standard GUI clock for the X Window System.
-$ xclock
-
+```
+xclock
+```
You should see the XWindow clock application appearing on your Windows
machine. The "*xclock*" application runs on the login-node of the {{ hpc }}, but
is displayed on your Windows machine.
-
![image](img/img0505.png)
-
You can close your clock and connect further to a compute node with
again your X-forwarding enabled:
-$ qsub -I -X
+```
+$ qsub -I -X
qsub: waiting for job {{ jobid }} to start
qsub: job {{ jobid }} ready
-$ hostname -f
+$ hostname -f
{{ computenode }}
-$ xclock
-
+$ xclock
+```
and you should see your clock again.
@@ -309,9 +302,7 @@ the cluster
2. In the "*Category*" pane, expand ++"Connection>SSh"++, and select as show below:
-
![image](img/img0506.png)
-
3. In the ++"Source port"++ field, enter the local port to use (e.g., *5555*).
@@ -334,41 +325,40 @@ running on a compute node on the {{ hpc }}) transferred to your personal screen,
you will need to reconnect to the {{ hpc }} with X-forwarding enabled, which is
done with the "-X" option.
-
![image](img/ch5-interactive-mode.png)
-
First exit and reconnect to the {{ hpc }} with X-forwarding enabled:
-$ exit
-$ ssh -X {{ userid }}@{{ loginnode }}
-$ hostname -f
+```
+$ exit
+$ ssh -X {{ userid }}@{{ loginnode }}
+$ hostname -f
{{ loginhost }}
-
+```
First, check whether GUIs on the login node are correctly forwarded
to the screen of your local machine. An easy way to test this is by
running a small X-application on the login node. Type:
-$ xclock
-
+```
+$ xclock
+```
And you should see a clock appearing on your screen.
-
![image](img/img0507.png)
-
You can close your clock and connect further to a compute node with
again your X-forwarding enabled:
-$ qsub -I -X
+```
+$ qsub -I -X
qsub: waiting for job {{ jobid }} to start
qsub: job {{ jobid }} ready
-$ hostname -f
+$ hostname -f
{{ computenode }}
-$ xclock
-
+$ xclock
+```
and you should see your clock again.
{% endif %}
@@ -380,15 +370,14 @@ screen, but also asks you to click a button.
Now run the message program:
-$ cd ~/{{ exampledir }}
-./message.py
-
+```
+cd ~/{{ exampledir }}
+./message.py
+```
You should see the following message appearing.
-
![image](img/img0508.png)
-
Click any button and see what happens.
diff --git a/mkdocs/docs/HPC/running_jobs_with_input_output_data.md b/mkdocs/docs/HPC/running_jobs_with_input_output_data.md
index c8393da45de..af9bb1bfecd 100644
--- a/mkdocs/docs/HPC/running_jobs_with_input_output_data.md
+++ b/mkdocs/docs/HPC/running_jobs_with_input_output_data.md
@@ -13,18 +13,22 @@ and where that you can collect your results.
First go to the directory:
-$ cd ~/{{ exampledir }}
-
+```
+cd ~/{{ exampledir }}
+```
!!! note
If the example directory is not yet present, copy it to your home directory:
- $ cp -r {{ examplesdir }} ~/
+ ```
+    cp -r {{ examplesdir }} ~/
+ ```
List and check the contents with:
-ls -l
+```
+$ ls -l
total 2304
-rwxrwxr-x 1 {{ userid }} 682 Sep 13 11:34 file1.py
-rw-rw-r-- 1 {{ userid }} 212 Sep 13 11:54 file1a.pbs
@@ -34,13 +38,12 @@ total 2304
-rwxrwxr-x 1 {{ userid }} 2393 Sep 13 10:40 file2.py
-rw-r--r-- 1 {{ userid }} 1393 Sep 13 10:41 file3.pbs
-rwxrwxr-x 1 {{ userid }} 2393 Sep 13 10:40 file3.py
-
+```
Now, let us inspect the contents of the first executable (which is just
a Python script with execute permission).
--- file1.py --
-```python
+```python title="file1.py"
{% include "./examples/Running_jobs_with_input_output_data/file1.py" %}
```
The code of the Python script is self-explanatory:
Check the contents of the first job script:
--- file1a.pbs --
-```bash
+```bash title="file1a.pbs"
{% include "./examples/Running_jobs_with_input_output_data/file1a.pbs" %}
```
@@ -66,13 +68,15 @@ paths.
Submit it:
-$ qsub file1a.pbs
-
+```
+qsub file1a.pbs
+```
After the job has finished, inspect the local directory again, i.e., the
directory where you executed the *qsub* command:
-ls -l
+```
+$ ls -l
total 3072
-rw-rw-r-- 1 {{ userid }} 90 Sep 13 13:13 Hello.txt
-rwxrwxr-x 1 {{ userid }} 693 Sep 13 13:03 file1.py*
@@ -85,7 +89,7 @@ total 3072
-rwxrwxr-x 1 {{ userid }} 2393 Sep 13 10:40 file2.py*
-rw-r--r-- 1 {{ userid }} 1393 Sep 13 10:41 file3.pbs
-rwxrwxr-x 1 {{ userid }} 2393 Sep 13 10:40 file3.py*
-
+```
Some observations:
@@ -99,11 +103,12 @@ Some observations:
Inspect their contents ... and remove the files
-$ cat Hello.txt
-$ cat file1a.pbs.o{{ jobid }}
-$ cat file1a.pbs.e{{ jobid }}
-$ rm Hello.txt file1a.pbs.o{{ jobid }} file1a.pbs.e{{ jobid }}
-
+```
+$ cat Hello.txt
+$ cat file1a.pbs.o{{ jobid }}
+$ cat file1a.pbs.e{{ jobid }}
+$ rm Hello.txt file1a.pbs.o{{ jobid }} file1a.pbs.e{{ jobid }}
+```
!!! tip
Type `cat H` and press the Tab button (looks like ++tab++), and it will **expand** into
@@ -113,18 +118,18 @@ Inspect their contents ... and remove the files
Check the contents of the job script and execute it.
--- file1b.pbs --
-```bash
+```bash title="file1b.pbs"
{% include "./examples/Running_jobs_with_input_output_data/file1b.pbs" %}
```
Inspect the contents again ... and remove the generated files:
-$ ls
+```
+$ ls
Hello.txt file1a.pbs file1c.pbs file2.pbs file3.pbs my_serial_job.e{{ jobid }}
file1.py* file1b.pbs file2.py* file3.py* my_serial_job.o{{ jobid }}
-$ rm Hello.txt my_serial_job.*
-
+$ rm Hello.txt my_serial_job.*
+```
Here, the option "`-N`" was used to explicitly assign a name to the job.
This overwrote the JOBNAME variable, and resulted in a different name
@@ -137,8 +142,7 @@ defaults to the name of the job script.
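+
+For reference, the job name can be set either inside the job script or on the command line; a minimal sketch using the name from this example:
+
+```
+# inside the job script:
+#   #PBS -N my_serial_job
+# or equivalently on the command line:
+qsub -N my_serial_job file1b.pbs
+```
+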
You can also specify the name of *stdout* and *stderr* files explicitly
by adding two lines in the job script, as in our third example:
--- file1c.pbs --
-```bash
+```bash title="file1c.pbs"
{% include "./examples/Running_jobs_with_input_output_data/file1c.pbs" %}
```
@@ -156,98 +160,21 @@ store your data depends on the purpose, but also the size and type of
usage of the data.
The following locations are available:
-
-
-
- Variable
- |
-
- Description
- |
-
-
-
- Long-term storage slow filesystem, intended for smaller files
- |
-
-
-
- $VSC_HOME
- |
-
- For your configuration files and other small files, see the section on your home directory.
- The default directory is user/{{ site }}/xxx/{{ userid }}.
- The same file system is accessible from all sites, i.e., you'll see the same contents in $VSC_HOME on all sites.
- |
-
-
-
- $VSC_DATA
- |
-
- A bigger "workspace", for datasets, results, logfiles, etc. see the section on your data directory.
- The default directory is data/{{ site }}/xxx/{{ userid }}.
- The same file system is accessible from all sites.
- |
-
-
-
- Fast temporary storage
- |
-
-
-
- $VSC_SCRATCH_NODE
- |
-
- For temporary or transient data on the local compute node, where fast access is important; see the section on your scratch space.
- This space is available per node. The default directory is /tmp. On different nodes, you'll see different content.
- |
-
-
-
- $VSC_SCRATCH
- |
-
- For temporary or transient data that has to be accessible from all nodes of a cluster (including the login nodes)
- The default directory is scratch/{{ site }}/xxx/{{ userid }}. This directory is cluster- or site-specific: On different sites, and sometimes on different clusters on the same site, you'll get a different directory with different content.
- |
-
-
-
- $VSC_SCRATCH_SITE
- |
-
- Currently the same as $VSC_SCRATCH, but could be used for a scratch space shared accross all clusters at a site in the future. See the section on your scratch space.
- |
-
-
-
- $VSC_SCRATCH_GLOBAL
- |
-
- Currently the same as $VSC_SCRATCH, but could be used for a scratch space shared accross all clusters of the VSC in the future. See the section on your scratch space.
- |
-
-{% if site == gent %}
-
-
- $VSC_SCRATCH_CLUSTER
- |
-
- The scratch filesystem closest to the cluster.
- |
-
-
-
- $VSC_SCRATCH_ARCANINE
- |
-
- A separate (smaller) shared scratch filesystem, powered by SSDs. This scratch filesystem is intended for very I/O-intensive workloads.
- |
-
+
+| **Variable** | **Description** |
+|-------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+|                         | *Long-term storage: slow filesystem, intended for smaller files* |
+| `$VSC_HOME` | For your configuration files and other small files, see [the section on your home directory.](./#your-home-directory-vsc_home) The default directory is `user/{{ site }}/xxx/{{ userid }}`. The same file system is accessible from all sites, i.e., you'll see the same contents in $VSC_HOME on all sites. |
+| `$VSC_DATA` | A bigger "workspace", for **datasets**, results, logfiles, etc. see [the section on your data directory.](./#your-data-directory-vsc_data) The default directory is `data/{{ site }}/xxx/{{ userid }}`. The same file system is accessible from all sites. |
+| | *Fast temporary storage* |
+| `$VSC_SCRATCH_NODE` | For **temporary** or transient data on the local compute node, where fast access is important; see [the section on your scratch space.](./#your-scratch-space-vsc_scratch) This space is available per node. The default directory is `/tmp`. On different nodes, you'll see different content. |
+| `$VSC_SCRATCH` | For **temporary** or transient data that has to be accessible from all nodes of a cluster (including the login nodes). The default directory is `scratch/{{ site }}/xxx/{{ userid }}`. This directory is cluster- or site-specific: On different sites, and sometimes on different clusters on the same site, you'll get a different directory with different content. |
+| `$VSC_SCRATCH_SITE` | Currently the same as $VSC_SCRATCH, but could be used for a scratch space shared across all clusters at a site in the future. See [the section on your scratch space.](./#your-scratch-space-vsc_scratch) |
+| `$VSC_SCRATCH_GLOBAL` | Currently the same as $VSC_SCRATCH, but could be used for a scratch space shared across all clusters of the VSC in the future. See [the section on your scratch space.](./#your-scratch-space-vsc_scratch) |
+{% if site == gent %} | `$VSC_SCRATCH_CLUSTER` | The scratch filesystem closest to the cluster. |
+| `$VSC_SCRATCH_ARCANINE` | A separate (smaller) shared scratch filesystem, powered by SSDs. This scratch filesystem is intended for very I/O-intensive workloads. |
{% endif %}
-
+
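+A minimal sketch of how these variables are typically used in a job script (the file names are only illustrative):
+
+```bash
+#!/bin/bash
+# stage input data from long-term storage to fast scratch space
+cp $VSC_DATA/input.dat $VSC_SCRATCH/
+cd $VSC_SCRATCH
+# ... run your computation here ...
+# copy the results back to long-term storage afterwards
+cp results.dat $VSC_DATA/
+```
+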
Since these directories are not necessarily mounted on the same
locations over all sites, you should always (try to) use the environment
@@ -379,15 +306,17 @@ access your UGent home drive and shares. To allow this you need a
ticket. This requires that you first authenticate yourself with your
UGent username and password by running:
-$ kinit yourugentusername@UGENT.BE
+```
+$ kinit yourugentusername@UGENT.BE
Password for yourugentusername@UGENT.BE:
-
+```
Now you should be able to access your files running
-$ ls /UGent/yourugentusername
+```
+$ ls /UGent/yourugentusername
home shares www
-
+```
Please note the shares will only be mounted when you access this folder.
You should specify your complete username - tab completion will not
@@ -396,48 +325,54 @@ work.
If you want to use the UGent shares longer than 24 hours, you should request
a ticket valid for up to a week by running
-$ kinit yourugentusername@UGENT.BE -r 7d
-
+```
+kinit yourugentusername@UGENT.BE -r 7d
+```
You can verify your authentication ticket and expiry dates yourself by
running `klist`:
-$ klist
+```
+$ klist
...
Valid starting Expires Service principal
14/07/20 15:19:13 15/07/20 01:19:13 krbtgt/UGENT.BE@UGENT.BE
renew until 21/07/20 15:19:13
-
+```
Your ticket is valid for 10 hours, but you can renew it before it
expires.
To renew your tickets, simply run
-$ kinit -R
-
+```
+kinit -R
+```
If you want your ticket to be renewed automatically up to the maximum
expiry date, you can run
-$ krenew -b -K 60
-
+```
+krenew -b -K 60
+```
Each hour the process will check if your ticket should be renewed.
We strongly advise disabling access to your shares once it is no longer
needed:
-$ kdestroy
-
+```
+kdestroy
+```
If you get an error "*Unknown credential cache type while getting
default ccache*" (or similar) and you use conda, then please deactivate conda
before you use the commands in this chapter.
-$ conda deactivate
-
+```
+conda deactivate
+```
### UGent shares with globus
@@ -447,7 +382,8 @@ endpoint. To do that, you have to ssh to the globus endpoint from a
loginnode. You will be prompted for your UGent username and password to
authenticate:
-$ ssh globus
+```
+$ ssh globus
UGent username:ugentusername
Password for ugentusername@UGENT.BE:
Shares are available in globus endpoint at /UGent/ugentusername/
@@ -460,16 +396,17 @@ Valid starting Expires Service principal
renew until 05/08/20 15:56:40
Tickets will be automatically renewed for 1 week
Connection to globus01 closed.
-
+```
Your shares will then be available at /UGent/ugentusername/ under the
globus VSC tier2 endpoint. Tickets will be renewed automatically for 1
week, after which you'll need to run this again. We advise disabling
access to your shares within globus once access is no longer needed:
-$ ssh globus01 destroy
+```
+$ ssh globus01 destroy
Succesfully destroyed session
-
+```
{% endif %}
### Pre-defined quotas
@@ -568,15 +505,16 @@ Check the Python and the PBS file, and submit the job: Remember that
this is already a more serious (disk-I/O and computational intensive)
job, which takes approximately 3 minutes on the {{ hpc }}.
-$ cat file2.py
-$ cat file2.pbs
-$ qsub file2.pbs
-$ qstat
-$ ls -l
-$ echo $VSC_SCRATCH
-$ ls -l $VSC_SCRATCH
-$ more $VSC_SCRATCH/primes_1.txt
-
+```
+$ cat file2.py
+$ cat file2.pbs
+$ qsub file2.pbs
+$ qstat
+$ ls -l
+$ echo $VSC_SCRATCH
+$ ls -l $VSC_SCRATCH
+$ more $VSC_SCRATCH/primes_1.txt
+```
## Reading Input files
@@ -601,13 +539,14 @@ In this exercise, you will
Check the Python and the PBS file, and submit the job:
-$ cat file3.py
-$ cat file3.pbs
-$ qsub file3.pbs
-$ qstat
-$ ls -l
-$ more $VSC_SCRATCH/primes_2.txt
-
+```
+$ cat file3.py
+$ cat file3.pbs
+$ qsub file3.pbs
+$ qstat
+$ ls -l
+$ more $VSC_SCRATCH/primes_2.txt
+```
## How much disk space do I get?
### Quota
@@ -694,23 +633,25 @@ into the login nodes of that VSC site).
{% else %}
The "`show_quota`" command has been developed to show you the status of
your quota in a readable format:
-$ show_quota
+```
+$ show_quota
VSC_DATA: used 81MB (0%) quota 25600MB
VSC_HOME: used 33MB (1%) quota 3072MB
VSC_SCRATCH: used 28MB (0%) quota 25600MB
VSC_SCRATCH_GLOBAL: used 28MB (0%) quota 25600MB
VSC_SCRATCH_SITE: used 28MB (0%) quota 25600MB
-
+```
or on the UAntwerp clusters
-$ module load scripts
-$ show_quota
+```
+$ module load scripts
+$ show_quota
VSC_DATA: used 81MB (0%) quota 25600MB
VSC_HOME: used 33MB (1%) quota 3072MB
VSC_SCRATCH: used 28MB (0%) quota 25600MB
VSC_SCRATCH_GLOBAL: used 28MB (0%) quota 25600MB
VSC_SCRATCH_SITE: used 28MB (0%) quota 25600MB
-
+```
With this command, you can easily keep track of the consumption of your total disk
quota, as it is expressed in percentages. Depending on which
@@ -725,14 +666,15 @@ directories are responsible for the consumption of your disk space. You
can check the size of all subdirectories in the current directory with
the "`du`" (**Disk Usage**) command:
-$ du
+```
+$ du
256 ./ex01-matlab/log
1536 ./ex01-matlab
768 ./ex04-python
512 ./ex02-python
768 ./ex03-python
5632    .
-
+```
This shows you first the aggregated size of all subdirectories, and
finally the total size of the current directory "." (this includes files
@@ -741,28 +683,31 @@ stored in the current directory).
If you also want this size to be "human-readable" (and not always the
total number of kilobytes), you add the parameter "-h":
-$ du -h
+```
+$ du -h
256K ./ex01-matlab/log
1.5M ./ex01-matlab
768K ./ex04-python
512K ./ex02-python
768K ./ex03-python
5.5M .
-
+```
If the number of lower level subdirectories starts to grow too big, you
may not want to see the information at that depth; you could just ask
for a summary of the current directory:
-$ du -s
+```
+$ du -s
5632 .
-$ du -s -h
-
+$ du -s -h
+```
If you want to see the size of any file or top-level subdirectory in the
current directory, you could use the following command:
-$ du -h --max-depth 1
+```
+$ du -h --max-depth 1
1.5M ./ex01-matlab
512K ./ex02-python
768K ./ex03-python
@@ -770,7 +715,7 @@ current directory, you could use the following command:
256K ./example.sh
1.5M ./intro-HPC.pdf
700M ./.cache
-
+```
Finally, if you don't want to know the size of the data in your current
directory, but in some other directory (e.g., your data directory), you
@@ -778,13 +723,14 @@ just pass this directory as a parameter. The command below will show the
disk use in your home directory, even if you are currently in a
different directory:
-$ du -h --max-depth 1 $VSC_HOME
+```
+$ du -h --max-depth 1 $VSC_HOME
22M {{ homedir }}/dataset01
36M {{ homedir }}/dataset02
22M {{ homedir }}/dataset03
3.5M {{ homedir }}/primes.txt
24M {{ homedir }}/.cache
-
+```
{% if site == gent %}
{% else %}
@@ -796,8 +742,9 @@ listing of files.
Try:
-$ tree -s -d
-
+```
+$ tree -s -d
+```
However, we urge you to only use the `du` and `tree` commands when you
really need them as they can put a heavy strain on the file system and
@@ -816,8 +763,9 @@ infrastructure.
To change the group of a directory and its underlying directories and
files, you can use:
-$ chgrp -R groupname directory
-
+```
+chgrp -R groupname directory
+```
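+
+To verify that the group was indeed changed, you can list the directory itself (a quick sketch using the same placeholder names):
+
+```
+ls -ld directory
+```
+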
### Joining an existing group
@@ -864,9 +812,10 @@ You can get details about the current state of groups on the HPC
infrastructure with the following command (`example` is the name of the
group we want to inspect):
-$ getent group example
+```
+$ getent group example
example:*:1234567:vsc40001,vsc40002,vsc40003
-
+```
We can see that the VSC id number is 1234567 and that there are three
members in the group: `vsc40001`, `vsc40002` and `vsc40003`.
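+
+To see which groups your own account belongs to, you can use the standard `groups` command; a quick sketch:
+
+```
+# list the groups your account is a member of
+groups
+```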
diff --git a/mkdocs/docs/HPC/sites/antwerpen/available-modules.md b/mkdocs/docs/HPC/sites/antwerpen/available-modules.md
index 474d3ab5d1b..71c8382b85c 100644
--- a/mkdocs/docs/HPC/sites/antwerpen/available-modules.md
+++ b/mkdocs/docs/HPC/sites/antwerpen/available-modules.md
@@ -1,4 +1,5 @@
-$ module av 2>&1 | more
+```
+$ module av 2>&1 | more
------------- /apps/antwerpen/modules/hopper/2015a/all ------------
ABINIT/7.10.2-intel-2015a
ADF/2014.05
@@ -9,16 +10,17 @@ Boost/1.57.0-intel-2015a-Python-2.7.9
bzip2/1.0.6-foss-2015a
bzip2/1.0.6-intel-2015a
...
-
+```
Or, to check whether some specific software, compiler,
or application (e.g., LAMMPS) is installed on the {{hpc}}:
-$ module av 2>&1 | grep -i -e "LAMMPS"
+```
+$ module av 2>&1 | grep -i -e "LAMMPS"
LAMMPS/9Dec14-intel-2015a
LAMMPS/30Oct14-intel-2014a
LAMMPS/5Sep14-intel-2014a
-
+```
Since you may not know the capital letters in the module name, we
performed a case-insensitive search with the "-i" option.
diff --git a/mkdocs/docs/HPC/sites/gent/available-modules.md b/mkdocs/docs/HPC/sites/gent/available-modules.md
index 13956ab2346..fc246514453 100644
--- a/mkdocs/docs/HPC/sites/gent/available-modules.md
+++ b/mkdocs/docs/HPC/sites/gent/available-modules.md
@@ -1,4 +1,5 @@
-$ module av | more
+```
+$ module avail
--- /apps/gent/RHEL8/zen2-ib/modules/all ---
ABAQUS/2021-hotfix-2132
ABAQUS/2022-hotfix-2214
@@ -6,16 +7,17 @@
ABAQUS/2023
ABAQUS/2024-hotfix-2405 (D)
...
-
+```
Or, to check whether some specific software, compiler, or
application (e.g., MATLAB) is installed on the {{hpc}}:
-$ module av matlab
+```
+$ module avail matlab
--- /apps/gent/RHEL8/zen2-ib/modules/all ---
LIBSVM-MATLAB/3.30-GCCcore-11.3.0-MATLAB-2022b-r5
MATLAB/2019b
MATLAB/2021b
MATLAB/2022b-r5 (D)
SPM/12.5_r7771-MATLAB-2021b
-
+```
diff --git a/mkdocs/docs/HPC/torque_options.md b/mkdocs/docs/HPC/torque_options.md
index 55c6e0709f5..cc7da6c4812 100644
--- a/mkdocs/docs/HPC/torque_options.md
+++ b/mkdocs/docs/HPC/torque_options.md
@@ -4,20 +4,20 @@
Below is a list of the most common and useful directives.
-| Option | System type | Description|
-|:------:|:-----------:|:----------|
-| -k | All | Send "stdout" and/or "stderr" to your home directory when the job runs
**#PBS -k o** or **#PBS -k e** or **#PBS -koe** |
-| -l | All | Precedes a resource request, e.g., processors, wallclock |
-| -M | All | Send an e-mail messages to an alternative e-mail address
**#PBS -M me@mymail.be** |
-| -m | All | Send an e-mail address when a job **b**egins execution and/or **e**nds or **a**borts
**#PBS -m b** or **#PBS -m be** or **#PBS -m ba** |
-| mem | Shared Memory | Memory & Specifies the amount of memory you need for a job.
**#PBS -I mem=90gb** |
-| mpiproces | Clusters | Number of processes per node on a cluster. This should equal number of processors on a node in most cases.
**#PBS -l mpiprocs=4** |
-| -N | All | Give your job a unique name
**#PBS -N galaxies1234** |
-| -ncpus | Shared Memory | The number of processors to use for a shared memory job.
**#PBS ncpus=4** |
-| -r | All | ontrol whether or not jobs should automatically re-run from the start if the system crashes or is rebooted. Users with check points might not wish this to happen.
**#PBS -r n**
**#PBS -r y** |
-| select | Clusters | Number of compute nodes to use. Usually combined with the mpiprocs directive
**#PBS -l select=2**|
-| -V | All | Make sure that the environment in which the job **runs** is the same as the environment in which it was **submitted
#PBS -V**
-| Walltime | All | The maximum time a job can run before being stopped. If not used a default of a few minutes is used. Use this flag to prevent jobs that go bad running for hundreds of hours. Format is HH:MM:SS
**#PBS -l walltime=12:00:00** |
+| Option | System type | Description |
+|:---------:|:-------------:|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| -k | All | Send "stdout" and/or "stderr" to your home directory when the job runs
**#PBS -k o** or **#PBS -k e** or **#PBS -koe** |
+| -l | All | Precedes a resource request, e.g., processors, wallclock |
+| -M | All | Send an e-mail messages to an alternative e-mail address
**#PBS -M me@mymail.be** |
+| -m | All | Send an e-mail address when a job **b**egins execution and/or **e**nds or **a**borts
**#PBS -m b** or **#PBS -m be** or **#PBS -m ba** |
+| mem       | Shared Memory | Specifies the amount of memory you need for a job. 
 **#PBS -l mem=90gb** |
+| mpiproces | Clusters | Number of processes per node on a cluster. This should equal number of processors on a node in most cases.
**#PBS -l mpiprocs=4** |
+| -N | All | Give your job a unique name
**#PBS -N galaxies1234** |
+| -ncpus | Shared Memory | The number of processors to use for a shared memory job.
**#PBS ncpus=4** |
+| -r        | All           | Control whether or not jobs should automatically re-run from the start if the system crashes or is rebooted. Users with check points might not wish this to happen. 
**#PBS -r n**
**#PBS -r y** |
+| select | Clusters | Number of compute nodes to use. Usually combined with the mpiprocs directive
**#PBS -l select=2** |
+| -V        | All           | Make sure that the environment in which the job **runs** is the same as the environment in which it was **submitted** 
 **#PBS -V** |
+| Walltime | All | The maximum time a job can run before being stopped. If not used a default of a few minutes is used. Use this flag to prevent jobs that go bad running for hundreds of hours. Format is HH:MM:SS
**#PBS -l walltime=12:00:00** |
## Environment Variables in Batch Job Scripts
@@ -55,25 +55,25 @@ When a batch job is started, a number of environment variables are
created that can be used in the batch job script. A few of the most
commonly used variables are described here.
-| Variable | Description |
-|:--------:|:-----------|
-| PBS_ENVIRONMENT | set to PBS_BATCH to indicate that the job is a batch job; otherwise, set to PBS_INTERACTIVE to indicate that the job is a PBS interactive job. |
-| PBS_JOBID | the job identifier assigned to the job by the batch system. This is the same number you see when you do *qstat*. |
-| PBS_JOBNAME | the job name supplied by the user |
-| PBS_NODEFILE | the name of the file that contains the list of the nodes assigned to the job . Useful for Parallel jobs if you want to refer the node, count the node etc. |
-| PBS_QUEUE | the name of the queue from which the job is executed
-| PBS_O_HOME | value of the HOME variable in the environment in which *qsub* was executed |
-| PBS_O_LANG | value of the LANG variable in the environment in which *qsub* was executed |
-| PBS_O_LOGNAME | value of the LOGNAME variable in the environment in which *qsub* was executed |
-| PBS_O_PATH | value of the PATH variable in the environment in which *qsub* was executed |
-| PBS_O_MAIL | value of the MAIL variable in the environment in which *qsub* was executed |
-| PBS_O_SHELL | value of the SHELL variable in the environment in which *qsub* was executed |
-| PBS_O_TZ | value of the TZ variable in the environment in which *qsub* was executed |
-| PBS_O_HOST | the name of the host upon which the *qsub* command is running |
-| PBS_O_QUEUE | the name of the original queue to which the job was submitted |
-| PBS_O_WORKDIR | the absolute path of the current working directory of the *qsub* command. This is the most useful. Use it in every job script. The first thing you do is, cd $PBS_O_WORKDIR after defining the resource list. This is because, pbs throw you to your $HOME directory. |
-| PBS_VERSION | Version Number of TORQUE, e.g., TORQUE-2.5.1 |
-| PBS_MOMPORT | active port for mom daemon |
-| PBS_TASKNUM | number of tasks requested |
-| PBS_JOBCOOKIE | job cookie |
-| PBS_SERVER | Server Running TORQUE |
+| Variable | Description |
+|:---------------:|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| PBS_ENVIRONMENT | set to PBS_BATCH to indicate that the job is a batch job; otherwise, set to PBS_INTERACTIVE to indicate that the job is a PBS interactive job. |
+| PBS_JOBID | the job identifier assigned to the job by the batch system. This is the same number you see when you do *qstat*. |
+| PBS_JOBNAME | the job name supplied by the user |
+| PBS_NODEFILE | the name of the file that contains the list of the nodes assigned to the job . Useful for Parallel jobs if you want to refer the node, count the node etc. |
+| PBS_QUEUE | the name of the queue from which the job is executed |
+| PBS_O_HOME | value of the HOME variable in the environment in which *qsub* was executed |
+| PBS_O_LANG | value of the LANG variable in the environment in which *qsub* was executed |
+| PBS_O_LOGNAME | value of the LOGNAME variable in the environment in which *qsub* was executed |
+| PBS_O_PATH | value of the PATH variable in the environment in which *qsub* was executed |
+| PBS_O_MAIL | value of the MAIL variable in the environment in which *qsub* was executed |
+| PBS_O_SHELL | value of the SHELL variable in the environment in which *qsub* was executed |
+| PBS_O_TZ | value of the TZ variable in the environment in which *qsub* was executed |
+| PBS_O_HOST | the name of the host upon which the *qsub* command is running |
+| PBS_O_QUEUE | the name of the original queue to which the job was submitted |
+| PBS_O_WORKDIR | the absolute path of the current working directory of the *qsub* command. This is the most useful. Use it in every job script. The first thing you do is, cd $PBS_O_WORKDIR after defining the resource list. This is because, pbs throw you to your $HOME directory. |
+| PBS_VERSION | Version Number of TORQUE, e.g., TORQUE-2.5.1 |
+| PBS_MOMPORT | active port for mom daemon |
+| PBS_TASKNUM | number of tasks requested |
+| PBS_JOBCOOKIE | job cookie |
+| PBS_SERVER | Server Running TORQUE |
diff --git a/mkdocs/docs/HPC/troubleshooting.md b/mkdocs/docs/HPC/troubleshooting.md
index 462cb839ccf..fd65e6ecad8 100644
--- a/mkdocs/docs/HPC/troubleshooting.md
+++ b/mkdocs/docs/HPC/troubleshooting.md
@@ -98,8 +98,9 @@ and thus requesting multiple cores and/or nodes will only result in wasted resou
If you get from your job output an error message similar to this:
-=>> PBS: job killed: walltime <value in seconds> exceeded limit <value in seconds>
-
+```
+=>> PBS: job killed: walltime <value in seconds> exceeded limit <value in seconds>
+```
This occurs when your job did not complete within the requested
walltime. See
@@ -119,7 +120,6 @@ option is to request extra quota for your VO to the VO moderator/s. See
section on [Pre-defined user directories](../running_jobs_with_input_output_data/#pre-defined-user-directories) and [Pre-defined quotas](../running_jobs_with_input_output_data/#pre-defined-quotas) for more information about quotas
and how to use the storage endpoints in an efficient way.
{% endif %}
-
## Issues connecting to login node { #sec:connecting-issues}
@@ -128,8 +128,9 @@ the key/lock analogy in [How do SSH keys work?](../account/#how-do-ssh-keys-work
If you have errors that look like:
-{{ userid }}@{{ loginnode }}: Permission denied
-
+```
+{{ userid }}@{{ loginnode }}: Permission denied
+```
or you are experiencing problems with connecting, here is a list of
things to do that should help:
@@ -150,12 +151,13 @@ things to do that should help:
2. Use `ssh-add` (see section [Using an SSH agent](../account/#using-an-ssh-agent-optional)) *OR;*
3. Specify the location of the key in `$HOME/.ssh/config`. You will
need to replace the VSC login id in the `User` field with your own:
- Host {{ hpcname }}
+ ```
+ Host {{ hpcname }}
Hostname {{ loginnode }}
- IdentityFile /path/to/private/key
- User {{ userid }}
-
- Now you can just connect with ssh {{ hpcname }}.
+ IdentityFile /path/to/private/key
+ User {{ userid }}
+ ```
+ Now you can connect with `ssh {{ hpcname }}`.
{% endif %}
4. Please double/triple check your VSC login ID. It should look
@@ -193,8 +195,9 @@ things to do that should help:
{% if OS == windows %}
If you are using PuTTY and get this error message:
-server unexpectedly closed network connection
-
+```
+server unexpectedly closed network connection
+```
it is possible that the PuTTY version you are using is too old and
doesn't support some required (security-related) features.
@@ -217,49 +220,35 @@ and include it in the email.
2. Single click on the saved configuration
-
![image](img/831change01.png)
-
3. Then click ++"Load"++ button
-
![image](img/831change02.png)
-
4. Expand SSH category (on the left panel) clicking on the "+" next
to SSH
-
![image](img/831change03.png)
-
5. Click on Auth under the SSH category
-
![image](img/831change04.png)
-
6. On the right panel, click ++"Browse"++ button
-
![image](img/831change05.png)
-
7. Then search your private key on your computer (with the extension
".ppk")
8. Go back to the top of category, and click Session
-
![image](img/831change06.png)
-
9. On the right panel, click on ++"Save"++ button
-
![image](img/831change07.png)
-
### Check whether your private key in PuTTY matches the public key on the accountpage
@@ -269,28 +258,20 @@ Follow the instructions in [Change PuTTY private key for a saved configuration](
then select all text (push ++"Ctrl"++ + ++"a"++ ), then copy the location of the
private key (push ++"Ctrl"++ + ++"c"++)
-
![image](img/832check05.png)
-
2. Open PuTTYgen
-
![image](img/832check06.png)
-
3. Enter menu item "File" and select "Load Private key"
-
![image](img/832check07.png)
-
4. On the "Load private key" popup, click in the textbox next to
"File name:", then paste the location of your private key (push ++"Ctrl"++ + ++"v"++), then click ++"Open"++
-
![image](img/832check08.png)
-
5. Make sure that your Public key from the "Public key for pasting
into OpenSSH authorized_keys file" textbox is in your "Public
@@ -298,15 +279,14 @@ Follow the instructions in [Change PuTTY private key for a saved configuration](
(Scroll down to the bottom of "View Account" tab, you will find
there the "Public keys" section)
-
![image](img/832check09.png)
-
{% else %}
Please add `-vvv` as a flag to `ssh` like:
-ssh -vvv {{ userid }}@{{ loginnode }}
-
+```
+ssh -vvv {{ userid }}@{{ loginnode }}
+```
and include the output of that command in the message.
{% endif %}
@@ -320,7 +300,8 @@ system you are connecting to has changed.
{% if OS == (linux or macos) %}
-@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
+```
+@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
@@ -331,10 +312,10 @@ The fingerprint for the ECDSA key sent by the remote host is
SHA256:1MNKFTfl1T9sm6tTWAo4sn7zyEfiWFLKbk/mlT+7S5s.
Please contact your system administrator.
Add correct host key in ~/.ssh/known_hosts to get rid of this message.
-Offending ECDSA key in ~/.ssh/known_hosts:21
+Offending ECDSA key in ~/.ssh/known_hosts:21
ECDSA host key for {{ loginnode }} has changed and you have requested strict checking.
Host key verification failed.
-
+```
You will need to remove the line it's complaining about (in the example,
line 21). To do that, open `~/.ssh/known_hosts` in an editor, and remove the
@@ -344,8 +325,9 @@ to.
Alternatively you can use the command that might be shown by the warning under
`remove with:` and it should be something like this:
-ssh-keygen -f "~/.ssh/known_hosts" -R "{{loginnode}}"
-
+```
+ssh-keygen -f "~/.ssh/known_hosts" -R "{{loginnode}}"
+```
If the command is not shown, take the file from the "Offending ECDSA key in",
and the host name from "ECDSA host key for" lines.
@@ -356,8 +338,9 @@ After you've done that, you'll need to connect to the {{ hpc }} again. See [Warn
You will need to verify that the fingerprint shown in the dialog matches
one of the following fingerprints:
-{{ puttyFirstConnect }}
-
+```
+{{ puttyFirstConnect }}
+```
**Do not click "Yes" until you have verified the fingerprint. Do not press "No" in any case.**
@@ -367,9 +350,7 @@ If it doesn't (like in the example) or you are in doubt, take a screenshot, pres
{% include "../macros/sshedfingerprintnote.md" %}
-
![image](img/putty_security_alert.jpg)
-
{% if site == gent %}
If you use X2Go client, you might get one of the following fingerprints:
@@ -389,14 +370,16 @@ If it doesn't, or you are in doubt, take a screenshot, press "Yes" and contact {
If you get errors like:
-qsub fibo.pbs
+```
+$ qsub fibo.pbs
qsub: script is written in DOS/Windows text format
-
+```
or
-sbatch: error: Batch script contains DOS line breaks (\r\n)
-
+```
+sbatch: error: Batch script contains DOS line breaks (\r\n)
+```
It's probably because you transferred the files from a Windows computer.
See the [section about `dos2unix` in Linux tutorial](../linux-tutorial/uploading_files/#dos2unix) to fix this error.
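As a quick sketch (see the linked tutorial section for the full explanation), converting the job script in place typically looks like this, assuming it is called `fibo.pbs` as in the example above:
```
dos2unix fibo.pbs
```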
@@ -405,17 +388,20 @@ See the [section about `dos2unix` in Linux tutorial](../linux-tutorial/uploading
{% if OS == (linux or macos) %}
-ssh {{userid}}@{{loginnode}}
-The authenticity of host {{loginnode}} (<IP-adress>) can't be established.
-<algorithm> key fingerprint is <hash>
+```
+$ ssh {{userid}}@{{loginnode}}
+The authenticity of host {{loginnode}} (<IP address>) can't be established.
+<algorithm> key fingerprint is <hash>
Are you sure you want to continue connecting (yes/no)?
-
+```
Now you can check the authenticity by verifying that the key fingerprint
shown in that message (on the `<algorithm> key fingerprint is <hash>` line)
matches one of the following lines:
-{{opensshFirstConnect}}
+```
+{{opensshFirstConnect}}
+```
{% endif %}
{% if site == gent %}
@@ -473,8 +459,6 @@ you via the `ulimit -v` command *in your job script*.
See [Generic resource requirements](../running_batch_jobs/#generic-resource-requirements) to set memory and other requirements, see [Specifying memory requirements](../fine_tuning_job_specifications/#specifying-memory-requirements) to finetune the amount of
memory you request.
-
{% if site == gent %}
## Module conflicts
@@ -496,7 +480,6 @@ While processing the following module(s):
Module fullname Module Filename
--------------- ---------------
HMMER/3.1b2-intel-2017a /apps/gent/CO7/haswell-ib/modules/all/HMMER/3.1b2-intel-2017a.lua
-
```
This resulted in an error because we tried to load two modules with different
@@ -539,11 +522,12 @@ As a rule of thumb, toolchains in the same row are compatible with each other:
Another common error is:
-$ module load cluster/{{othercluster}}
+```
+$ module load cluster/{{othercluster}}
Lmod has detected the following error: A different version of the 'cluster' module is already loaded (see output of 'ml').
If you don't understand the warning or error, contact the helpdesk at hpc@ugent.be
-
+```
This is because there can only be one `cluster` module active at a time.
The correct command is `module swap cluster/{{othercluster}}`. See also [Specifying the cluster on which to run](../running_batch_jobs/#specifying-the-cluster-on-which-to-run).
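For example (a sketch; the second command is only there to confirm which `cluster` module is active afterwards):
```
$ module swap cluster/{{othercluster}}
$ module list
```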
@@ -556,22 +540,24 @@ The correct command is `module swap cluster/{{othercluster}}`. See also [Specify
When running software provided through modules (see [Modules](../running_batch_jobs/#modules)), you may run into
errors like:
-$ module swap cluster/donphan
+```
+$ module swap cluster/donphan
The following have been reloaded with a version change:
1) cluster/doduo => cluster/donphan 3) env/software/doduo => env/software/donphan
2) env/slurm/doduo => env/slurm/donphan 4) env/vsc/doduo => env/vsc/donphan
-$ module load Python/3.10.8-GCCcore-12.2.0
-$ python
+$ module load Python/3.10.8-GCCcore-12.2.0
+$ python
Please verify that both the operating system and the processor support
Intel(R) MOVBE, F16C, FMA, BMI, LZCNT and AVX2 instructions.
-
+```
or errors like:
-$ python
+```
+$ python
Illegal instruction
-
+```
When we swap to a different cluster, the available modules change so
they work for that cluster. That means that if the cluster and the login
@@ -587,8 +573,9 @@ all our modules will get reloaded. This means that all current modules
will be unloaded and then loaded again, so they'll work on the newly
loaded cluster. Here's an example of what that would look like:
-$ module load Python/3.10.8-GCCcore-12.2.0
-$ module swap cluster/donphan
+```
+$ module load Python/3.10.8-GCCcore-12.2.0
+$ module swap cluster/donphan
Due to MODULEPATH changes, the following have been reloaded:
1) GCCcore/12.2.0 8) binutils/2.39-GCCcore-12.2.0
@@ -602,7 +589,7 @@ Due to MODULEPATH changes, the following have been reloaded:
The following have been reloaded with a version change:
1) cluster/doduo => cluster/donphan 3) env/software/doduo => env/software/donphan
2) env/slurm/doduo => env/slurm/donphan 4) env/vsc/doduo => env/vsc/donphan
-
+```
This might result in the same problems as mentioned above. When swapping
to a different cluster, you can run `module purge` to unload all modules
@@ -613,9 +600,10 @@ to avoid problems (see [Purging all modules](../running_batch_jobs/#purging-all-
When using a tool that is made available via modules to submit jobs, for example [Worker](multi_job_submission.md),
you may run into the following error when targeting a non-default cluster:
-$ wsub
-/apps/gent/.../.../software/worker/.../bin/wsub: line 27: 2152510 Illegal instruction (core dumped) ${PERL} ${DIR}/../lib/wsub.pl "$@"
-
+```
+$ wsub
+/apps/gent/.../.../software/worker/.../bin/wsub: line 27: 2152510 Illegal instruction (core dumped) ${PERL} ${DIR}/../lib/wsub.pl "$@"
+```
When executing the `module swap cluster` command, you are not only changing your session environment to submit
to that specific cluster, but also switching to the part of the central software stack that is specific to that cluster.
@@ -630,9 +618,13 @@ The same goes for the other clusters as well of course.
!!! Tip
To submit a Worker job to a specific cluster, like the [`donphan` interactive cluster](interactive_debug.md) for instance, use:
- $ module swap env/slurm/donphan
+ ```
+ $ module swap env/slurm/donphan
+ ```
instead of
- $ module swap cluster/donphan
+ ```
+ $ module swap cluster/donphan
+ ```
We recommend using a `module swap cluster` command after submitting the jobs.
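A possible workflow, as a sketch (the `wsub` options and file names are hypothetical; check the Worker documentation for the exact invocation):
```
# point job submission at the donphan cluster, leaving the software stack alone
$ module swap env/slurm/donphan
# submit the Worker job (hypothetical job script and data file)
$ wsub -batch job.pbs -data parameters.csv
# afterwards, swap the cluster module, as recommended above
$ module swap cluster/donphan
```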
diff --git a/mkdocs/docs/HPC/useful_linux_commands.md b/mkdocs/docs/HPC/useful_linux_commands.md
index 258dfa70d25..afaa87575e9 100644
--- a/mkdocs/docs/HPC/useful_linux_commands.md
+++ b/mkdocs/docs/HPC/useful_linux_commands.md
@@ -6,73 +6,32 @@ All the {{hpc}} clusters run some variant of the "{{operatingsystembase}}" opera
that, when you connect to one of them, you get a command line interface,
which looks something like this:
-{{userid}}@ln01[203] $
-
+```
+{{userid}}@ln01[203] $
+```
When you see this, we also say you are inside a "shell". The shell will
accept your commands, and execute them.
-
-
-
- ls
- |
-
- Shows you a list of files in the current directory
- |
-
-
-
- cd
- |
-
- Change current working directory
- |
-
-
-
- rm
- |
-
- Remove file or directory
- |
-
-{% if site == gent %}
-
-
- nano
- |
-
- Text editor
- |
-
-{% else %}
-
-
- joe
- |
-
- Text editor
- |
-
+| Command | Description |
+|---------|----------------------------------------------------|
+| `ls` | Shows you a list of files in the current directory |
+| `cd` | Change current working directory |
+| `rm` | Remove file or directory |
+| `echo` | Prints its parameters to the screen |
+{% if site == gent %} | `nano` | Text editor |
+{% else %} | `joe` | Text editor |
{% endif %}
-
-
- echo
- |
-
- Prints its parameters to the screen
- |
-
-
+
Most commands will accept or even need parameters, which are placed
after the command, separated by spaces. A simple example with the "echo"
command:
-$ echo This is a test
+```
+$ echo This is a test
This is a test
-
+```
Important here is the "$" sign in front of the first line. This should
not be typed, but is a convention meaning "the rest of this line should
@@ -84,10 +43,11 @@ explained then if necessary. If not, you can usually get more
information about a command, say the item or command "ls", by trying
either of the following:
-$ ls --help
-$ man ls
-$ info ls
-
+```
+$ ls --help
+$ man ls
+$ info ls
+```
(You can exit the last two "manuals" by using the "q" key.) For more
exhaustive tutorials about Linux usage, please refer to the following
@@ -125,38 +85,43 @@ hostname
You can type both lines at your shell prompt, and the result will be the
following:
-$ echo "Hello! This is my hostname:"
+```
+$ echo "Hello! This is my hostname:"
Hello! This is my hostname:
-$ hostname
+$ hostname
{{loginhost}}
-
+```
Suppose we want to call this script "foo". You open a new file for
editing, name it "foo", and edit it with your favourite editor
{% if site == gent %}
-$ nano foo
-
+```
+nano foo
+```
{% else %}
-$ vi foo
-
+```
+vi foo
+```
{% endif %}
or use the following commands:
-$ echo "echo Hello! This is my hostname:" > foo
-$ echo hostname >> foo
-
+```
+echo "echo Hello! This is my hostname:" > foo
+echo hostname >> foo
+```
The easiest way to run a script is to start the interpreter and pass
the script to it as a parameter. In the case of our script, the interpreter may
either be "sh" or "bash" (which are the same on the cluster). So start
the script:
-$ bash foo
+```
+$ bash foo
Hello! This is my hostname:
{{loginhost}}
-
+```
Congratulations, you just created and started your first shell script!
@@ -171,9 +136,10 @@ the following line on top of your shell script
You can find this path with the "which" command. In our case, since we
use bash as an interpreter, we get the following path:
-$ which bash
+```
+$ which bash
/bin/bash
-
+```
We edit our script and change it with this information:
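The edited script then looks something like this (a sketch, based on the `which bash` output above):
```
#!/bin/bash
echo "Hello! This is my hostname:"
hostname
```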
@@ -188,15 +154,17 @@ script.
Finally, we tell the operating system that this script is now
executable. For this we change its file attributes:
-$ chmod +x foo
-
+```
+chmod +x foo
+```
Now you can start your script by simply executing it:
-$ ./foo
+```
+$ ./foo
Hello! This is my hostname:
{{loginhost}}
-
+```
The same technique can be used for all other scripting languages, like
Perl and Python.
@@ -209,400 +177,99 @@ not ignore these lines, you may get strange results ...
### Archive Commands
-
-
-
- tar
- |
-
- An archiving program designed to store and extract files from an archive known as a tar file.
- |
-
-
-
- tar -cvf foo.tar foo/
- |
-
- compress the contents of foo folder to foo.tar
- |
-
-
-
- tar -xvf foo.tar
- |
-
- extract foo.tar
- |
-
-
-
- tar -xvzf foo.tar.gz
- |
-
- extract gzipped foo.tar.gz
- |
-
-
+| Command | Description |
+|-------------------------|-----------------------------------------------------------------------------------------------|
+| `tar` | An archiving program designed to store and extract files from an archive known as a tar file. |
+| `tar -cvf foo.tar foo/` | Archive the contents of the `foo` folder into `foo.tar` |
+| `tar -xvf foo.tar` | Extract `foo.tar` |
+| `tar -xvzf foo.tar.gz` | Extract gzipped `foo.tar.gz` |
+
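+Not listed in the table, but often useful: creating a gzipped archive in one go (a sketch):
+
+```
+$ tar -czvf foo.tar.gz foo/
+```
+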
### Basic Commands
-
-
-
- ls
- |
-
- Shows you a list of files in the current directory
- |
-
-
-
- cd
- |
-
- Change the current directory
- |
-
-
-
- rm
- |
-
- Remove file or directory
- |
-
-
-
- mv
- |
-
- Move file or directory
- |
-
-
-
- echo
- |
-
- Display a line or text
- |
-
-
-
- pwd
- |
-
- Print working directory
- |
-
-
-
- mkdir
- |
-
- Create directories
- |
-
-
-
- rmdir
- |
-
- Remove directories
- |
-
-
+| Command | Description |
+|---------|----------------------------------------------------|
+| `ls` | Shows you a list of files in the current directory |
+| `cd` | Change the current directory |
+| `rm` | Remove file or directory |
+| `mv` | Move file or directory |
+| `echo` | Display a line or text |
+| `pwd` | Print working directory |
+| `mkdir` | Create directories |
+| `rmdir` | Remove directories |
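+
+A short session using a few of these commands might look like this (a sketch; the directory name and the path in the output are placeholders):
+
+```
+$ mkdir test_dir
+$ cd test_dir
+$ pwd
+/path/to/your/home/test_dir
+$ cd ..
+$ rmdir test_dir
+```
+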
### Editor
-
-
-
- emacs
- |
-
-
- |
-
-
-
- nano
- |
-
- Nano's ANOther editor, an enhanced free Pico clone
- |
-
-
-
- vi
- |
-
- A programmers text editor
- |
-
-
+| Command | Description |
+|---------|----------------------------------------------------|
+| `emacs` | GNU Emacs, an extensible text editor               |
+| `nano` | Nano's ANOther editor, an enhanced free Pico clone |
+| `vi` | A programmer's text editor |
+
### File Commands
-
-
-
- cat
- |
-
- Read one or more files and print them to standard output
- |
-
-
-
- cmp
- |
-
- Compare two files byte by byte
- |
-
-
-
- cp
- |
-
- Copy files from a source to the same or different target(s)
- |
-
-
-
- du
- |
-
- Estimate disk usage of each file and recursively for directories
- |
-
-
-
- find
- |
-
- Search for files in directory hierarchy
- |
-
-
-
- grep
- |
-
- Print lines matching a pattern
- |
-
-
-
- ls
- |
-
- List directory contents
- |
-
-
-
- mv
- |
-
- Move file to different targets
- |
-
-
-
- rm
- |
-
- Remove files
- |
-
-
-
- sort
- |
-
- Sort lines of text files
- |
-
-
-
- wc
- |
-
- Print the number of new lines, words, and bytes in files
- |
-
-
+| Command | Description |
+|---------|------------------------------------------------------------------|
+| `cat` | Read one or more files and print them to standard output |
+| `cmp` | Compare two files byte by byte |
+| `cp` | Copy files from a source to the same or different target(s) |
+| `du` | Estimate disk usage of each file and recursively for directories |
+| `find` | Search for files in directory hierarchy |
+| `grep` | Print lines matching a pattern |
+| `ls` | List directory contents |
+| `mv`    | Move or rename files and directories                              |
+| `rm` | Remove files |
+| `sort` | Sort lines of text files |
+| `wc` | Print the number of new lines, words, and bytes in files |
+
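+These commands are often combined with pipes; for example (a sketch, `output.log` is a hypothetical file):
+
+```
+# count the .txt files below the current directory
+$ find . -name "*.txt" | wc -l
+# show the lines containing "error" in a log file, sorted
+$ grep -i error output.log | sort
+# summarise the disk usage of the current directory
+$ du -sh .
+```
+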
### Help Commands
-
-
-
- man
- |
-
- Displays the manual page of a command with its name, synopsis, description, author, copyright etc.
- |
-
-
+| Command | Description |
+|---------|-----------------------------------------------------------------------------------------------------|
+| `man` | Displays the manual page of a command with its name, synopsis, description, author, copyright, etc. |
+
### Network Commands
-
-
-
- hostname
- |
-
- show or set the system's host name
- |
-
-
-
- ifconfig
- |
-
- Display the current configuration of the network interface. It is also useful to get the information about IP address, subnet mask, set remote IP address, netmask etc.
- |
-
-
-
- ping
- |
-
- send ICMP ECHO_REQUEST to network hosts, you will get back ICMP packet if the host responds. This command is useful when you are in a doubt whether your computer is connected or not.
- |
-
-
+| Command | Description |
+|------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `hostname` | Show or set the system's host name |
+| `ifconfig` | Display the current configuration of the network interface. It is also useful to get the information about IP address, subnet mask, set remote IP address, netmask, etc. |
+| `ping` | Send ICMP ECHO_REQUEST to network hosts. You will get back an ICMP packet if the host responds. This command is useful to check whether your computer is connected or not. |
+
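+For example, showing your own hostname and sending a few pings to a host (a sketch; whether a host answers pings depends on its firewall):
+
+```
+$ hostname
+$ ping -c 3 {{ loginnode }}
+```
+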
### Other Commands
-
-
-
- logname
- |
-
- Print user's login name
- |
-
-
-
- quota
- |
-
- Display disk usage and limits
- |
-
-
-
- which
- |
-
- Returns the pathnames of the files that would be executed in the current environment
- |
-
-
-
- whoami
- |
-
- Displays the login name of the current effective user
- |
-
-
+| Command | Description |
+|-----------|--------------------------------------------------------------------------------------|
+| `logname` | Print user's login name |
+| `quota` | Display disk usage and limits |
+| `which` | Returns the pathnames of the files that would be executed in the current environment |
+| `whoami` | Displays the login name of the current effective user |
+
### Process Commands
-
-
-
- &
- |
-
- In order to execute a command in the background, place an ampersand (&) on the command line at the end of the command. A user job number (placed in brackets) and a system process number are displayed. A system process number is the number by which the system identifies the job whereas a user job number is the number by which the user identifies the job
- |
-
-
-
- at
- |
-
- executes commands at a specified time
- |
-
-
-
- bg
- |
-
- Places a suspended job in the background
- |
-
-
-
- crontab
- |
-
- crontab is a file which contains the schedule of entries to run at specified times
- |
-
-
-
- fg
- |
-
- A process running in the background will be processed in the foreground
- |
-
-
-
- jobs
- |
-
- Lists the jobs being run in the background
- |
-
-
-
- kill
- |
-
- Cancels a job running in the background, it takes argument either the user job number or the system process number
- |
-
-
-
- ps
- |
-
- Reports a snapshot of the current processes
- |
-
-
-
- top
- |
-
- Display Linux tasks
- |
-
-
+| Command | Description |
+|-----------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `&` | In order to execute a command in the background, place an ampersand (`&`) at the end of the command line. A user job number (in brackets) and a system process number are displayed. The system process number identifies the job, while the user job number is used by the user. |
+| `at` | Executes commands at a specified time |
+| `bg` | Places a suspended job in the background |
+| `crontab` | A file which contains the schedule of entries to run at specified times |
+| `fg`      | Brings a background or suspended job to the foreground |
+| `jobs` | Lists the jobs being run in the background |
+| `kill` | Cancels a job running in the background; it takes either the user job number or the system process number as an argument |
+| `ps` | Reports a snapshot of the current processes |
+| `top` | Displays Linux tasks |
+
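+A typical job-control sequence with these commands could look like this (a sketch; the script name, job number and process ID are made up):
+
+```
+$ ./long_job.sh &
+[1] 12345
+$ jobs
+[1]+  Running                 ./long_job.sh &
+$ fg %1
+```
+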
### User Account Commands
-
-
-
- chmod
- |
-
- Modify properties for users
- |
-
-
\ No newline at end of file
+| Command | Description |
+|---------|------------------------------------|
+| `chmod` | Change the access permissions (mode) of files and directories |
diff --git a/mkdocs/docs/HPC/web_portal.md b/mkdocs/docs/HPC/web_portal.md
index 9c95a20f72d..2ea0b15bd78 100644
--- a/mkdocs/docs/HPC/web_portal.md
+++ b/mkdocs/docs/HPC/web_portal.md
@@ -12,11 +12,7 @@ required to connect to your VSC account via this web portal.\
Please note that we do recommend using our interactive and debug
cluster (see chapter [interactive and debug cluster](./interactive_debug.md)) with `OoD`.
-To connect to the HPC-UGent infrastructure via the web portal, visit:
-
-
-
-
+To connect to the HPC-UGent infrastructure via the web portal, visit
Note that you may only see a "*Submitting...*" message appear for a
couple of seconds, which is perfectly normal.
@@ -66,9 +62,7 @@ requested to let the web portal access some of your personal information
(VSC login ID, account status, login shell and institute name), as shown
in this screenshot below:
-
![image](img/ood_permission.png)
-
**Please click "Authorize" here.**
@@ -79,9 +73,7 @@ afterwards.
Once logged in, you should see this start page:
-
![image](img/ood_start.png)
-
This page includes a menu bar at the top, with buttons on the left
providing access to the different features supported by the web portal,
@@ -92,9 +84,7 @@ high-level overview of the HPC-UGent Tier-2 clusters.
If your browser window is too narrow, the menu is available at the top
right through the "hamburger" icon:
-
![image](img/ood_hamburger.png)
-
## Features
@@ -112,9 +102,7 @@ The drop-down menu provides short-cuts to the different `$VSC_*`
directories and filesystems you have access to. Selecting one of the
directories will open a new browser tab with the *File Explorer*:
-
![image](img/ood_file_explorer.png)
-
Here you can:
@@ -188,9 +176,7 @@ Jobs* menu item under *Jobs*.
A new browser tab will be opened that shows all your current queued
and/or running jobs:
-
![image](img/ood_active_jobs.png)
-
You can control which jobs are shown using the *Filter* input area, or
select a particular cluster from the drop-down menu *All Clusters*, both
@@ -213,9 +199,7 @@ To submit new jobs, you can use the *Job Composer* menu item under
*Jobs*. This will open a new browser tab providing an interface to
create new jobs:
-
![image](img/ood_job_composer.png)
-
This extensive interface allows you to create jobs from one of the
available templates, or by copying an existing job.
@@ -232,9 +216,7 @@ Don't forget to actually submit your job to the system via the green
In addition, you can inspect the provided job templates, copy them, or even
create your own templates via the *Templates* button at the top:
-
![image](img/ood_job_templates.png)
-
### Shell access
@@ -242,9 +224,7 @@ Through the *Shell Access* button that is available under the *Clusters*
menu item, you can easily open a terminal (shell) session into your VSC
account, straight from your browser!
-
![image](img/ood_shell.png)
-
Using this interface requires being familiar with a Linux shell
environment (see
@@ -263,9 +243,7 @@ terminal multiplexer tool like `screen` or `tmux`).
To create a graphical desktop environment, use one of the *desktop on... node* buttons under the *Interactive Apps* menu item. For example:
-
![image](img/ood_launch_desktop.png)
-
You can either start a desktop environment on a login node for some
lightweight tasks, or on a workernode of one of the HPC-UGent Tier-2
@@ -279,9 +257,7 @@ To access the desktop environment, click the *My Interactive Sessions*
menu item at the top, and then use the *Launch desktop on \... node*
button if the desktop session is *Running*:
-
![image](img/ood_desktop_running.png)
-
#### Jupyter notebook
@@ -289,48 +265,36 @@ Through the web portal you can easily start a [Jupyter
notebook](https://jupyter.org/) on a workernode, via the *Jupyter
Notebook* button under the *Interactive Apps* menu item.
-
![image](img/ood_start_jupyter.png)
-
After starting the Jupyter notebook using the *Launch* button, you will
see it being added in state *Queued* in the overview of interactive
sessions (see *My Interactive Sessions* menu item):
-
![image](img/ood_jupyter_queued.png)
-
When your job hosting the Jupyter notebook starts running, the status
will first change to *Starting*:
-
![image](img/ood_jupyter_starting.png)
-
and eventually the status will change to *Running*, and you will be able
to connect to the Jupyter environment using the blue *Connect to
Jupyter* button:
-
![image](img/ood_jupyter_running.png)
-
This will launch the Jupyter environment in a new browser tab, where you
can open an existing notebook by navigating to the directory where it
is located and clicking it, or by using the *New* menu in the top right:
-
![image](img/ood_jupyter_new_notebook.png)
-
Here's an example of a Jupyter notebook in action. Note that several
non-standard Python packages (like *numpy*, *scipy*, *pandas*,
*matplotlib*) are readily available:
-
![image](img/ood_jupyter_notebook_example.png)
-
## Restarting your web server in case of problems
@@ -340,9 +304,7 @@ web server running in your VSC account.
You can do this via the *Restart Web Server* button under the *Help*
menu item:
-
![image](img/ood_help_restart_web_server.png)
-
Of course, this only affects your own web portal session (not those of
others).
diff --git a/mkdocs/docs/HPC/x2go.md b/mkdocs/docs/HPC/x2go.md
index f2daea36ff9..eb9af197d15 100644
--- a/mkdocs/docs/HPC/x2go.md
+++ b/mkdocs/docs/HPC/x2go.md
@@ -54,9 +54,7 @@ There are two ways to connect to the login node:
This is the easier way to set up X2Go: a direct connection to the login
node.
-
![image](img/ch19-x2go-configuration-gent.png)
-
1. Include a session name. This will help you to identify the session
@@ -76,9 +74,7 @@ node.
1. Click on the "Use RSA/DSA.." folder icon. This will open a file
browser.
-
![image](img/ch19-x2go-ssh-key.png)
-
{% if OS == (macos or linux) %}
2. You should look for your **private** SSH key generated in [Generating a public/private key pair](../account/#generating-a-publicprivate-key-pair). This file has
been stored in the directory "*~/.ssh/*" (by default "**id_rsa**").
@@ -109,9 +105,7 @@ node.
copy-pasting support.
{% endif %}
-
![image](img/ch19-x2go-configuration-xterm.png)
-
1. **[optional]:** Change the session icon.
@@ -123,9 +117,7 @@ This option is useful if you want to resume a previous session or if you
want to explicitly set the login node to use. In this case you should
include a few more options. Use the same **Option A** setup but with these changes:
-
![image](img/ch19-x2go-configuration-gent-proxy.png)
-
1. Include a session name. This will help you to identify the session
if you have more than one (in our example "HPC UGent proxy login").
@@ -146,9 +138,7 @@ include a few more options. Use the same **Option A** setup but with these chang
did for the server configuration (The "RSA/DSA key" field must
be set in both sections)
-
![image](img/ch19-x2go-proxy-key.png)
-
4. Click the ++"OK"++ button after these changes.
@@ -161,9 +151,7 @@ open session or if you click on the "shutdown" button from X2Go. If you
want to suspend your session to continue working with it later just
click on the "pause" icon.
-
![image](img/ch19-x2go-pause.png)
-
X2Go will keep the session open for you (but only if the login node is
not rebooted).
@@ -175,8 +163,9 @@ session, you should know which login node were used at first place. You
can get this information before logging out from your X2Go session. Just
open a terminal and execute:
-$ hostname
-
+```
+hostname
+```
![image](img/ch19-x2go-xterm.png)
diff --git a/mkdocs/docs/HPC/xdmod.md b/mkdocs/docs/HPC/xdmod.md
index 725f3aa7996..4c510bae823 100644
--- a/mkdocs/docs/HPC/xdmod.md
+++ b/mkdocs/docs/HPC/xdmod.md
@@ -4,11 +4,7 @@ The XDMoD web portal provides information about completed jobs, storage
usage and the HPC UGent cloud infrastructure usage.
To connect to the XDMoD portal, turn on your VPN connection to UGent and
-visit
-
-
-
-
+visit .
Note that you may need to authorise XDMoD to obtain information from
your VSC account through the VSC accountpage.
@@ -18,8 +14,5 @@ web application shows you several features through a series of tips.
Located in the upper right corner of the web page is the help button,
taking you to the XDMoD User Manual. As things may change, we recommend
-checking out the provided documenation for information on XDMoD use:
-
-
-
-
+checking out the provided documentation for information on XDMoD use
+.
diff --git a/mkdocs/docs/macros/firsttimeconnection.md b/mkdocs/docs/macros/firsttimeconnection.md
index 22b390a34a5..0e0f3d0ba63 100644
--- a/mkdocs/docs/macros/firsttimeconnection.md
+++ b/mkdocs/docs/macros/firsttimeconnection.md
@@ -3,7 +3,10 @@ Alert will appear and you will be asked to verify the authenticity of the
login node.
Make sure the fingerprint in the alert matches one of the following:
-{{ puttyFirstConnect }}
+
+```
+{{ puttyFirstConnect }}
+```
If it does, press ***Yes***; if it doesn't, please contact {{ hpcinfo }}.