diff --git a/.github/workflows/detect_html.yml b/.github/workflows/detect_html.yml
new file mode 100644
index 00000000000..5f679e08c82
--- /dev/null
+++ b/.github/workflows/detect_html.yml
@@ -0,0 +1,21 @@
+name: HTML tag detection
+
+on: [push, pull_request]
+
+jobs:
+  html-check:
+    runs-on: ubuntu-20.04
+    steps:
+      - name: Checkout code
+        uses: actions/checkout@v3
+
+      - name: Check that no HTML tags are present in Markdown files under mkdocs/docs/HPC/
+        run: |
+          # -n: show line number
+          # -H: show filename
+          # -o: show only the matching part of the line
+          if grep -nHo '<[^>]*>' mkdocs/docs/HPC/*.md; then
+            exit 1
+          else
+            echo "No HTML tags detected in Markdown files."
+          fi
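A note on the check above: `grep` exits with status 0 when it finds a match, so the `if` branch fails the job exactly when a tag is printed. Keep in mind that `<[^>]*>` matches any single-line `<...>` span, not just valid HTML, so prose containing inequalities can also trip it; a quick sanity check in a local shell:

```shell
# The workflow's pattern matches any bracketed span on a single line:
$ echo 'wrapped in <center> tags' | grep -o '<[^>]*>'
<center>
# ...but also text that is not an HTML tag at all:
$ echo 'if a < b and b > c' | grep -o '<[^>]*>'
< b and b >
```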
diff --git a/mkdocs/docs/HPC/jupyter.md b/mkdocs/docs/HPC/jupyter.md
index b57d52484d0..3f6776b38e7 100644
--- a/mkdocs/docs/HPC/jupyter.md
+++ b/mkdocs/docs/HPC/jupyter.md
@@ -10,33 +10,33 @@ A [Jupyter notebook](https://jupyter.org/) is an interactive, web-based environm
 
 Through the [HPC-UGent web portal](web_portal.md) you can easily start a Jupyter notebook on a workernode, via the *Jupyter Notebook* button under the *Interactive Apps* menu item.
 
-<center>
+
 ![image](img/ood_start_jupyter.png)
-</center>
+
 
 After starting the Jupyter notebook using the *Launch* button, you will see it being added in state *Queued* in the overview of interactive sessions (see the *My Interactive Sessions* menu item):
 
-<center>
+
 ![image](img/ood_jupyter_queued.png)
-</center>
+
 
 When your job hosting the Jupyter notebook starts running, the status will first change to *Starting*:
 
-<center>
+
 ![image](img/ood_jupyter_starting.png)
-</center>
+
 
 and eventually the status will change to *Running*, and you will be able to connect to the Jupyter environment using the blue *Connect to Jupyter* button:
 
-<center>
+
 ![image](img/ood_jupyter_running.png)
-</center>
+
 
 This will launch the Jupyter environment in a new browser tab, where you can open an existing notebook by navigating to the directory where it is located and clicking it. You can also create a new notebook by clicking on `File`>`New`>`Notebook`:
 
-<center>
+
 ![image](img/ood_jupyter_new_notebook.png)
-</center>
+
 
 ### Using extra Python packages
@@ -45,9 +45,7 @@ The first thing we need to do is finding the modules that contain our package of
 
 To find the appropriate modules, it is recommended to use the shell within the web portal under `Clusters`>`>_login Shell Access`.
 
-<center>
 ![image](img/ood_open_shell.png)
-</center>
 
 We can see all available versions of the SciPy module by using `module avail SciPy-bundle`:
 
@@ -65,9 +63,9 @@ $ module avail SciPy-bundle
 
 Not all modules will work for every notebook; we need to use one that uses the same toolchain as the notebook we want to launch. To find that toolchain, we can look at the `JupyterNotebook version` field when creating a notebook. In our example, `7.2.0` is the version of the notebook and `GCCcore/13.2.0` is the toolchain used.
 
-<center>
+
 ![image](img/ood_jupyter_version.png)
-</center>
+
 
 Module names include the toolchain that was used to install the module (for example `gfbf-2023b` in `SciPy-bundle/2023.11-gfbf-2023b` means that the module uses the toolchain `gfbf/2023b`). To see which modules are compatible with each other, you can check the table on the [page about Module conflicts](troubleshooting.md#module-conflicts). Another way to find out which `GCCcore` subtoolchain goes with the particular toolchain of the module (such as `gfbf/2023b`) is to use `module show`. In particular, using `module show <module name> | grep GCC` (before the module has been loaded) will return this `GCCcore` version.
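To make the `module show` tip concrete, a minimal sketch using the example module from the surrounding text (the exact lines printed depend on the Lmod version and the cluster):

```shell
# Print the GCCcore subtoolchain behind gfbf/2023b without loading
# anything; the GCCcore version shown should match the notebook's
# toolchain (GCCcore/13.2.0 in the example above).
$ module show SciPy-bundle/2023.11-gfbf-2023b | grep GCC
```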
@@ -101,6 +99,5 @@ Lmod has detected the following error: ...
 
 Now that we found the right module for the notebook, add `module load <module name>` in the `Custom code` field when creating a notebook, and you can make use of the packages within that notebook.
 
-<center>
+
 ![image](img/ood_jupyter_custom_code.png)
-</center>
\ No newline at end of file
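The *Custom code* field accepts plain shell, so that final step boils down to a single line (module name taken from the example above):

```shell
# Entered in the "Custom code" field when creating the notebook;
# pick a module whose toolchain matches the notebook's GCCcore version.
module load SciPy-bundle/2023.11-gfbf-2023b
```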
diff --git a/mkdocs/docs/HPC/torque_options.md b/mkdocs/docs/HPC/torque_options.md
index cc7da6c4812..6cf35c7c077 100644
--- a/mkdocs/docs/HPC/torque_options.md
+++ b/mkdocs/docs/HPC/torque_options.md
@@ -4,20 +4,21 @@ Below is a list of the most common and useful directives.
 
-| Option | System type | Description |
-|:---------:|:-------------:|:----------------------------------------------------------------------------|
-| -k | All | Send "stdout" and/or "stderr" to your home directory when the job runs <br>**#PBS -k o** or **#PBS -k e** or **#PBS -koe**<br> |
-| -l | All | Precedes a resource request, e.g., processors, wallclock |
-| -M | All | Send an e-mail messages to an alternative e-mail address <br>**#PBS -M me@mymail.be**<br> |
-| -m | All | Send an e-mail address when a job **b**egins execution and/or **e**nds or **a**borts <br>**#PBS -m b** or **#PBS -m be** or **#PBS -m ba** |
-| mem | Shared Memory | Memory & Specifies the amount of memory you need for a job. <br>**#PBS -I mem=90gb** |
-| mpiproces | Clusters | Number of processes per node on a cluster. This should equal number of processors on a node in most cases. <br>**#PBS -l mpiprocs=4** |
-| -N | All | Give your job a unique name <br>**#PBS -N galaxies1234**<br> |
-| -ncpus | Shared Memory | The number of processors to use for a shared memory job. <br>**#PBS ncpus=4**<br> |
-| -r | All | ontrol whether or not jobs should automatically re-run from the start if the system crashes or is rebooted. Users with check points might not wish this to happen. <br>**#PBS -r n**<br><br>**#PBS -r y**<br> |
-| select | Clusters | Number of compute nodes to use. Usually combined with the mpiprocs directive <br>**#PBS -l select=2**<br> |
-| -V | All | Make sure that the environment in which the job **runs** is the same as the environment in which it was **submitted <br>#PBS -V<br>** |
-| Walltime | All | The maximum time a job can run before being stopped. If not used a default of a few minutes is used. Use this flag to prevent jobs that go bad running for hundreds of hours. Format is HH:MM:SS <br>**#PBS -l walltime=12:00:00**<br> |
+| Option | System type | Description | Jobscript comment |
+|:--------:|:-------------:|:----------------------------------------------------------------------------------------------------------|:--------------------------------------------------|
+| -k | All | Send "stdout" and/or "stderr" to your home directory when the job runs | **#PBS -k o** or **#PBS -k e** or **#PBS -k oe** |
+| -l | All | Precedes a resource request, e.g., processors, wallclock | |
+| -M | All | Send an e-mail message to an alternative e-mail address | **#PBS -M me@mymail.be** |
+| -m | All | Send an e-mail when a job begins execution, ends, or aborts | **#PBS -m b** or **#PBS -m be** or **#PBS -m ba** |
+| mem | Shared Memory | Specifies the amount of memory you need for a job. | **#PBS -l mem=90gb** |
+| mpiprocs | Clusters | Number of processes per node on a cluster. This usually equals the number of processors on a node. | **#PBS -l mpiprocs=4** |
+| -N | All | Give your job a unique name | **#PBS -N galaxies1234** |
+| ncpus | Shared Memory | The number of processors to use for a shared memory job. | **#PBS -l ncpus=4** |
+| -r | All | Control whether or not jobs should automatically re-run from the start if the system crashes or reboots. | **#PBS -r n** or **#PBS -r y** |
+| select | Clusters | Number of compute nodes to use. Usually combined with the mpiprocs directive. | **#PBS -l select=2** |
+| -V | All | Ensure that the environment in which the job runs is the same as the one in which it was submitted | **#PBS -V** |
+| walltime | All | Maximum time a job can run before being stopped. Format is HH:MM:SS | **#PBS -l walltime=12:00:00** |
+
 
 ## Environment Variables in Batch Job Scripts
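For context on the reworked table above: the *Jobscript comment* column shows each directive as it would appear in a jobscript header, where several are typically combined. A minimal sketch using only values from the table's own examples:

```shell
#!/bin/bash
#PBS -N galaxies1234        # unique job name
#PBS -l walltime=12:00:00   # stop the job after 12 hours (HH:MM:SS)
#PBS -l mem=90gb            # amount of memory needed
#PBS -m be                  # e-mail when the job begins and ends
#PBS -M me@mymail.be        # where to send those e-mails

echo "Running on $(hostname)"
```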