From 0b04e45ca136ca2c5d390c8668a203b1a9287abd Mon Sep 17 00:00:00 2001
From: Lukas Barragan Torres
Date: Fri, 5 Jul 2024 13:17:30 +0200
Subject: [PATCH] fix typos

---
 mkdocs/docs/HPC/fine_tuning_job_specifications.md | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/mkdocs/docs/HPC/fine_tuning_job_specifications.md b/mkdocs/docs/HPC/fine_tuning_job_specifications.md
index e9629915fea..0bec9630c39 100644
--- a/mkdocs/docs/HPC/fine_tuning_job_specifications.md
+++ b/mkdocs/docs/HPC/fine_tuning_job_specifications.md
@@ -14,7 +14,7 @@ can slow down the run time of your application, but also block {{hpc}}
 resources for other users.
 
 Specifying the "optimal" Job Parameters requires some knowledge of your
-application (e.g., how many parallel threads does my application uses,
+application (e.g., how many parallel threads does my application use,
 is there a lot of inter-process communication, how much memory does my
 application need) and also some knowledge about the {{hpc}} infrastructure
 (e.g., what kind of multi-core processors are available, which nodes
@@ -78,7 +78,7 @@ taken to be on the safe side.
 
 It is also wise to check the walltime on different compute nodes or to
 select the "slowest" compute node for your walltime tests. Your estimate
-should appropriate in case your application will run on the "slowest"
+should be appropriate in case your application will run on the "slowest"
 (oldest) compute nodes.
 
 The walltime can be specified in a job scripts as:
@@ -184,7 +184,7 @@ Whereby:
 3.  The third column shows the memory utilisation, expressed in
     percentages of the full available memory. At full memory consumption,
     19.2% of the memory was being used by our application.
-    With the *"free"* command, we have previously seen that we had a
+    With the `free` command, we have previously seen that we had a
     node of 16 GB in this example. 3 GB is indeed more or less 19.2% of
     the full available memory.
 4.  The fourth column shows you the CPU utilisation, expressed in
@@ -240,7 +240,7 @@ htop
 horizontally to see all processes and their full command lines.
 
 <pre><code>$ top
-$ htot
+$ htop
 </code></pre>
 
 ### Setting the memory parameter {: #pbs_mem }
@@ -295,7 +295,7 @@ are working at full load.
 
 The number of core and nodes that a user shall request fully depends on
 the architecture of the application. Developers design their
-applications with a strategy for parallelisation in mind. The
+applications with a strategy for parallelization in mind. The
 application can be designed for a certain fixed number or for a
 configurable number of nodes and cores. It is wise to target a specific
 set of compute nodes (e.g., Westmere, Harpertown) for your computing
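
Supplementary sketch (not part of the patch): the job parameters the patched passages discuss (walltime, memory, cores and nodes) are all requested through `#PBS -l` directives in a job script, as the `#pbs_mem` anchor suggests. The values below are illustrative estimates, and `my_application` is a placeholder, not a name from the docs:

<pre><code>#!/bin/bash
#PBS -l walltime=72:00:00    # generous upper bound, estimated on the "slowest" (oldest) nodes
#PBS -l mem=4gb              # peak memory, measured beforehand with free/top/htop
#PBS -l nodes=1:ppn=4        # one node, four cores, matching the application's parallelization strategy

cd $PBS_O_WORKDIR            # run from the directory the job was submitted from
./my_application             # placeholder for the actual executable
</code></pre>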