From ded0f450f8fb2c16e4399491d8cdb910f96e1f28 Mon Sep 17 00:00:00 2001
From: Lukas Barragan Torres
Date: Fri, 5 Jul 2024 16:44:49 +0200
Subject: [PATCH] fixed typos in multi job submission docs

---
 mkdocs/docs/HPC/multi_job_submission.md | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/mkdocs/docs/HPC/multi_job_submission.md b/mkdocs/docs/HPC/multi_job_submission.md
index 3f8fa2c7f64..e568ecfd2b5 100644
--- a/mkdocs/docs/HPC/multi_job_submission.md
+++ b/mkdocs/docs/HPC/multi_job_submission.md
@@ -22,7 +22,7 @@ huge amounts of small jobs will create a lot of overhead, and can slow
 down the whole cluster. It would be better to bundle those jobs in
 larger sets. In TORQUE, an experimental feature known as "*job arrays*"
 existed to allow the creation of multiple jobs with one *qsub* command,
-but is was not supported by Moab, the current scheduler.
+but it was not supported by Moab, the current scheduler.
 
 The "**Worker framework**" has been developed to address this issue.
 
@@ -50,7 +50,7 @@ scenario that can be reduced to a **MapReduce** approach.[^1]
 First go to the right directory:
 <pre>$ cd ~/examples/Multi-job-submission/par_sweep</pre>
 
-Suppose the program the user wishes to run the "*weather*" program,
+Suppose the user wishes to run the "*weather*" program,
 which takes three parameters: a temperature, a pressure and a volume. A
 typical call of the program looks like:
 <pre>$ ./weather -t 20 -p 1.05 -v 4.3</pre>
@@ -366,7 +366,7 @@ This will summarise the log file every 60 seconds.
 
 ### Time limits for work items
 
-Sometimes, the execution of a work item takes long than expected, or
+Sometimes, the execution of a work item takes longer than expected, or
 worse, some work items get stuck in an infinite loop. This situation is
 unfortunate, since it implies that work items that could successfully
 execute are not even started. Again, the Worker framework offers a
@@ -392,8 +392,7 @@ it can be used outside the Worker framework as well.
 
 ### Resuming a Worker job
 
-Unfortunately, it is not always easy to estimate the walltime for a job,
-and consequently, sometimes the latter is underestimated. When using the
+Unfortunately, walltime is sometimes underestimated. When using the
 Worker framework, this implies that not all work items will have been
 processed. Worker makes it very easy to resume such a job without having
 to figure out which work items did complete successfully, and which
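
Context for the "Resuming a Worker job" hunk above: the surrounding documentation describes restarting an incomplete Worker job so that only unfinished work items are rerun. As a minimal sketch, not part of this patch, and assuming the Worker framework's `wresume` command with a `-jobid` option (with `<jobid>` as a placeholder for the ID of the original, incomplete job), resuming would look like:

<pre>$ wresume -jobid &lt;jobid&gt;</pre>

Under that assumption, Worker consults the original job's log to skip work items that already completed and processes only the remaining ones.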