
Commit

Deployed d06515d with MkDocs version: 1.6.0
Unknown committed Aug 13, 2024
1 parent 7162fd6 commit dfc07c0
Showing 4 changed files with 68 additions and 6 deletions.
8 changes: 5 additions & 3 deletions index.html
@@ -1138,9 +1138,11 @@ <h1 id="cirrus">Cirrus</h1>
information on how to get access to the system please see the <a href="http://www.cirrus.ac.uk">Cirrus
website</a>.</p>
<p>The Cirrus facility is based around an SGI ICE XA system. There are 280
standard compute nodes and 38 GPU compute nodes. Each standard compute
node has 256 GiB of memory and contains two 2.1 GHz, 18-core Intel Xeon
(Broadwell) processors. Each GPU compute node has 384 GiB of memory,
standard compute nodes, 1 high memory compute node and 38 GPU compute
nodes. Each standard compute node has 256 GiB of memory and contains two
2.1 GHz, 18-core Intel Xeon (Broadwell) processors. Each high memory
compute node has 3 TiB of memory and contains four 2.7 GHz, 28-core Intel
Xeon (Platinum) processors. Each GPU compute node has 384 GiB of memory,
contains two 2.4 GHz, 20-core Intel Xeon (Cascade Lake) processors and
four NVIDIA Tesla V100-SXM2-16GB (Volta) GPU accelerators connected to
the host processors and each other via PCIe. All nodes are connected
2 changes: 1 addition & 1 deletion search/search_index.json

Large diffs are not rendered by default.

Binary file modified sitemap.xml.gz
Binary file not shown.
64 changes: 62 additions & 2 deletions user-guide/batch/index.html
@@ -537,6 +537,15 @@
</span>
</a>

</li>

<li class="md-nav__item">
<a href="#primary-resources-on-high-memory-cpu-compute-nodes" class="md-nav__link">
<span class="md-ellipsis">
Primary resources on high memory (CPU) compute nodes
</span>
</a>

</li>

<li class="md-nav__item">
@@ -1598,6 +1607,15 @@
</span>
</a>

</li>

<li class="md-nav__item">
<a href="#primary-resources-on-high-memory-cpu-compute-nodes" class="md-nav__link">
<span class="md-ellipsis">
Primary resources on high memory (CPU) compute nodes
</span>
</a>

</li>

<li class="md-nav__item">
@@ -2059,6 +2077,33 @@ <h3 id="primary-resources-on-gpu-nodes">Primary resources on GPU nodes</h3>
the entire node, <em>i.e.</em>, 4 GPUs, even if you don't request all the GPUs
in your submission script.</p>
</div>
<h3 id="primary-resources-on-high-memory-cpu-compute-nodes">Primary resources on high memory (CPU) compute nodes</h3>
<p>The <em>primary resource</em> you request on the high memory compute node is CPU
cores. The maximum amount of memory you are allocated is computed as the
number of CPU cores you requested multiplied by 1/112th of the total
memory available (as there are 112 CPU cores per node). So, if you
request the full node (112 cores), then you will be allocated a maximum
of all of the memory (3 TB) available on the node; however, if you
request 1 core, then you will be assigned a maximum of 3000/112 = 26.8 GB
of the memory available on the node.</p>
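<p>As a quick illustrative check of the rule above (the numbers mirror the
worked example in the text; the snippet is a sketch, not part of any Cirrus
tooling):</p>

```shell
#!/bin/bash
# Sketch: maximum memory for a given core request on the high memory node.
# Rule from the text above: max memory = requested cores * (3000 GB / 112).
max_mem() {
  awk -v c="$1" 'BEGIN { printf "%.1f GB\n", c * 3000 / 112 }'
}
max_mem 1     # one core   -> prints "26.8 GB"
max_mem 112   # full node  -> prints "3000.0 GB"
```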
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>Using the <code>--exclusive</code> option in jobs will give you access to the full
node memory even if you do not explicitly request all of the CPU cores
on the node.</p>
</div>
<div class="admonition warning">
<p class="admonition-title">Warning</p>
<p>Using the <code>--exclusive</code> option will charge your account for the usage of
the entire node, even if you don't request all the cores in your
scripts.</p>
</div>
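<p>For illustration, a minimal submission script using <code>--exclusive</code>
on the high memory node might look like the following sketch (the job name,
time limit and executable are placeholders, not taken from the Cirrus
documentation; the partition and QoS names come from the tables in this
page):</p>

```shell
#!/bin/bash
# Hypothetical sketch of an exclusive high memory job; names are illustrative.
#SBATCH --job-name=highmem_example
#SBATCH --partition=highmem    # the single high memory node
#SBATCH --qos=highmem
#SBATCH --exclusive            # full node memory; charged for the whole node
#SBATCH --time=01:00:00

srun ./my_program
```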
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>You will not generally have access to the full amount of memory resource
on the node as some is retained for running the operating system and
other system processes.</p>
</div>
<h3 id="partitions">Partitions</h3>
<p>On Cirrus, compute nodes are grouped into partitions. You will have to
specify a partition using the <code>--partition</code> option in your submission
@@ -2075,13 +2120,19 @@ <h3 id="partitions">Partitions</h3>
<tbody>
<tr>
<td>standard</td>
<td>CPU nodes with 2x 18-core Intel Broadwell processors</td>
<td>CPU nodes with 2x 18-core Intel Broadwell processors, 256 GB memory</td>
<td>352</td>
<td></td>
</tr>
<tr>
<td>highmem</td>
<td>CPU node with 4x 28-core Intel Xeon Platinum processors, 3 TB memory</td>
<td>1</td>
<td></td>
</tr>
<tr>
<td>gpu</td>
<td>GPU nodes with 4x Nvidia V100 GPU and 2x 20-core Intel Cascade Lake processors</td>
<td>GPU nodes with 4x Nvidia V100 GPU and 2x 20-core Intel Cascade Lake processors, 384 GB memory</td>
<td>36</td>
<td></td>
</tr>
@@ -2121,6 +2172,15 @@ <h3 id="quality-of-service-qos">Quality of Service (QoS)</h3>
<td></td>
</tr>
<tr>
<td>highmem</td>
<td>1 job</td>
<td>2 jobs</td>
<td>24 hours</td>
<td>1 node</td>
<td>highmem</td>
<td></td>
</tr>
<tr>
<td>largescale</td>
<td>1 job</td>
<td>4 jobs</td>

0 comments on commit dfc07c0
