
Commit

Merge pull request #809 from marinakraeva/patch-18
Update compute.md
MoeRichert-USDA authored Jan 7, 2025
2 parents a7b9135 + 6a3c5a7 commit 5de1675
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion pages/about/compute.md
@@ -18,7 +18,7 @@ subnav:
---

## Ceres HPC Cluster | Ames, IA
- Ceres is an ARS-owned high-performance computing (HPC) cluster connected to SCINet and located in Ames, IA. The original cluster build included 72 regular compute nodes, 5 high memory nodes, and a two Petabyte file system for a range of scientific applications. The cluster has been extended multiple times and currently has 196 regular compute nodes, 26 high-memory nodes and one GPU node. A small subset of nodes, called “priority” nodes, has been funded by Research Units. “Priority” nodes are available to all ARS users when not in use by their funders. The ['Technical Overview' in the Ceres User Manual]({{ site.baseurl }}/guides/resources/ceres#technical-overview) describes the number of logical cores and available memory for each type of node.
+ Ceres is an ARS-owned high-performance computing (HPC) cluster connected to SCINet and located in Ames, IA. The original cluster build included 72 regular compute nodes, 5 high memory nodes, and a two Petabyte file system for a range of scientific applications. The cluster has been extended multiple times and currently has 196 regular compute nodes and 26 high-memory nodes. A small subset of nodes, called “priority” nodes, has been funded by Research Units. “Priority” nodes are available to all ARS users when not in use by their funders. The ['Technical Overview' in the Ceres User Manual]({{ site.baseurl }}/guides/resources/ceres#technical-overview) describes the number of logical cores and available memory for each type of node.

All nodes run on Linux Centos and compute jobs are managed by the SLURM scheduler. The system is configured with a login node, which users access to submit compute jobs to the SLURM scheduler.

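For context on the workflow the changed paragraph describes, a minimal SLURM batch script of the kind submitted from the Ceres login node might look like the sketch below. This is not taken from the commit or the SCINet guides; the partition name, module name, and resource requests are placeholders and should be checked against the Ceres User Manual.

```bash
#!/bin/bash
#SBATCH --job-name=hello-ceres     # illustrative job name
#SBATCH --partition=short          # partition name is an assumption; run `sinfo` to list real partitions
#SBATCH --nodes=1                  # request one compute node
#SBATCH --ntasks=4                 # four tasks (cores) for this example
#SBATCH --time=00:30:00            # 30-minute walltime limit

# Everything below runs on the compute node(s) SLURM allocates, not on the login node.
echo "Running on $(hostname)"
srun hostname                      # launch the task(s) under SLURM's control
```

In this sketch the script would be submitted from the login node with `sbatch hello-ceres.sh` and monitored with `squeue -u $USER`; the SLURM scheduler then dispatches the job to an available compute node, as the documentation above describes.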

