Rename references to master branch #243

Open
wants to merge 4 commits into base: master
2 changes: 1 addition & 1 deletion applications/plexos-hpc-walkthrough/RunFiles/enhanced
@@ -42,7 +42,7 @@ module purge
module load centos mono/$MONO_VERSION xpressmp/$XPRESSMP_VERSION plexos/$PLEXOS_VERSION

# Get our data
wget https://github.nrel.gov/tkaiser2/plexos/raw/master/week.tgz
wget https://github.nrel.gov/tkaiser2/plexos/raw/code-examples/week.tgz
tar -xzf week.tgz

# What we have
2 changes: 1 addition & 1 deletion applications/plexos-hpc-walkthrough/RunFiles/simple
@@ -27,7 +27,7 @@ module purge
module load centos mono/$MONO_VERSION xpressmp/$XPRESSMP_VERSION plexos/$PLEXOS_VERSION

# Get our data
wget https://github.nrel.gov/tkaiser2/plexos/raw/master/week.tgz
wget https://github.nrel.gov/tkaiser2/plexos/raw/code-examples/week.tgz
tar -xzf week.tgz
ls -lt

10 changes: 5 additions & 5 deletions applications/plexos-quick-start/README.md
@@ -74,12 +74,12 @@ or just copy them from this page
or

```bash
wget https://github.nrel.gov/raw/tkaiser2/plexos/master/scripts/enhanced
wget https://github.nrel.gov/raw/tkaiser2/plexos/code-examples/scripts/enhanced
```

and
```bash
wget https://github.nrel.gov/raw/tkaiser2/plexos/master/scripts/simple
wget https://github.nrel.gov/raw/tkaiser2/plexos/code-examples/scripts/simple
```

## To run:
@@ -88,7 +88,7 @@ If you have never run Plexos on Eagle you will need to set up the license.
There is a script to do that. Download it and run it.

```bash
wget https://github.nrel.gov/raw/tkaiser2/plexos/master/scripts/makelicense
wget https://github.nrel.gov/raw/tkaiser2/plexos/code-examples/scripts/makelicense
chmod 700 makelicense
./makelicense
```
@@ -146,7 +146,7 @@ module purge
module load centos mono/$MONO_VERSION xpressmp/$XPRESSMP_VERSION plexos/$PLEXOS_VERSION

# Get our data
wget https://github.nrel.gov/tkaiser2/plexos/raw/master/week.tgz
wget https://github.nrel.gov/tkaiser2/plexos/raw/code-examples/week.tgz
tar -xzf week.tgz
ls -lt

@@ -209,7 +209,7 @@ module purge
module load centos mono/$MONO_VERSION xpressmp/$XPRESSMP_VERSION plexos/$PLEXOS_VERSION

# Get our data
wget https://github.nrel.gov/tkaiser2/plexos/raw/master/week.tgz
wget https://github.nrel.gov/tkaiser2/plexos/raw/code-examples/week.tgz
tar -xzf week.tgz

# What we have
2 changes: 1 addition & 1 deletion applications/spark/README.md
@@ -190,7 +190,7 @@ CPUs and are not bottle-necked by the storage.

Here is an example of how to run `htop` on multiple nodes simultaneously with `tmux`.

Download this script: https://raw.githubusercontent.com/johnko/ssh-multi/master/bin/ssh-multi
Download this script: https://raw.githubusercontent.com/johnko/ssh-multi/code-examples/bin/ssh-multi

Run it like this:
```
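# Editor's sketch: the original invocation is collapsed in this diff view.
# It assumes ssh-multi accepts a space-separated node list via -d (flag
# assumed; check the script's usage text) and that tmux is on your PATH.
chmod +x ssh-multi
./ssh-multi -d "node1 node2 node3"   # opens one tmux pane per node
htop                                 # typed once; ssh-multi synchronizes panes, so it runs on every node
```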
32 changes: 16 additions & 16 deletions applications/vasp/Performance Study 2/README.md
@@ -1,4 +1,4 @@
A study was performed to evaluate the performance of VASP on Swift and Eagle using [ESIF VASP Benchmarks](https://github.com/NREL/ESIFHPC3/tree/master/VASP) 1 and 2. Benchmark 1 is a system of 16 atoms (Cu<sub>4</sub>In<sub>4</sub>Se<sub>8</sub>), and Benchmark 2 is a system of 519 atoms (Ag<sub>504</sub>C<sub>4</sub>H<sub>10</sub>S<sub>1</sub>).
A study was performed to evaluate the performance of VASP on Swift and Eagle using [ESIF VASP Benchmarks](https://github.com/NREL/ESIFHPC3/tree/code-examples/VASP) 1 and 2. Benchmark 1 is a system of 16 atoms (Cu<sub>4</sub>In<sub>4</sub>Se<sub>8</sub>), and Benchmark 2 is a system of 519 atoms (Ag<sub>504</sub>C<sub>4</sub>H<sub>10</sub>S<sub>1</sub>).

On Swift, the default builds of VASP installed on the system as modules were used. The Intel MPI build was built with the Intel compilers and the MKL math library, and it was accessed via the "vaspintel" module. The Open MPI build was compiled with the GNU gcc and gfortran compilers, used Open MPI's math libraries, and was accessed via the "vasp" module. Both builds run VASP 6.1.1.

@@ -22,9 +22,9 @@ Running the OpenACC GPU build of VASP (vasp_gpu) on GPU nodes improves performan

* Memory limitation: GPU nodes on Eagle cannot provide as much memory as CPU nodes for VASP jobs, and large VASP jobs may require more GPU nodes to provide enough memory for the calculation. For Benchmark 2, at least 2 full nodes were needed to provide enough memory to complete a calculation. Using more complicated parallelization schemes, the number of nodes necessary to provide enough memory scaled with the increase in the number of problems handled simultaneously.

![Eagle GPU Bench 2](https://github.com/NREL/HPC/blob/master/applications/vasp/Performance%20Study%202/Images/Eagle_GPU_2.png)
![Eagle GPU Bench 2](https://github.com/NREL/HPC/blob/code-examples/applications/vasp/Performance%20Study%202/Images/Eagle_GPU_2.png)

![Eagle GPU Bench 1 4x4x2](https://github.com/NREL/HPC/blob/master/applications/vasp/Performance%20Study%202/Images/Eagle_GPU_1_4x4x2.png)
![Eagle GPU Bench 1 4x4x2](https://github.com/NREL/HPC/blob/code-examples/applications/vasp/Performance%20Study%202/Images/Eagle_GPU_1_4x4x2.png)

### MPI

@@ -33,7 +33,7 @@ Intel MPI is recommended over Open MPI. Using an Intel MPI build of VASP and run
Find scripts for running the Intel MPI and Open MPI builds of VASP in [this section](#Scripts-for-Running-VASP-on-Eagle).

### --cpu-bind Flag
The --cpu-bind flag changes how tasks are assigned to cores throughout the node. Setting --cpu-bind=cores or rank showed no improvement in the performance of VASP on 36 CPUs/node. When running on 18 CPUs/node, setting --cpu-bind=cores shows a small improvement in runtime (~5% decrease) using both Intel MPI and Open MPI. (See [cpu-bind analysis](https://github.com/NREL/HPC/blob/master/applications/vasp/Performance%20Study%202/cpu-bind%20data/cpu-bind_VASP.ipynb) for info on the effect of cpu-bind)
The --cpu-bind flag changes how tasks are assigned to cores throughout the node. Setting --cpu-bind=cores or rank showed no improvement in the performance of VASP on 36 CPUs/node. When running on 18 CPUs/node, setting --cpu-bind=cores shows a small improvement in runtime (~5% decrease) using both Intel MPI and Open MPI. (See [cpu-bind analysis](https://github.com/NREL/HPC/blob/code-examples/applications/vasp/Performance%20Study%202/cpu-bind%20data/cpu-bind_VASP.ipynb) for info on the effect of cpu-bind)

cpu-bind can be set as a flag in an srun command, such as
```
srun --cpu-bind=cores vasp_std
```

@@ -52,9 +52,9 @@ KPAR determines the number of groups across which to divide calculations at each
Runtime does not scale well with the number of kpoints. Benchmark 1 uses a 10x10x5 kpoints grid (500 kpoints). When run with a 4x4x2 kpoints grid (16 kpoints), we should expect the runtime to scale by 16/500 (3.2%) since calculations are being performed at 16 points rather than 500. However, the average scaling factor between Benchmark 1 jobs on Eagle with 10x10x5 grids and 4x4x2 grids is 28% (ranging from ~20%-57%).

### Scripts for Running VASP on Eagle
* [VASP on Eagle with Intel MPI](https://github.com/NREL/HPC/blob/master/applications/vasp/Performance%20Study%202/VASP%20scripts/Eagle_IntelMPI.slurm)
* [VASP on Eagle with Open MPI](https://github.com/NREL/HPC/blob/master/applications/vasp/Performance%20Study%202/VASP%20scripts/Eagle_OpenMPI.slurm)
* [VASP on Eagle on GPUs with OpenACC GPU build using Intel MPI](https://github.com/NREL/HPC/blob/master/applications/vasp/Performance%20Study%202/VASP%20scripts/Eagle_OpenACC_GPU.slurm)
* [VASP on Eagle with Intel MPI](https://github.com/NREL/HPC/blob/code-examples/applications/vasp/Performance%20Study%202/VASP%20scripts/Eagle_IntelMPI.slurm)
* [VASP on Eagle with Open MPI](https://github.com/NREL/HPC/blob/code-examples/applications/vasp/Performance%20Study%202/VASP%20scripts/Eagle_OpenMPI.slurm)
* [VASP on Eagle on GPUs with OpenACC GPU build using Intel MPI](https://github.com/NREL/HPC/blob/code-examples/applications/vasp/Performance%20Study%202/VASP%20scripts/Eagle_OpenACC_GPU.slurm)
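
A minimal submission sketch, assuming one of the scripts above (here `Eagle_IntelMPI.slurm`) has been downloaded into a directory containing the usual VASP inputs; adjust the account, walltime, and module lines inside the script for your own job:

```bash
# Hypothetical usage of the linked Slurm script; run from the directory
# holding INCAR, POSCAR, KPOINTS, and POTCAR.
sbatch Eagle_IntelMPI.slurm
```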

## Swift

@@ -70,11 +70,11 @@ The graphs below are meant to help users identify the number of CPUs/node that w

Intel MPI, performance/core | Intel MPI, performance/node
:-------------------------:|:-------------------------:
![](https://github.com/NREL/HPC/blob/master/applications/vasp/Performance%20Study%202/Images/Swift_2_Intel_Cores.png) | ![](https://github.com/NREL/HPC/blob/master/applications/vasp/Performance%20Study%202/Images/Swift_2_Intel_Nodes.png)
![](https://github.com/NREL/HPC/blob/code-examples/applications/vasp/Performance%20Study%202/Images/Swift_2_Intel_Cores.png) | ![](https://github.com/NREL/HPC/blob/code-examples/applications/vasp/Performance%20Study%202/Images/Swift_2_Intel_Nodes.png)

Open MPI, performance/core | Open MPI, performance/node
:-------------------------:|:-------------------------:
![](https://github.com/NREL/HPC/blob/master/applications/vasp/Performance%20Study%202/Images/Swift_2_Open_Cores.png) | ![](https://github.com/NREL/HPC/blob/master/applications/vasp/Performance%20Study%202/Images/Swift_2_Open_Nodes.png)
![](https://github.com/NREL/HPC/blob/code-examples/applications/vasp/Performance%20Study%202/Images/Swift_2_Open_Cores.png) | ![](https://github.com/NREL/HPC/blob/code-examples/applications/vasp/Performance%20Study%202/Images/Swift_2_Open_Nodes.png)

### MPI

@@ -84,7 +84,7 @@ Find scripts for running the Intel MPI and Open MPI builds of VASP in [this sect

### --cpu-bind Flag

The --cpu-bind flag changes how tasks are assigned to cores throughout the node. On Swift, it is recommended not to use cpu-bind. Running VASP on 64 CPUs/node and 128 CPUs/node, setting --cpu-bind=cores or rank showed no improvement in runtime. Running VASP on 32 CPUs/node, setting --cpu-bind=cores or rank increased runtime by up to 40%. (See [cpu-bind analysis](https://github.com/NREL/HPC/blob/master/applications/vasp/Performance%20Study%202/cpu-bind%20data/cpu-bind_VASP.ipynb) for info on the effect of cpu-bind)
The --cpu-bind flag changes how tasks are assigned to cores throughout the node. On Swift, it is recommended not to use cpu-bind. Running VASP on 64 CPUs/node and 128 CPUs/node, setting --cpu-bind=cores or rank showed no improvement in runtime. Running VASP on 32 CPUs/node, setting --cpu-bind=cores or rank increased runtime by up to 40%. (See [cpu-bind analysis](https://github.com/NREL/HPC/blob/code-examples/applications/vasp/Performance%20Study%202/cpu-bind%20data/cpu-bind_VASP.ipynb) for info on the effect of cpu-bind)

```
srun --cpu-bind=cores vasp_std
```

@@ -102,17 +102,17 @@ KPAR determines the number of groups across which to divide calculations at each

KPAR = 1 | KPAR = 4
:-------------------------:|:-------------------------:
![](https://github.com/NREL/HPC/blob/master/applications/vasp/Performance%20Study%202/Images/Swift_1_K1_N4.png) | ![](https://github.com/NREL/HPC/blob/master/applications/vasp/Performance%20Study%202/Images/Swift_1_K4_N4.png)
![](https://github.com/NREL/HPC/blob/code-examples/applications/vasp/Performance%20Study%202/Images/Swift_1_K1_N4.png) | ![](https://github.com/NREL/HPC/blob/code-examples/applications/vasp/Performance%20Study%202/Images/Swift_1_K4_N4.png)
KPAR = 8 | KPAR = 9
![](https://github.com/NREL/HPC/blob/master/applications/vasp/Performance%20Study%202/Images/Swift_1_K8_N4.png) | ![](https://github.com/NREL/HPC/blob/master/applications/vasp/Performance%20Study%202/Images/Swift_1_K9_N4.png)
![](https://github.com/NREL/HPC/blob/code-examples/applications/vasp/Performance%20Study%202/Images/Swift_1_K8_N4.png) | ![](https://github.com/NREL/HPC/blob/code-examples/applications/vasp/Performance%20Study%202/Images/Swift_1_K9_N4.png)
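
As a minimal sketch of applying this setting (the value 4 below is illustrative only, chosen to match the KPAR = 4 panel above, not a recommendation from the study), KPAR is set in the INCAR before the job is submitted:

```bash
# Illustrative only: add KPAR to the INCAR if it is not already set.
# KPAR is the number of k-point groups VASP computes in parallel.
grep -q "^KPAR" INCAR || echo "KPAR = 4" >> INCAR
```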


### K-Points Scaling

Runtime does not scale well with the number of kpoints. Benchmark 1 uses a 10x10x5 kpoints grid (500 kpoints). When run with a 4x4x2 kpoints grid (16 kpoints), we should expect the runtime to scale by 16/500 (3.2%) since calculations are being performed at 16 points rather than 500. However, the average scaling factor between Benchmark 1 jobs on Swift with 10x10x5 grids and 4x4x2 grids is 28% (ranging from ~19%-39%).

### Scripts for Running VASP on Swift
* [VASP on Swift with Intel MPI](https://github.com/NREL/HPC/blob/master/applications/vasp/Performance%20Study%202/VASP%20scripts/Swift_IntelMPI.slurm)
* [VASP on Swift with Open MPI](https://github.com/NREL/HPC/blob/master/applications/vasp/Performance%20Study%202/VASP%20scripts/Swift_OpenMPI.slurm)
* [VASP on Swift with Shared Nodes using Intel MPI](https://github.com/NREL/HPC/blob/master/applications/vasp/Performance%20Study%202/VASP%20scripts/Swift_IntelMPI_shared_nodes.slurm)
* [VASP on Swift with Shared Nodes using Open MPI](https://github.com/NREL/HPC/blob/master/applications/vasp/Performance%20Study%202/VASP%20scripts/Swift_OpenMPI_shared_nodes.slurm)
* [VASP on Swift with Intel MPI](https://github.com/NREL/HPC/blob/code-examples/applications/vasp/Performance%20Study%202/VASP%20scripts/Swift_IntelMPI.slurm)
* [VASP on Swift with Open MPI](https://github.com/NREL/HPC/blob/code-examples/applications/vasp/Performance%20Study%202/VASP%20scripts/Swift_OpenMPI.slurm)
* [VASP on Swift with Shared Nodes using Intel MPI](https://github.com/NREL/HPC/blob/code-examples/applications/vasp/Performance%20Study%202/VASP%20scripts/Swift_IntelMPI_shared_nodes.slurm)
* [VASP on Swift with Shared Nodes using Open MPI](https://github.com/NREL/HPC/blob/code-examples/applications/vasp/Performance%20Study%202/VASP%20scripts/Swift_OpenMPI_shared_nodes.slurm)
@@ -6,7 +6,7 @@
"source": [
"# VASP Recommendations Analysis\n",
"\n",
"In this study, the ESIF VASP Benchmarks (https://github.com/NREL/ESIFHPC3/tree/master/VASP) 1 and 2 were used. Benchmark 1 is a system of 16 atoms (Cu<sub>4</sub>In<sub>4</sub>Se<sub>8</sub>), and Benchmark 2 is a system of 519 atoms (Ag<sub>504</sub>C<sub>4</sub>H<sub>10</sub>S<sub>1</sub>). \n",
"In this study, the ESIF VASP Benchmarks (https://github.com/NREL/ESIFHPC3/tree/code-examples/VASP) 1 and 2 were used. Benchmark 1 is a system of 16 atoms (Cu<sub>4</sub>In<sub>4</sub>Se<sub>8</sub>), and Benchmark 2 is a system of 519 atoms (Ag<sub>504</sub>C<sub>4</sub>H<sub>10</sub>S<sub>1</sub>). \n",
"\n",
"On Swift, the default builds of VASP installed on the system as modules were used. The Intel MPI build was built with Intel compilers and the mkl math library, and it was accessed via the \"vaspintel\" module. The OpenMPI build was compiled with gnu using gcc and fortran compilers and used OpenMPI's math libraries, and was accessed via the \"vasp\" module. Both builds run VASP 6.1.1. \n",
"\n",
@@ -4,13 +4,13 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"In this study, the ESIF VASP Benchmark 2 (https://github.com/NREL/ESIFHPC3/tree/master/VASP/bench2) was used to study how the cpu-bind flag affects the way that tasks are assigned to cores throughout the node over the runtime of a VASP job on Swfit and Eagle. Benchmark 1 is a system of 16 atoms (Cu4In4Se8), and Benchmark 2 is a system of 519 atoms (Ag504C4H10S1).\n",
"In this study, the ESIF VASP Benchmark 2 (https://github.com/NREL/ESIFHPC3/tree/code-examples/VASP/bench2) was used to study how the cpu-bind flag affects the way that tasks are assigned to cores throughout the node over the runtime of a VASP job on Swfit and Eagle. Benchmark 1 is a system of 16 atoms (Cu4In4Se8), and Benchmark 2 is a system of 519 atoms (Ag504C4H10S1).\n",
"\n",
"On Swift, the default builds of VASP installed on the system as modules were used. The Intel MPI build was built with Intel compilers and the mkl math library, and it was accessed via the \"vaspintel\" module. The OpenMPI build was compiled with gnu using gcc and fortran compilers and used OpenMPI's math libraries, and was accessed via the \"vasp\" module. Both builds run VASP 6.1.1.\n",
"\n",
"On Eagle, the default build of VASP installed on the system is an Intel MPI version of VASP. The Intel MPI build was built with Intel compilers and the mkl math library, and it was accessed via the \"vasp\" module. It runs VASP 6.1.2. No Open MPI VASP build is accessible through the default modules on Eagle, but an Open MPI build can be accessed in an environment via \"source /nopt/nrel/apps/210830a/myenv.2108301742, ml vasp/6.1.1-l2mkbb2\". The OpenMPI build was compiled with gnu using gcc and fortran compilers and used OpenMPI's math libraries. It runs VASP 6.1.1.\n",
"\n",
"The VASP repo (https://github.com/claralarson/HPC/tree/master/applications/vasp/VASP%20Recommendations) contains scripts that can be used to run the Intel MPI and Open MPI builds used in the study to perform calculations on Swift and Eagle.\n",
"The VASP repo (https://github.com/claralarson/HPC/tree/code-examples/applications/vasp/VASP%20Recommendations) contains scripts that can be used to run the Intel MPI and Open MPI builds used in the study to perform calculations on Swift and Eagle.\n",
"\n",
"The cpu-bind flag can be set in the srun command as follows:\n",
"> srun --cpu-bind=cores vasp_std\n",