diff --git a/applications/plexos-hpc-walkthrough/RunFiles/enhanced b/applications/plexos-hpc-walkthrough/RunFiles/enhanced index 373125daf..42595ade4 100644 --- a/applications/plexos-hpc-walkthrough/RunFiles/enhanced +++ b/applications/plexos-hpc-walkthrough/RunFiles/enhanced @@ -42,7 +42,7 @@ module purge module load centos mono/$MONO_VERSION xpressmp/$XPRESSMP_VERSION plexos/$PLEXOS_VERSION # Get our data -wget https://github.nrel.gov/tkaiser2/plexos/raw/master/week.tgz +wget https://github.nrel.gov/tkaiser2/plexos/raw/code-examples/week.tgz tar -xzf week.tgz # What we have diff --git a/applications/plexos-hpc-walkthrough/RunFiles/simple b/applications/plexos-hpc-walkthrough/RunFiles/simple index e0bfbdec0..e6d123350 100644 --- a/applications/plexos-hpc-walkthrough/RunFiles/simple +++ b/applications/plexos-hpc-walkthrough/RunFiles/simple @@ -27,7 +27,7 @@ module purge module load centos mono/$MONO_VERSION xpressmp/$XPRESSMP_VERSION plexos/$PLEXOS_VERSION # Get our data -wget https://github.nrel.gov/tkaiser2/plexos/raw/master/week.tgz +wget https://github.nrel.gov/tkaiser2/plexos/raw/code-examples/week.tgz tar -xzf week.tgz ls -lt diff --git a/applications/plexos-quick-start/README.md b/applications/plexos-quick-start/README.md index 9e2f6e969..b2a463357 100644 --- a/applications/plexos-quick-start/README.md +++ b/applications/plexos-quick-start/README.md @@ -74,12 +74,12 @@ or just copy them from this page or ```bash -wget https://github.nrel.gov/raw/tkaiser2/plexos/master/scripts/enhanced +wget https://github.nrel.gov/raw/tkaiser2/plexos/code-examples/scripts/enhanced ``` and ```bash -wget https://github.nrel.gov/raw/tkaiser2/plexos/master/scripts/simple +wget https://github.nrel.gov/raw/tkaiser2/plexos/code-examples/scripts/simple ``` ## To run: @@ -88,7 +88,7 @@ If you have never run Plexos on Eagle you will need to set up the license. There is a script to do that. Download it and run it. 
```bash -wget https://github.nrel.gov/raw/tkaiser2/plexos/master/scripts/makelicense +wget https://github.nrel.gov/raw/tkaiser2/plexos/code-examples/scripts/makelicense chmod 700 makelicense ./makelicense ``` @@ -146,7 +146,7 @@ module purge module load centos mono/$MONO_VERSION xpressmp/$XPRESSMP_VERSION plexos/$PLEXOS_VERSION # Get our data -wget https://github.nrel.gov/tkaiser2/plexos/raw/master/week.tgz +wget https://github.nrel.gov/tkaiser2/plexos/raw/code-examples/week.tgz tar -xzf week.tgz ls -lt @@ -209,7 +209,7 @@ module purge module load centos mono/$MONO_VERSION xpressmp/$XPRESSMP_VERSION plexos/$PLEXOS_VERSION # Get our data -wget https://github.nrel.gov/tkaiser2/plexos/raw/master/week.tgz +wget https://github.nrel.gov/tkaiser2/plexos/raw/code-examples/week.tgz tar -xzf week.tgz # What we have diff --git a/applications/spark/README.md b/applications/spark/README.md index d0aa36d4f..8ece7026e 100644 --- a/applications/spark/README.md +++ b/applications/spark/README.md @@ -190,7 +190,7 @@ CPUs and are not bottle-necked by the storage. Here is an example of how to run `htop` on multiple nodes simulataneously with `tmux`. -Download this script: https://raw.githubusercontent.com/johnko/ssh-multi/master/bin/ssh-multi +Download this script: https://raw.githubusercontent.com/johnko/ssh-multi/code-examples/bin/ssh-multi Run it like this: ``` diff --git a/applications/vasp/Performance Study 2/README.md b/applications/vasp/Performance Study 2/README.md index 853dd8a29..faf3a60ea 100644 --- a/applications/vasp/Performance Study 2/README.md +++ b/applications/vasp/Performance Study 2/README.md @@ -1,4 +1,4 @@ -A study was performed to evaluate the performance on VASP on Swift and Eagle using [ESIF VASP Benchmarks](https://github.com/NREL/ESIFHPC3/tree/master/VASP) 1 and 2. Benchmark 1 is a system of 16 atoms (Cu4In4Se8), and Benchmark 2 is a system of 519 atoms (Ag504C4H10S1). 
+A study was performed to evaluate the performance on VASP on Swift and Eagle using [ESIF VASP Benchmarks](https://github.com/NREL/ESIFHPC3/tree/code-examples/VASP) 1 and 2. Benchmark 1 is a system of 16 atoms (Cu4In4Se8), and Benchmark 2 is a system of 519 atoms (Ag504C4H10S1). On Swift, the default builds of VASP installed on the system as modules were used. The Intel MPI build was built with Intel compilers and the mkl math library, and it was accessed via the "vaspintel" module. The OpenMPI build was compiled with gnu using gcc and fortran compilers and used OpenMPI's math libraries, and was accessed via the "vasp" module. Both builds run VASP 6.1.1. @@ -22,9 +22,9 @@ Running the OpenACC GPU build of VASP (vasp_gpu) on GPU nodes improves performan * Memory limitation: GPU nodes on Eagle cannot provide as much memory as CPU nodes for VASP jobs, and large VASP jobs may require more GPU nodes to provide enough memory for the calculation. For Benchmark 2, at least 2 full nodes were needed to provide enough memory to complete a calculation. Using more complicated parallelization schemes, the number of nodes necessary to provide enough memory scaled with the increase in number of problems handled simultaneousely. -![Eagle GPU Bench 2](https://github.com/NREL/HPC/blob/master/applications/vasp/Performance%20Study%202/Images/Eagle_GPU_2.png) +![Eagle GPU Bench 2](https://github.com/NREL/HPC/blob/code-examples/applications/vasp/Performance%20Study%202/Images/Eagle_GPU_2.png) -![Eagle GPU Bench 1 4x4x2](https://github.com/NREL/HPC/blob/master/applications/vasp/Performance%20Study%202/Images/Eagle_GPU_1_4x4x2.png) +![Eagle GPU Bench 1 4x4x2](https://github.com/NREL/HPC/blob/code-examples/applications/vasp/Performance%20Study%202/Images/Eagle_GPU_1_4x4x2.png) ### MPI @@ -33,7 +33,7 @@ Intel MPI is recommended over Open MPI. 
Using an Intel MPI build of VASP and run Find scripts for running the Intel MPI and Open MPI builds of VASP in [this section](#Scripts-for-Running-VASP-on-Eagle). ### --cpu-bind Flag -The --cpu-bind flag changes how tasks are assigned to cores throughout the node. Setting --cpu-bind=cores or rank showed no improvement in the performance of VASP on 36 CPUs/node. When running on 18 CPUs/node, setting --cpu-bind=cores shows a small improvement in runtime (~5% decrease) using both Intel MPI and Open MPI. (See [cpu-bind analysis](https://github.com/NREL/HPC/blob/master/applications/vasp/Performance%20Study%202/cpu-bind%20data/cpu-bind_VASP.ipynb) for info on the effect of cpu-bind) +The --cpu-bind flag changes how tasks are assigned to cores throughout the node. Setting --cpu-bind=cores or rank showed no improvement in the performance of VASP on 36 CPUs/node. When running on 18 CPUs/node, setting --cpu-bind=cores shows a small improvement in runtime (~5% decrease) using both Intel MPI and Open MPI. (See [cpu-bind analysis](https://github.com/NREL/HPC/blob/code-examples/applications/vasp/Performance%20Study%202/cpu-bind%20data/cpu-bind_VASP.ipynb) for info on the effect of cpu-bind) cpu-bind can be set as a flag in an srun command, such as ``` @@ -52,9 +52,9 @@ KPAR determines the number of groups across which to divide calculations at each Runtime does not scale well with the number of kpoints. Benchmark 1 uses a 10x10x5 kpoints grid (500 kpoints). When run with a 4x4x2 kpoints grid (16 kpoints), we should expect the runtime to scale by 16/500 (3.2%) since calculations are being performed at 16 points rather than 500. However, the average scaling factor between Benchmark 1 jobs on Eagle with 10x10x5 grids and 4x4x2 grids is 28% (ranging from ~20%-57%). 
### Scripts for Running VASP on Eagle - * [VASP on Eagle with Intel MPI](https://github.com/NREL/HPC/blob/master/applications/vasp/Performance%20Study%202/VASP%20scripts/Eagle_IntelMPI.slurm) - * [VASP on Eagle with Open MPI](https://github.com/NREL/HPC/blob/master/applications/vasp/Performance%20Study%202/VASP%20scripts/Eagle_OpenMPI.slurm) - * [VASP on Eagle on GPUs with OpenACC GPU build using Intel MPI](https://github.com/NREL/HPC/blob/master/applications/vasp/Performance%20Study%202/VASP%20scripts/Eagle_OpenACC_GPU.slurm) + * [VASP on Eagle with Intel MPI](https://github.com/NREL/HPC/blob/code-examples/applications/vasp/Performance%20Study%202/VASP%20scripts/Eagle_IntelMPI.slurm) + * [VASP on Eagle with Open MPI](https://github.com/NREL/HPC/blob/code-examples/applications/vasp/Performance%20Study%202/VASP%20scripts/Eagle_OpenMPI.slurm) + * [VASP on Eagle on GPUs with OpenACC GPU build using Intel MPI](https://github.com/NREL/HPC/blob/code-examples/applications/vasp/Performance%20Study%202/VASP%20scripts/Eagle_OpenACC_GPU.slurm) ## Swift @@ -70,11 +70,11 @@ The graphs below are meant to help users identify the number of CPUs/node that w Intel MPI, performance/core | Intel MPI, performance/node :-------------------------:|:-------------------------: -![](https://github.com/NREL/HPC/blob/master/applications/vasp/Performance%20Study%202/Images/Swift_2_Intel_Cores.png) | ![](https://github.com/NREL/HPC/blob/master/applications/vasp/Performance%20Study%202/Images/Swift_2_Intel_Nodes.png) +![](https://github.com/NREL/HPC/blob/code-examples/applications/vasp/Performance%20Study%202/Images/Swift_2_Intel_Cores.png) | ![](https://github.com/NREL/HPC/blob/code-examples/applications/vasp/Performance%20Study%202/Images/Swift_2_Intel_Nodes.png) Open MPI, performance/core | Open MPI, performance/node :-------------------------:|:-------------------------: -![](https://github.com/NREL/HPC/blob/master/applications/vasp/Performance%20Study%202/Images/Swift_2_Open_Cores.png) | 
![](https://github.com/NREL/HPC/blob/master/applications/vasp/Performance%20Study%202/Images/Swift_2_Open_Nodes.png) +![](https://github.com/NREL/HPC/blob/code-examples/applications/vasp/Performance%20Study%202/Images/Swift_2_Open_Cores.png) | ![](https://github.com/NREL/HPC/blob/code-examples/applications/vasp/Performance%20Study%202/Images/Swift_2_Open_Nodes.png) ### MPI @@ -84,7 +84,7 @@ Find scripts for running the Intel MPI and Open MPI builds of VASP in [this sect ### --cpu-bind Flag -The --cpu-bind flag changes how tasks are assigned to cores throughout the node. On Swift, it is recommended not to use cpu-bind. Running VASP on 64 CPUs/node and 128 CPUs/node, setting --cpu-bind=cores or rank showed no improvement in runtime. Running VASP on 32 CPUs/node, setting --cpu-bind=cores or rank increased runtime by up to 40%. (See [cpu-bind analysis](https://github.com/NREL/HPC/blob/master/applications/vasp/Performance%20Study%202/cpu-bind%20data/cpu-bind_VASP.ipynb) for info on the effect of cpu-bind) +The --cpu-bind flag changes how tasks are assigned to cores throughout the node. On Swift, it is recommended not to use cpu-bind. Running VASP on 64 CPUs/node and 128 CPUs/node, setting --cpu-bind=cores or rank showed no improvement in runtime. Running VASP on 32 CPUs/node, setting --cpu-bind=cores or rank increased runtime by up to 40%. 
(See [cpu-bind analysis](https://github.com/NREL/HPC/blob/code-examples/applications/vasp/Performance%20Study%202/cpu-bind%20data/cpu-bind_VASP.ipynb) for info on the effect of cpu-bind) ``` srun --cpu-bind=cores vasp_std @@ -102,9 +102,9 @@ KPAR determines the number of groups across which to divide calculations at each KPAR = 1 | KPAR = 4 :-------------------------:|:-------------------------: -![](https://github.com/NREL/HPC/blob/master/applications/vasp/Performance%20Study%202/Images/Swift_1_K1_N4.png) | ![](https://github.com/NREL/HPC/blob/master/applications/vasp/Performance%20Study%202/Images/Swift_1_K4_N4.png) +![](https://github.com/NREL/HPC/blob/code-examples/applications/vasp/Performance%20Study%202/Images/Swift_1_K1_N4.png) | ![](https://github.com/NREL/HPC/blob/code-examples/applications/vasp/Performance%20Study%202/Images/Swift_1_K4_N4.png) KPAR = 8 | KPAR = 9 -![](https://github.com/NREL/HPC/blob/master/applications/vasp/Performance%20Study%202/Images/Swift_1_K8_N4.png) | ![](https://github.com/NREL/HPC/blob/master/applications/vasp/Performance%20Study%202/Images/Swift_1_K9_N4.png) +![](https://github.com/NREL/HPC/blob/code-examples/applications/vasp/Performance%20Study%202/Images/Swift_1_K8_N4.png) | ![](https://github.com/NREL/HPC/blob/code-examples/applications/vasp/Performance%20Study%202/Images/Swift_1_K9_N4.png) ### K-Points Scaling @@ -112,7 +112,7 @@ KPAR = 8 | KPAR = 9 Runtime does not scale well with the number of kpoints. Benchmark 1 uses a 10x10x5 kpoints grid (500 kpoints). When run with a 4x4x2 kpoints grid (16 kpoints), we should expect the runtime to scale by 16/500 (3.2%) since calculations are being performed at 16 points rather than 500. However, the average scaling factor between Benchmark 1 jobs on Swift with 10x10x5 grids and 4x4x2 grids is 28% (ranging from ~19%-39%). 
### Scripts for Running VASP on Swift - * [VASP on Swift with Intel MPI](https://github.com/NREL/HPC/blob/master/applications/vasp/Performance%20Study%202/VASP%20scripts/Swift_IntelMPI.slurm) - * [VASP on Swift with Open MPI](https://github.com/NREL/HPC/blob/master/applications/vasp/Performance%20Study%202/VASP%20scripts/Swift_OpenMPI.slurm) - * [VASP on Swift with Shared Nodes using Intel MPI](https://github.com/NREL/HPC/blob/master/applications/vasp/Performance%20Study%202/VASP%20scripts/Swift_IntelMPI_shared_nodes.slurm) - * [VASP on Swift with Shared Nodes using Open MPI](https://github.com/NREL/HPC/blob/master/applications/vasp/Performance%20Study%202/VASP%20scripts/Swift_OpenMPI_shared_nodes.slurm) + * [VASP on Swift with Intel MPI](https://github.com/NREL/HPC/blob/code-examples/applications/vasp/Performance%20Study%202/VASP%20scripts/Swift_IntelMPI.slurm) + * [VASP on Swift with Open MPI](https://github.com/NREL/HPC/blob/code-examples/applications/vasp/Performance%20Study%202/VASP%20scripts/Swift_OpenMPI.slurm) + * [VASP on Swift with Shared Nodes using Intel MPI](https://github.com/NREL/HPC/blob/code-examples/applications/vasp/Performance%20Study%202/VASP%20scripts/Swift_IntelMPI_shared_nodes.slurm) + * [VASP on Swift with Shared Nodes using Open MPI](https://github.com/NREL/HPC/blob/code-examples/applications/vasp/Performance%20Study%202/VASP%20scripts/Swift_OpenMPI_shared_nodes.slurm) diff --git a/applications/vasp/Performance Study 2/VASP Performance Analysis/VASP_Recommendations_Analysis.ipynb b/applications/vasp/Performance Study 2/VASP Performance Analysis/VASP_Recommendations_Analysis.ipynb index f54da3d90..160153856 100644 --- a/applications/vasp/Performance Study 2/VASP Performance Analysis/VASP_Recommendations_Analysis.ipynb +++ b/applications/vasp/Performance Study 2/VASP Performance Analysis/VASP_Recommendations_Analysis.ipynb @@ -6,7 +6,7 @@ "source": [ "# VASP Recommendations Analysis\n", "\n", - "In this study, the ESIF VASP Benchmarks 
(https://github.com/NREL/ESIFHPC3/tree/master/VASP) 1 and 2 were used. Benchmark 1 is a system of 16 atoms (Cu4In4Se8), and Benchmark 2 is a system of 519 atoms (Ag504C4H10S1). \n", + "In this study, the ESIF VASP Benchmarks (https://github.com/NREL/ESIFHPC3/tree/code-examples/VASP) 1 and 2 were used. Benchmark 1 is a system of 16 atoms (Cu4In4Se8), and Benchmark 2 is a system of 519 atoms (Ag504C4H10S1). \n", "\n", "On Swift, the default builds of VASP installed on the system as modules were used. The Intel MPI build was built with Intel compilers and the mkl math library, and it was accessed via the \"vaspintel\" module. The OpenMPI build was compiled with gnu using gcc and fortran compilers and used OpenMPI's math libraries, and was accessed via the \"vasp\" module. Both builds run VASP 6.1.1. \n", "\n", diff --git a/applications/vasp/Performance Study 2/cpu-bind data/cpu-bind_VASP.ipynb b/applications/vasp/Performance Study 2/cpu-bind data/cpu-bind_VASP.ipynb index c01fa11b3..632318de1 100644 --- a/applications/vasp/Performance Study 2/cpu-bind data/cpu-bind_VASP.ipynb +++ b/applications/vasp/Performance Study 2/cpu-bind data/cpu-bind_VASP.ipynb @@ -4,13 +4,13 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "In this study, the ESIF VASP Benchmark 2 (https://github.com/NREL/ESIFHPC3/tree/master/VASP/bench2) was used to study how the cpu-bind flag affects the way that tasks are assigned to cores throughout the node over the runtime of a VASP job on Swfit and Eagle. Benchmark 1 is a system of 16 atoms (Cu4In4Se8), and Benchmark 2 is a system of 519 atoms (Ag504C4H10S1).\n", + "In this study, the ESIF VASP Benchmark 2 (https://github.com/NREL/ESIFHPC3/tree/code-examples/VASP/bench2) was used to study how the cpu-bind flag affects the way that tasks are assigned to cores throughout the node over the runtime of a VASP job on Swfit and Eagle. 
Benchmark 1 is a system of 16 atoms (Cu4In4Se8), and Benchmark 2 is a system of 519 atoms (Ag504C4H10S1).\n", "\n", "On Swift, the default builds of VASP installed on the system as modules were used. The Intel MPI build was built with Intel compilers and the mkl math library, and it was accessed via the \"vaspintel\" module. The OpenMPI build was compiled with gnu using gcc and fortran compilers and used OpenMPI's math libraries, and was accessed via the \"vasp\" module. Both builds run VASP 6.1.1.\n", "\n", "On Eagle, the default build of VASP installed on the system is an Intel MPI version of VASP. The Intel MPI build was built with Intel compilers and the mkl math library, and it was accessed via the \"vasp\" module. It runs VASP 6.1.2. No Open MPI VASP build is accessible through the default modules on Eagle, but an Open MPI build can be accessed in an environment via \"source /nopt/nrel/apps/210830a/myenv.2108301742, ml vasp/6.1.1-l2mkbb2\". The OpenMPI build was compiled with gnu using gcc and fortran compilers and used OpenMPI's math libraries. 
It runs VASP 6.1.1.\n", "\n", - "The VASP repo (https://github.com/claralarson/HPC/tree/master/applications/vasp/VASP%20Recommendations) contains scripts that can be used to run the Intel MPI and Open MPI builds used in the study to perform calculations on Swift and Eagle.\n", + "The VASP repo (https://github.com/claralarson/HPC/tree/code-examples/applications/vasp/VASP%20Recommendations) contains scripts that can be used to run the Intel MPI and Open MPI builds used in the study to perform calculations on Swift and Eagle.\n", "\n", "The cpu-bind flag can be set in the srun command as follows:\n", "> srun --cpu-bind=cores vasp_std\n", diff --git a/applications/vasp/README.md b/applications/vasp/README.md index df98b53e6..f1eace160 100644 --- a/applications/vasp/README.md +++ b/applications/vasp/README.md @@ -13,14 +13,14 @@ Load VASP with Intel MPI: ``` ml vasp ``` -[script to run VASP on Eagle with Intel MPI](https://github.com/NREL/HPC/blob/master/applications/vasp/Performance%20Study%202/VASP%20scripts/Eagle_IntelMPI.slurm) +[script to run VASP on Eagle with Intel MPI](https://github.com/NREL/HPC/blob/code-examples/applications/vasp/Performance%20Study%202/VASP%20scripts/Eagle_IntelMPI.slurm) Load VASP with Open MPI: ``` source /nopt/nrel/apps/210830a/myenv.2108301742 ml vasp/6.1.1-l2mkbb2 ``` -[script to run VASP on Eagle with Open MPI](https://github.com/NREL/HPC/blob/master/applications/vasp/Performance%20Study%202/VASP%20scripts/Eagle_OpenMPI.slurm) +[script to run VASP on Eagle with Open MPI](https://github.com/NREL/HPC/blob/code-examples/applications/vasp/Performance%20Study%202/VASP%20scripts/Eagle_OpenMPI.slurm) Load the GPU build of VASP: ``` @@ -30,7 +30,7 @@ export LD_LIBRARY_PATH=/nopt/nrel/apps/220511a/install/opt/spack/linux-centos7-s export LD_LIBRARY_PATH=/nopt/nrel/apps/220511a/install/opt/spack/linux-centos7-skylake_avx512/gcc-12.1.0/nvhpc-22.3-c4qk6fly5hls3mjimoxg6vyuy5cc3vti/Linux_x86_64/22.3/compilers/extras/qd/lib:$LD_LIBRARY_PATH export 
PATH=/projects/hpcapps/tkaiser2/vasp/6.3.1/nvhpc_acc:$PATH ``` -[script to run VASP on Eagle on GPU nodes](https://github.com/NREL/HPC/blob/master/applications/vasp/Performance%20Study%202/VASP%20scripts/Eagle_OpenACC_GPU.slurm) +[script to run VASP on Eagle on GPU nodes](https://github.com/NREL/HPC/blob/code-examples/applications/vasp/Performance%20Study%202/VASP%20scripts/Eagle_OpenACC_GPU.slurm) ### On Swift Load VASP with Intel MPI: @@ -42,7 +42,7 @@ ml intel-oneapi-compilers/2021.3.0-piz2usr ml intel-oneapi-mpi/2021.3.0-hcp2lkf ml intel-oneapi-mkl/2021.3.0-giz47h4 ``` -[script to run VASP on Swift with Intel MPI](https://github.com/NREL/HPC/blob/master/applications/vasp/Performance%20Study%202/VASP%20scripts/Swift_IntelMPI.slurm) +[script to run VASP on Swift with Intel MPI](https://github.com/NREL/HPC/blob/code-examples/applications/vasp/Performance%20Study%202/VASP%20scripts/Swift_IntelMPI.slurm) Load VASP with Open MPI: ``` @@ -50,27 +50,27 @@ ml vasp ml slurm/21-08-1-1-o2xw5ti ml openmpi/4.1.1-6vr2flz ``` -[script to run VASP on Swift with Open MPI](https://github.com/NREL/HPC/blob/master/applications/vasp/Performance%20Study%202/VASP%20scripts/Swift_OpenMPI.slurm) +[script to run VASP on Swift with Open MPI](https://github.com/NREL/HPC/blob/code-examples/applications/vasp/Performance%20Study%202/VASP%20scripts/Swift_OpenMPI.slurm) ## VASP Documentation -This repo contains the results of two separate VASP performance studies. The first, Performance Study 1, studies VASP performance on Eagle using the input files provided in the directory. The second, Performance Study 2, studies VASP performance on Eagle and Swift using benchmarks from the ESIF benchmarking suite, which can be found [here](https://github.com/NREL/ESIFHPC3/tree/master/VASP) or in the benchmarks folder in the Performance Harness 2 directory. Each study evaluates performance differently, as described below, and provides recommendations for running VASP most efficiently in the README files. 
The READMEs in each directory contain the following information. +This repo contains the results of two separate VASP performance studies. The first, Performance Study 1, studies VASP performance on Eagle using the input files provided in the directory. The second, Performance Study 2, studies VASP performance on Eagle and Swift using benchmarks from the ESIF benchmarking suite, which can be found [here](https://github.com/NREL/ESIFHPC3/tree/code-examples/VASP) or in the benchmarks folder in the Performance Harness 2 directory. Each study evaluates performance differently, as described below, and provides recommendations for running VASP most efficiently in the README files. The READMEs in each directory contain the following information. -[Performance Study 1](https://github.com/NREL/HPC/tree/master/applications/vasp/Performance%20Study%201) (VASP6 on Eagle): +[Performance Study 1](https://github.com/NREL/HPC/tree/code-examples/applications/vasp/Performance%20Study%201) (VASP6 on Eagle): - Recommendations for setting LREAL - Recommendations for setting cpu pinning - Recommendations for setting NPAR - Recommendations for setting NSIM - Instructions for using the OpenMP version of VASP -- Instructions for running multiple VASP jobs on the same nodes (and [scripts to do so](https://github.com/NREL/HPC/tree/master/applications/vasp/Performance%20Study%201/multi)) +- Instructions for running multiple VASP jobs on the same nodes (and [scripts to do so](https://github.com/NREL/HPC/tree/code-examples/applications/vasp/Performance%20Study%201/multi)) - Runtime comparison using VASP5 -[Performance Study 2](https://github.com/NREL/HPC/tree/master/applications/vasp/Performance%20Study%202#https://github.com/NREL/HPC/tree/master/applications/vasp/Performance%20Study%202) (VASP6 on Eagle and Swift): +[Performance Study 
2](https://github.com/NREL/HPC/tree/code-examples/applications/vasp/Performance%20Study%202#https://github.com/NREL/HPC/tree/code-examples/applications/vasp/Performance%20Study%202) (VASP6 on Eagle and Swift): - Information on how runtime scales with nodecount - Recommendations for chosing the most efficient value of cpus/node -- Recommendations for running VASP on Eagle's GPU nodes (and [scripts to do so](https://github.com/NREL/HPC/tree/master/applications/vasp/Performance%20Study%202/VASP%20scripts)) -- Recommendations for chosing Intel MPI or Open MPI (and [scripts for running with both MPIs](https://github.com/NREL/HPC/tree/master/applications/vasp/Performance%20Study%202/VASP%20scripts)) +- Recommendations for running VASP on Eagle's GPU nodes (and [scripts to do so](https://github.com/NREL/HPC/tree/code-examples/applications/vasp/Performance%20Study%202/VASP%20scripts)) +- Recommendations for chosing Intel MPI or Open MPI (and [scripts for running with both MPIs](https://github.com/NREL/HPC/tree/code-examples/applications/vasp/Performance%20Study%202/VASP%20scripts)) - Recommendations for setting KPAR - Recommendations for setting cpu pinning - Information on k-points scaling -- Instructions for running multiple VASP jobs on the same nodes on Swift (and [scripts to do so](https://github.com/NREL/HPC/tree/master/applications/vasp/Performance%20Study%202/VASP%20scripts)) +- Instructions for running multiple VASP jobs on the same nodes on Swift (and [scripts to do so](https://github.com/NREL/HPC/tree/code-examples/applications/vasp/Performance%20Study%202/VASP%20scripts)) diff --git a/general/Jupyterhub/adv_jupyter/README.md b/general/Jupyterhub/adv_jupyter/README.md index b73315474..49e33ed8b 100644 --- a/general/Jupyterhub/adv_jupyter/README.md +++ b/general/Jupyterhub/adv_jupyter/README.md @@ -11,7 +11,7 @@ Beyond the basics: this advanced Jupyter directory builds upon our Intro to Jupy * Slurm commands: `srun` from a notebook, job status checks, running 
MPI-enabled routines. * Explain `pip install slurm_magic` from inside notebook - * See https://github.com/NREL/HPC/blob/master/general/Jupyterhub/jupyter/dompi.ipynb + * See https://github.com/NREL/HPC/blob/code-examples/general/Jupyterhub/jupyter/dompi.ipynb * Demonstration of using slurm magics to run MNIST * multi-node parallelism with mpi4py diff --git a/general/Jupyterhub/adv_jupyter/mpi4py_tf/dompi.ipynb b/general/Jupyterhub/adv_jupyter/mpi4py_tf/dompi.ipynb index f5fc21f29..a65da9722 100644 --- a/general/Jupyterhub/adv_jupyter/mpi4py_tf/dompi.ipynb +++ b/general/Jupyterhub/adv_jupyter/mpi4py_tf/dompi.ipynb @@ -25,7 +25,7 @@ "\n", "Here is the source:\n", "\n", - "https://github.com/NERSC/slurm-magic/blob/master/slurm_magic.py\n", + "https://github.com/NERSC/slurm-magic/blob/code-examples/slurm_magic.py\n", "\n", "\n", "\n" @@ -327,8 +327,8 @@ "\r\n", "\r\n", "#add Tim's thread mapping module\r\n", - "wget https://raw.githubusercontent.com/NREL/HPC/master/slurm/source/setup.py\r\n", - "wget https://raw.githubusercontent.com/NREL/HPC/master/slurm/source/spam.c\r\n", + "wget https://raw.githubusercontent.com/NREL/HPC/code-examples/slurm/source/setup.py\r\n", + "wget https://raw.githubusercontent.com/NREL/HPC/code-examples/slurm/source/spam.c\r\n", "python3 setup.py install\r\n", "\r\n" ] @@ -413,7 +413,7 @@ "### To get tunnel\n", "\n", "`\n", - "wget https://raw.githubusercontent.com/NREL/HPC/master/slurm/source/tunnel.sh\n", + "wget https://raw.githubusercontent.com/NREL/HPC/code-examples/slurm/source/tunnel.sh\n", "`\n", "\n", "### We're going to get a few examples to play with:" @@ -429,7 +429,7 @@ "name": "stdout", "output_type": "stream", "text": [ - "--2021-05-12 13:55:03-- https://raw.githubusercontent.com/NREL/HPC/master/slurm/source/c_ex02.c\n", + "--2021-05-12 13:55:03-- https://raw.githubusercontent.com/NREL/HPC/code-examples/slurm/source/c_ex02.c\n", "Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 
185.199.108.133, 185.199.111.133, 185.199.110.133, ...\n", "Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.\n", "HTTP request sent, awaiting response... 200 OK\n", @@ -444,7 +444,7 @@ } ], "source": [ - "!wget https://raw.githubusercontent.com/NREL/HPC/master/slurm/source/c_ex02.c" + "!wget https://raw.githubusercontent.com/NREL/HPC/code-examples/slurm/source/c_ex02.c" ] }, { @@ -457,7 +457,7 @@ "name": "stdout", "output_type": "stream", "text": [ - "--2021-05-12 13:55:03-- https://raw.githubusercontent.com/NREL/HPC/master/slurm/source/report.py\n", + "--2021-05-12 13:55:03-- https://raw.githubusercontent.com/NREL/HPC/code-examples/slurm/source/report.py\n", "Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.111.133, 185.199.110.133, ...\n", "Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.\n", "HTTP request sent, awaiting response... 200 OK\n", @@ -472,7 +472,7 @@ } ], "source": [ - "!wget https://raw.githubusercontent.com/NREL/HPC/master/slurm/source/report.py" + "!wget https://raw.githubusercontent.com/NREL/HPC/code-examples/slurm/source/report.py" ] }, { diff --git a/general/Jupyterhub/adv_jupyter/mpi4py_tf/makeit b/general/Jupyterhub/adv_jupyter/mpi4py_tf/makeit index 521c74f75..4d0c5396e 100644 --- a/general/Jupyterhub/adv_jupyter/mpi4py_tf/makeit +++ b/general/Jupyterhub/adv_jupyter/mpi4py_tf/makeit @@ -60,7 +60,7 @@ pip --no-cache-dir install cupy #add Tim's thread mapping module -wget https://raw.githubusercontent.com/NREL/HPC/master/slurm/source/setup.py -wget https://raw.githubusercontent.com/NREL/HPC/master/slurm/source/spam.c +wget https://raw.githubusercontent.com/NREL/HPC/code-examples/slurm/source/setup.py +wget https://raw.githubusercontent.com/NREL/HPC/code-examples/slurm/source/spam.c python3 setup.py install diff --git a/general/Jupyterhub/adv_jupyter/mpi4py_tf/mninstcu.ipynb 
b/general/Jupyterhub/adv_jupyter/mpi4py_tf/mninstcu.ipynb index 6fb9f5bb7..351338520 100644 --- a/general/Jupyterhub/adv_jupyter/mpi4py_tf/mninstcu.ipynb +++ b/general/Jupyterhub/adv_jupyter/mpi4py_tf/mninstcu.ipynb @@ -33,7 +33,7 @@ "Here is the source:\n", "`\n", "\n", - "https://github.com/NERSC/slurm-magic/blob/master/slurm_magic.py\n", + "https://github.com/NERSC/slurm-magic/blob/code-examples/slurm_magic.py\n", "\n", "\n", "\n" @@ -1908,8 +1908,8 @@ "\r\n", "\r\n", "#add Tim's thread mapping module\r\n", - "wget https://raw.githubusercontent.com/NREL/HPC/master/slurm/source/setup.py\r\n", - "wget https://raw.githubusercontent.com/NREL/HPC/master/slurm/source/spam.c\r\n", + "wget https://raw.githubusercontent.com/NREL/HPC/code-examples/slurm/source/setup.py\r\n", + "wget https://raw.githubusercontent.com/NREL/HPC/code-examples/slurm/source/spam.c\r\n", "python3 setup.py install\r\n", "\r\n" ] diff --git a/general/Jupyterhub/adv_jupyter/mpi4py_tf/spackBuild b/general/Jupyterhub/adv_jupyter/mpi4py_tf/spackBuild index 27fe2b4b4..8378d99f0 100755 --- a/general/Jupyterhub/adv_jupyter/mpi4py_tf/spackBuild +++ b/general/Jupyterhub/adv_jupyter/mpi4py_tf/spackBuild @@ -7,7 +7,7 @@ # Make a python/jupyter/mpi4py/pandas/tensorflow/cupy environment using spack. # We also install a bare version of R with Rmpi. The R and Python versions of -# MPI should work together. See: https://github.com/timkphd/examples/tree/master/mpi/mixedlang +# MPI should work together. 
See: https://github.com/timkphd/examples/tree/code-examples/mpi/mixedlang # for examples # ********** Install directory ********** @@ -37,7 +37,7 @@ cd $IDIR #If you don't have tymer use this poor man's version command -v tymer >/dev/null 2>&1 || alias tymer='python -c "import sys ;import time ;print(time.time(),time.asctime(),sys.argv[1:])" ' #You can get the full version from -#https://raw.githubusercontent.com/timkphd/examples/master/tims_tools/tymer +#https://raw.githubusercontent.com/timkphd/examples/code-examples/tims_tools/tymer # This is where tymer will put its data so we clean it out rm ~/sbuild @@ -219,8 +219,8 @@ tymer ~/sbuild done cupy #Add Tim's thread mapping module -wget https://raw.githubusercontent.com/NREL/HPC/master/slurm/source/setup.py -wget https://raw.githubusercontent.com/NREL/HPC/master/slurm/source/spam.c +wget https://raw.githubusercontent.com/NREL/HPC/code-examples/slurm/source/setup.py +wget https://raw.githubusercontent.com/NREL/HPC/code-examples/slurm/source/spam.c python3 setup.py install tymer ~/sbuild done spam diff --git a/general/bash/cheatsheet.sh b/general/bash/cheatsheet.sh index 7d5aa7092..473bf28f1 100644 --- a/general/bash/cheatsheet.sh +++ b/general/bash/cheatsheet.sh @@ -1,6 +1,6 @@ #!/bin/bash -# Copied from https://raw.githubusercontent.com/LeCoupa/awesome-cheatsheets/master/languages/bash.sh +# Copied from https://raw.githubusercontent.com/LeCoupa/awesome-cheatsheets/code-examples/languages/bash.sh ############################################################################## # SHORTCUTS diff --git a/general/building-mpi-applications/README.md b/general/building-mpi-applications/README.md index 620947bd5..227ff8cb7 100644 --- a/general/building-mpi-applications/README.md +++ b/general/building-mpi-applications/README.md @@ -1,3 +1,3 @@ # building-mpi-applications -See [Plexos walkthrough](https://github.com/NREL/HPC/tree/master/applications/plexos-hpc-walkthrough) as an Example. 
Update this readme file using the [example Plexos readme](https://github.com/NREL/HPC/blob/master/applications/plexos-hpc-walkthrough/README.md) +See [Plexos walkthrough](https://github.com/NREL/HPC/tree/code-examples/applications/plexos-hpc-walkthrough) as an example. Update this readme file using the [example Plexos readme](https://github.com/NREL/HPC/blob/code-examples/applications/plexos-hpc-walkthrough/README.md) diff --git a/general/software-environment-basics/conda-how-to.md b/general/software-environment-basics/conda-how-to.md index cace0d974..5253ab317 100644 --- a/general/software-environment-basics/conda-how-to.md +++ b/general/software-environment-basics/conda-how-to.md @@ -11,7 +11,7 @@ Table of Contents ### Creating a custom environment -Custom environments can be created with [conda create](https://docs.conda.io/projects/conda/en/latest/commands/create.html) or [conda env create](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#creating-an-environment-from-an-environment-yml-file). `conda create` accepts package names in the command path, whereas `conda env create` requires the use of an environment.yml file. This [environment.yml](https://github.nrel.gov/hsorense/conda-peregrine/blob/master/environment.yml) is used to create Eagle's default conda environment. It can be copied and modified for a custom enviornment. Be sure to change the name to something other than default or root, or omit it altogether and use the command line option. +Custom environments can be created with [conda create](https://docs.conda.io/projects/conda/en/latest/commands/create.html) or [conda env create](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#creating-an-environment-from-an-environment-yml-file). `conda create` accepts package names in the command path, whereas `conda env create` requires the use of an environment.yml file. 
This [environment.yml](https://github.nrel.gov/hsorense/conda-peregrine/blob/code-examples/environment.yml) is used to create Eagle's default conda environment. It can be copied and modified for a custom environment. Be sure to change the name to something other than default or root, or omit it altogether and use the command line option. The default location for custom environments is $HOME/.conda-envs . A custom directory can be used with the command line options for path and name. Environments tend to use large amounts of disk. If you are getting messages about going over the quota but can't find where the usage is, check the environments directory and remove unused ones. diff --git a/languages/fortran/Fortran90/f90.md b/languages/fortran/Fortran90/f90.md index d9e0ffc11..6c6ccea09 100644 --- a/languages/fortran/Fortran90/f90.md +++ b/languages/fortran/Fortran90/f90.md @@ -1887,7 +1887,7 @@ end - Mutation - Nothing new in either of these files - [Source and makefile "git"](source) -- [Source and makefile "*tgz"](https://github.com/timkphd/examples/raw/master/fort/90/source/archive.tgz) +- [Source and makefile "*tgz"](https://github.com/timkphd/examples/raw/master/fort/90/source/archive.tgz) - - - - - - @@ -2908,7 +2908,7 @@ - [http://www.nsc.liu.se/~boein/f77to90/](http://www.nsc.liu.se/~boein/f77to90/) Fortran 90 for the Fortran 77 Programmer - Fortran 90 Handbook Complete ANSI/ISO Reference. Jeanne Adams, Walt Brainerd, Jeanne Martin, Brian Smith, Jerrold Wagener - Fortran 90 Programming. T. 
Ellis, Ivor Philips, Thomas Lahey -- [https://github.com/llvm/llvm-project/blob/master/flang/docs/FortranForCProgrammers.md](https://github.com/llvm/llvm-project/blob/master/flang/docs/FortranForCProgrammers.md) +- [https://github.com/llvm/llvm-project/blob/master/flang/docs/FortranForCProgrammers.md](https://github.com/llvm/llvm-project/blob/master/flang/docs/FortranForCProgrammers.md) - [FFT stuff](../mkl/) - [Fortran 95 and beyond](../95/) diff --git a/languages/julia/demos/notebooks/PyJulia_Demo.ipynb b/languages/julia/demos/notebooks/PyJulia_Demo.ipynb index 2f5d4154f..9128b9c34 100644 --- a/languages/julia/demos/notebooks/PyJulia_Demo.ipynb +++ b/languages/julia/demos/notebooks/PyJulia_Demo.ipynb @@ -30,7 +30,7 @@ "4. **Run the cells under Install PyJulia**.\n", "\n", "To run on Eagle:\n", - "1. See the instruction [here](https://github.com/NREL/HPC/blob/master/languages/python/jupyter/Kernels_and_Servers.ipynb) for running jupyter notebooks on Eagle.\n", + "1. See the instruction [here](https://github.com/NREL/HPC/blob/code-examples/languages/python/jupyter/Kernels_and_Servers.ipynb) for running jupyter notebooks on Eagle.\n", "2. See the instruction [here](../../how_to_guides/build_Julia.md) for building Julia on Eagle.\n", "3. Run the cells under Install PyJulia." ] diff --git a/languages/julia/how-to-guides/install-Julia.md b/languages/julia/how-to-guides/install-Julia.md index e4b6593e2..1e2e12c07 100644 --- a/languages/julia/how-to-guides/install-Julia.md +++ b/languages/julia/how-to-guides/install-Julia.md @@ -60,7 +60,7 @@ else: ### Prerequisites -All the [required build tools and libraries](https://github.com/JuliaLang/julia/blob/master/doc/build/build.md#required-build-tools-and-external-libraries) are available on Eagle either by default or through modules. The needed modules are covered in the instructions. 
+All the [required build tools and libraries](https://github.com/JuliaLang/julia/blob/master/doc/build/build.md#required-build-tools-and-external-libraries) are available on Eagle either by default or through modules. The needed modules are covered in the instructions. ### Terms * `JULIA_HOME` is the base directory of julia source code (initially called `julia` after `git clone`) diff --git a/languages/python/anaconda/conda_tutorial.slides.html b/languages/python/anaconda/conda_tutorial.slides.html index 59d1f968c..8154232f0 100644 --- a/languages/python/anaconda/conda_tutorial.slides.html +++ b/languages/python/anaconda/conda_tutorial.slides.html @@ -70,7 +70,7 @@ /*! * Bootstrap v3.3.7 (http://getbootstrap.com) * Copyright 2011-2016 Twitter, Inc. - * Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE) + * Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE) */ /*! normalize.css v3.0.3 | MIT License | github.com/necolas/normalize.css */ html { @@ -250,7 +250,7 @@ th { padding: 0; } -/*! Source: https://github.com/h5bp/html5-boilerplate/blob/master/src/css/main.css */ +/*! Source: https://github.com/h5bp/html5-boilerplate/blob/master/src/css/main.css */ @media print { *, *:before, @@ -8411,7 +8411,7 @@ .fa-cc-visa:before { content: "\f1f0"; } -.fa-cc-mastercard:before { +.fa-cc-mastercard:before { content: "\f1f1"; } .fa-cc-discover:before { content: "\f1f2"; } diff --git a/languages/python/openai_rllib/custom_gym_env/README.md b/languages/python/openai_rllib/custom_gym_env/README.md index 9efe10b04..d6c09c61d 100644 --- a/languages/python/openai_rllib/custom_gym_env/README.md +++ b/languages/python/openai_rllib/custom_gym_env/README.md @@ -75,7 +75,7 @@ Function `register_env` takes two arguments: env_name = "custom-env" register_env(env_name, lambda config: BasicEnv()) ``` -Once again, RLlib provides [detailed explanation](https://docs.ray.io/en/master/rllib-env.html) of how `register_env` works. 
+Once again, RLlib provides a [detailed explanation](https://docs.ray.io/en/master/rllib-env.html) of how `register_env` works. The `tune.run` function, instead of `args.name_env`, it uses the `env_name` defined above. diff --git a/languages/python/openai_rllib/simple-example-gpu/README.md b/languages/python/openai_rllib/simple-example-gpu/README.md index 8c2e95836..fbd8c293d 100644 --- a/languages/python/openai_rllib/simple-example-gpu/README.md +++ b/languages/python/openai_rllib/simple-example-gpu/README.md @@ -11,13 +11,13 @@ conda env create --prefix=//env_example_gpu -f env_exa ### **Only for Eagle users:** Creating Anaconda environment using Optimized Tensorflow -NREL's HPC group has recently created [a set of optimized Tensorflow drivers](https://github.com/NREL/HPC/tree/master/workshops/Optimized_TF) that maximize the efficiency of utilizing Eagle's Tesla V100 GPU units. The drivers are created for various Python 3 and Tensorflow 2.x.x versions. +NREL's HPC group has recently created [a set of optimized Tensorflow drivers](https://github.com/NREL/HPC/tree/code-examples/workshops/Optimized_TF) that maximize the efficiency of utilizing Eagle's Tesla V100 GPU units. The drivers are created for various Python 3 and Tensorflow 2.x.x versions. -The repo provides an [Anaconda environment version](https://github.com/erskordi/HPC/blob/HPC-RL/languages/python/openai_rllib/simple-example-gpu/env_example_optimized_tf.yml) for using these drivers. This environment is based on one of the [example environments](https://github.com/NREL/HPC/blob/master/workshops/Optimized_TF/py37tf22.yml) provided in the [Optimized TF repo](https://github.com/NREL/HPC/tree/master/workshops/Optimized_TF). +The repo provides an [Anaconda environment version](https://github.com/erskordi/HPC/blob/HPC-RL/languages/python/openai_rllib/simple-example-gpu/env_example_optimized_tf.yml) for using these drivers. 
This environment is based on one of the [example environments](https://github.com/NREL/HPC/blob/code-examples/workshops/Optimized_TF/py37tf22.yml) provided in the [Optimized TF repo](https://github.com/NREL/HPC/tree/code-examples/workshops/Optimized_TF). **The provided Anaconda environment currently works for Python 3.7, Tensorflow 2.2, and the latest Ray version** -*Make sure to follow the [instructions for installing this particular environment](https://github.com/NREL/HPC/tree/master/workshops/Optimized_TF) explicitly!* +*Make sure to follow the [instructions for installing this particular environment](https://github.com/NREL/HPC/tree/code-examples/workshops/Optimized_TF) explicitly!* ## Allocate GPU node diff --git a/languages/python/openai_rllib/simple-example/README.md b/languages/python/openai_rllib/simple-example/README.md index 896167017..4e60e27c1 100644 --- a/languages/python/openai_rllib/simple-example/README.md +++ b/languages/python/openai_rllib/simple-example/README.md @@ -2,7 +2,7 @@ RL algorithms are notorious for the amount of data they need to collect in order to learn policies. The more data collected, the better the training will be. The best way to do it is to run many Gym instances in parallel and collecting experience, and this is where RLlib assists. -[RLlib](https://docs.ray.io/en/master/rllib.html) is an open-source library for reinforcement learning that offers both high scalability and a unified API for a variety of applications. It supports all known deep learning frameworks such as Tensorflow, Pytorch, although most parts are framework-agnostic and can be used by either one. +[RLlib](https://docs.ray.io/en/master/rllib.html) is an open-source library for reinforcement learning that offers both high scalability and a unified API for a variety of applications. It supports all known deep learning frameworks such as Tensorflow and Pytorch, although most parts are framework-agnostic and can be used by either one. 
The RL policy learning examples provided in this tutorial demonstrate the RLlib abilities. For convenience, the `CartPole-v0` OpenAI Gym environment will be used. @@ -15,7 +15,7 @@ Begin trainer by importing the `ray` package: import ray from ray import tune ``` -`Ray` consists of an API readily available for building [distributed applications](https://docs.ray.io/en/master/index.html). On top of it, there are several problem-solving libraries, one of which is RLlib. +`Ray` consists of an API readily available for building [distributed applications](https://docs.ray.io/en/master/index.html). On top of it, there are several problem-solving libraries, one of which is RLlib. `Tune` is another one of `Ray`'s libraries for scalable hyperparameter tuning. All RLlib trainers (scripts for RL agent training) are compatible with Tune API, making experimenting easy and streamlined. @@ -81,7 +81,7 @@ tune.run( ``` That's it! The RLlib trainer is ready! -Note here that, except default hyperparameters like those above, [every RL algorithm](https://docs.ray.io/en/master/rllib-algorithms.html#available-algorithms-overview) provided by RLlib has its own hyperparameters and their default values that can be tuned in advance. +Note here that, except for default hyperparameters like those above, [every RL algorithm](https://docs.ray.io/en/master/rllib-algorithms.html#available-algorithms-overview) provided by RLlib has its own hyperparameters and their default values that can be tuned in advance. The code of the trainer in this example can be found [in the repo](https://github.com/erskordi/HPC/blob/HPC-RL/languages/python/openai_rllib/simple-example/simple_trainer.py). @@ -407,7 +407,7 @@ The following image shows the agent training progress, in terms of reward convergence.

Obviously, training using all CPU cores on a node led to faster convergence to the optimal value. -It is necessary to say here that CartPole is a simple environment where the optimal reward value (200) can be easily reached even when using a single CPU core on a personal computer. The power of using multiple cores becomes more apparent in cases of more complex environments (such as the [Atari environments](https://gym.openai.com/envs/#atari)). RLlib website also gives examples of the [scalability benefits](https://docs.ray.io/en/master/rllib-algorithms.html#ppo) for many state-of-the-art RL algorithms. +It is necessary to say here that CartPole is a simple environment where the optimal reward value (200) can be easily reached even when using a single CPU core on a personal computer. The power of using multiple cores becomes more apparent in cases of more complex environments (such as the [Atari environments](https://gym.openai.com/envs/#atari)). The RLlib website also gives examples of the [scalability benefits](https://docs.ray.io/en/master/rllib-algorithms.html#ppo) for many state-of-the-art RL algorithms. **Supplemental notes:** As you noticed, when using RLlib for RL training, there is no need to `import gym`, as we did in the non-training example, because RLlib recognizes automatically all benchmark OpenAI Gym environments. Even when you create your own custom-made Gym environments, RLlib provides proper functions with which you can register your environment before training. diff --git a/languages/python/pyomo/README.md b/languages/python/pyomo/README.md index 0da340221..1ee27e09d 100644 --- a/languages/python/pyomo/README.md +++ b/languages/python/pyomo/README.md @@ -72,7 +72,7 @@ five major components: Pyomo has modeling objects for each of these components (as well as a few extra). 
Below we demonstrate their use on the [p-median problem](https://en.wikipedia.org/wiki/Facility_location_problem) adapted from -[this example](https://github.com/Pyomo/PyomoGallery/blob/master/p_median/p-median.py) +[this example](https://github.com/Pyomo/PyomoGallery/blob/master/p_median/p-median.py) utilizing a `ConcreteModel` and demonstrating some of the modeling flexibility in Pyomo. This example is also available as a [stand-alone python module](./p_median.py). ```python @@ -138,7 +138,7 @@ to use an external solver (linked through Pyomo) to *solve* or *optimize* this model. A more complex example `ConcreteModel` utilizing data brought in from a json -file is available [here](https://github.com/power-grid-lib/pglib-uc/blob/master/uc_model.py). +file is available [here](https://github.com/power-grid-lib/pglib-uc/blob/master/uc_model.py). # Solvers diff --git a/languages/python/pyomo/p_median.py b/languages/python/pyomo/p_median.py index ca144af6e..c318e08d5 100644 --- a/languages/python/pyomo/p_median.py +++ b/languages/python/pyomo/p_median.py @@ -1,4 +1,4 @@ -# Adapted from: https://github.com/Pyomo/PyomoGallery/blob/master/p_median/p-median.py +# Adapted from: https://github.com/Pyomo/PyomoGallery/blob/master/p_median/p-median.py import pyomo.environ as pyo import random