
PeleC: An adaptive mesh refinement solver for compressible reacting flows

Documentation | Nightly Test Results | PeleC Citation | Pele Citation

Getting Started

To compile and run PeleC, one needs a C++ compiler that supports the C++17 standard. A hierarchical strategy for parallelism is supported, based on MPI, MPI + OpenMP, or MPI + GPU (CUDA/HIP/DPC++). The code should work with all major MPI and OpenMP implementations. PeleC should build and run with no modifications to the make system on a Linux system with GNU compilers version 7 or newer. CMake, although used mostly for testing, is also an option for building the code.
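The parallel backend is selected at compile time. As a rough sketch (these are standard AMReX make variables, and names can vary slightly across AMReX versions; the defaults for each case live in its GNUmakefile, so treat the exact set below as illustrative):

make COMP=gnu -j                              # serial build with GNU compilers
make COMP=gnu USE_MPI=TRUE -j                 # MPI
make COMP=gnu USE_MPI=TRUE USE_OMP=TRUE -j    # MPI + OpenMP
make COMP=gnu USE_MPI=TRUE USE_CUDA=TRUE -j   # MPI + GPU (NVIDIA)
make COMP=gnu USE_MPI=TRUE USE_HIP=TRUE -j    # MPI + GPU (AMD)
make COMP=gnu USE_MPI=TRUE USE_SYCL=TRUE -j   # MPI + GPU (Intel, DPC++)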

To build PeleC (using the default submodules for AMReX, PelePhysics, and SUNDIALS) and run a sample 3D flame problem:

git clone --recursive git@github.com:AMReX-Combustion/PeleC.git
cd PeleC/Exec/RegTests/PMF
make TPLrealclean && make realclean && make TPL && make -j
./Pele3d.xxx.yyy.ex example.inp
  1. In the exec line above, xxx.yyy is a tag identifying your compiler and various build options, and will vary across platforms. (Note that GNU compilers must be at least version 7, and the MPI implementation should support at least version 3 of the MPI standard.)
  2. The example is a 3D premixed flame, flowing vertically upward through the domain with no gravity. The lateral boundaries are periodic. A detailed hydrogen model is used. The solution is initialized with a wrinkled (perturbed) 2D steady flame solution computed using the PREMIX code. Two levels of solution-adaptive refinement are automatically triggered by the presence of the flame intermediate, HO2. (A sketch of typical input parameters appears after this list.)
  3. In addition to informative output to the terminal, periodic plotfiles are written in the run folder. These may be viewed with AMReX's Amrvis, VisIt, or ParaView:
    1. In VisIt, direct the File->Open dialogue to select the file named "Header" that is inside each plotfile folder.
    2. In ParaView, navigate to the case directory and open the plotfile folder directly.
    3. With Amrvis, run amrvis3d plt00030, for example.
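All runtime behavior is controlled by the input file passed on the command line. The lines below are a hedged sketch of the kinds of AMReX-style parameters involved, not a copy of the actual example.inp in the PMF directory, which is authoritative:

max_step = 30                 # stop after 30 coarse time steps
geometry.is_periodic = 1 1 0  # periodic lateral boundaries
amr.max_level = 2             # allow two levels of adaptive refinement
amr.ref_ratio = 2 2           # factor-of-two refinement between levels
amr.plot_int = 10             # write a plotfile every 10 coarse steps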

Dependencies

PeleC is built on the AMReX and PelePhysics libraries. PeleC also requires the SUNDIALS ODE solver library.
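AMReX, PelePhysics, and SUNDIALS are pulled in as git submodules by default. If the repository was cloned without the --recursive flag, the submodules can be fetched afterwards:

git submodule update --init --recursive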

Development model

To add a new feature to PeleC, the procedure is:

  1. Create a branch for the new feature (locally):

    git checkout -b AmazingNewFeature
    
  2. Develop the feature, regularly updating your AmazingNewFeature branch against the development branch (the commands below do this via rebase):

    git commit -m "Developed AmazingNewFeature"
    git checkout development
    git pull                      # fix any identified conflicts between local and remote branches of "development"
    git checkout AmazingNewFeature
    git rebase development        # fix any identified conflicts between "development" and "AmazingNewFeature"
    
  3. Build and run

    1. Build and run the full test suite using CMake and CTest (see the Build directory for an example script). Please do not introduce warnings. PeleC is checked against clang-tidy and cppcheck in the CI. To use cppcheck and clang-tidy locally, use these CMake options:

      -DPELE_ENABLE_CLANG_TIDY:BOOL=ON
      -DPELE_ENABLE_CPPCHECK:BOOL=ON
      
    2. Run clang-tidy by using an LLVM compiler and ensuring clang-tidy is found during configure; make will then run clang-tidy alongside compilation. Once cppcheck has been found during configure, the make cppcheck target runs its checks on the compile_commands.json database generated by CMake. More information on these checks can be found in the CI files used for GitHub Actions in the .github/workflows directory. A configure-and-build sketch appears after this list.

    3. To easily format all source files before commit, use the following command:

      find Source Exec \( -name "*.cpp" -o -name "*.H" \) -exec clang-format -i {} +
      
  4. If you don't already have a fork of the PeleC repository, follow the GitHub instructions to create one. Then, push your feature branch to your forked PeleC repository:

    git remote add remotename git@github.com:remoteurl # add a remote pointing to the user's fork
    git push -u remotename AmazingNewFeature # Note: -u option required only for the first push of a new branch
    
  5. Submit a pull request to AMReX-Combustion/PeleC through GitHub, and make sure you are requesting a merge against the development branch.

  6. Check the CI status on GitHub and make sure the tests pass for the pull request.
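For the static analysis in step 3, a minimal local workflow might look like the following sketch (the compiler choice and build directory name are assumptions; the two PELE_ENABLE options and the cppcheck target come from the steps above):

# Configure with an LLVM compiler so clang-tidy can run alongside compilation
cmake -S . -B build -DCMAKE_CXX_COMPILER=clang++ \
      -DPELE_ENABLE_CLANG_TIDY:BOOL=ON \
      -DPELE_ENABLE_CPPCHECK:BOOL=ON
cmake --build build -j                  # clang-tidy diagnostics appear with the compiler output
cmake --build build --target cppcheck   # runs cppcheck on the compile_commands.json database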

Note

GitHub CI uses the CMake build system and CTest to test the core source files of PeleC. If you are adding source files, you will need to add them to the list of source files in the CMake directory for the tests to pass. Make sure to add them to the GNU make makefiles as well; a sketch is shown below.
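As a rough illustration, with NewFeature.cpp as a hypothetical new file (CEXE_sources is the standard AMReX GNU make variable; the CMake target name below is a placeholder, so check the CMake directory for the actual source lists):

# GNU make: append the file in the relevant Make.package
CEXE_sources += NewFeature.cpp

# CMake (sketch): append the file to the executable target's sources
target_sources(pelec PRIVATE NewFeature.cpp)  # 'pelec' is a placeholder target name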

Test Status

Nightly test results for PeleC against multiple compilers and machines can be seen on its CDash page.

Documentation

The full documentation for Pele exists in the Docs directory; at present it is maintained inline using Sphinx. With Sphinx, documentation is written in reStructuredText (reST), a markup language similar to Markdown, but with somewhat greater capabilities (and idiosyncrasies). There are several primers available to get started. One gotcha is that indentation matters. To build the documentation:

$ cd Docs && mkdir build && cd build && sphinx-build -M html ../sphinx .
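With the -M html option above, the generated pages are placed in the html subdirectory of the build folder; open html/index.html in a browser to view them.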

Citation

To cite the PeleC software and refer to its computational performance, use the following references for PeleC and the Pele software suite:

@article{PeleC_IJHPCA,
  author = {Marc T {Henry de Frahan} and Jon S Rood and Marc S Day and Hariswaran Sitaraman and Shashank Yellapantula and Bruce A Perry and Ray W Grout and Ann Almgren and Weiqun Zhang and John B Bell and Jacqueline H Chen},
  title = {{PeleC: An adaptive mesh refinement solver for compressible reacting flows}},
  journal = {The International Journal of High Performance Computing Applications},
  volume = {37},
  number = {2},
  pages = {115-131},
  year = {2022},
  doi = {10.1177/10943420221121151},
  url = {https://doi.org/10.1177/10943420221121151}
}

@inproceedings{PeleSoftware,
  author = {Marc T. {Henry de Frahan} and Lucas Esclapez and Jon Rood and Nicholas T. Wimer and Paul Mullowney and Bruce A. Perry and Landon Owen and Hariswaran Sitaraman and Shashank Yellapantula and Malik Hassanaly and Mohammad J. Rahimi and Michael J. Martin and Olga A. Doronina and Sreejith N. A. and Martin Rieth and Wenjun Ge and Ramanan Sankaran and Ann S. Almgren and Weiqun Zhang and John B. Bell and Ray Grout and Marc S. Day and Jacqueline H. Chen},
  title = {The Pele Simulation Suite for Reacting Flows at Exascale},
  booktitle = {Proceedings of the 2024 SIAM Conference on Parallel Processing for Scientific Computing},
  pages = {13-25},
  year = {2024},
  doi = {10.1137/1.9781611977967.2},
  url = {https://epubs.siam.org/doi/abs/10.1137/1.9781611977967.2}
}

Additionally, to cite the application of PeleC to compressible reacting flows, use the following Combustion and Flame journal article:

@article{Sitaraman2021,
  author = {Hariswaran Sitaraman and Shashank Yellapantula and Marc T. {Henry de Frahan} and Bruce Perry and Jon Rood and Ray Grout and Marc Day},
  title = {Adaptive mesh based combustion simulations of direct fuel injection effects in a supersonic cavity flame-holder},
  journal = {Combustion and Flame},
  volume = {232},
  pages = {111531},
  year = {2021},
  issn = {0010-2180},
  doi = {10.1016/j.combustflame.2021.111531},
  url = {https://www.sciencedirect.com/science/article/pii/S0010218021002741},
}

Acknowledgment

This research was supported by the Exascale Computing Project (ECP), Project Number: 17-SC-20-SC, a collaborative effort of two DOE organizations -- the Office of Science and the National Nuclear Security Administration -- responsible for the planning and preparation of a capable exascale ecosystem -- including software, applications, hardware, advanced system engineering, and early testbed platforms -- to support the nation's exascale computing imperative.
