Code Spell (AMReX-Codes#3563)
Add a github action and fix various spellings.

---------

Co-authored-by: Jean M. Sexton <[email protected]>
WeiqunZhang and jmsexton03 authored Sep 28, 2023
1 parent d45933c commit 6c8122d
Showing 53 changed files with 153 additions and 94 deletions.
33 changes: 33 additions & 0 deletions .codespell-ignore-words
@@ -0,0 +1,33 @@
abot
alo
apoints
asend
ba
bloc
blocs
boxs
cant
ccache
clen
compex
couldnt
fromm
frop
geometrys
hist
hsi
infor
inout
ist
lsit
nd
nineth
parm
parms
pres
ptd
recuse
siz
structed
te
thi
3 changes: 3 additions & 0 deletions .codespellrc
@@ -0,0 +1,3 @@
[codespell]
skip = .git,*.ipynb,*.bib,*.ps,*.patch,*~,CHANGES,*/Extern/SWFFT,*/Extern/hpgmg,./tmp_install_dir,./installdir,*/build,*/tmp_build_dir
ignore-words = .codespell-ignore-words
4 changes: 2 additions & 2 deletions .github/workflows/cleanup-cache-postpr.yml
@@ -8,7 +8,7 @@ on:

jobs:
CleanUpCcacheCachePostPR:
- name: Clean Up Ccahe Cache Post PR
+ name: Clean Up Ccache Cache Post PR
runs-on: ubuntu-latest
permissions:
actions: write
@@ -17,7 +17,7 @@ jobs:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
steps:
- uses: actions/checkout@v3
- - name: Clean up ccahe
+ - name: Clean up ccache
run: |
gh extension install actions/gh-actions-cache
4 changes: 2 additions & 2 deletions .github/workflows/cleanup-cache.yml
@@ -8,7 +8,7 @@ on:

jobs:
CleanUpCcacheCache:
- name: Clean Up Ccahe Cache for ${{ github.event.workflow_run.name }}
+ name: Clean Up Ccache Cache for ${{ github.event.workflow_run.name }}
runs-on: ubuntu-latest
permissions:
actions: write
@@ -17,7 +17,7 @@ jobs:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
steps:
- uses: actions/checkout@v3
- - name: Clean up ccahe
+ - name: Clean up ccache
run: |
gh extension install actions/gh-actions-cache
23 changes: 23 additions & 0 deletions .github/workflows/codespell.yml
@@ -0,0 +1,23 @@
name: codespell

on: [push, pull_request]

concurrency:
  group: ${{ github.ref }}-${{ github.head_ref }}-codespell
  cancel-in-progress: true

jobs:
  codespell:
    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v3

    - name: Install codespell
      run: |
        sudo apt-get update
        sudo apt-get install -y --no-install-recommends python3-pip
        pip3 install --user codespell
    - name: Run codespell
      run: codespell
2 changes: 1 addition & 1 deletion Docs/sphinx_documentation/source/AmrCore.rst
@@ -238,7 +238,7 @@ Within AMReX_Interpolater.cpp/H are the derived classes:

- :cpp:`FaceLinear`

- - :cpp:`FaceDivFree`: This is more accurately a divergence-preserving interpolation on face centered data, i.e., it ensures the divergence of the fine ghost cells match the value of the divergence of the underlying coarse cell. All fine cells overlying a given coarse cell will have the same divergence, even when the coarse grid divergence is spatially varying. Note that when using this with :cpp:`FillPatch` for time sub-cycling, the coarse grid times may not match the fine grid time, in which case :cpp:`FillPatch` will create coarse values at the fine time before calling this interpolation and the result of the :cpp:`FillPatch` is *not* garanteed to preserve the original divergence.
+ - :cpp:`FaceDivFree`: This is more accurately a divergence-preserving interpolation on face centered data, i.e., it ensures the divergence of the fine ghost cells match the value of the divergence of the underlying coarse cell. All fine cells overlying a given coarse cell will have the same divergence, even when the coarse grid divergence is spatially varying. Note that when using this with :cpp:`FillPatch` for time sub-cycling, the coarse grid times may not match the fine grid time, in which case :cpp:`FillPatch` will create coarse values at the fine time before calling this interpolation and the result of the :cpp:`FillPatch` is *not* guaranteed to preserve the original divergence.

These Interpolaters can be executed on CPU or GPU, with certain limitations:

6 changes: 3 additions & 3 deletions Src/AmrCore/AMReX_FillPatcher.H
@@ -30,7 +30,7 @@ namespace amrex {
* (1) This class is for filling data during time stepping, not during
* regrid. The fine level data passed as input must have the same BoxArray
* and DistributionMapping as the destination. It's OK they are the same
- * MultiFab. For AmrLevel based codes, AmrLevel::FillPatcherFill wil try to
+ * MultiFab. For AmrLevel based codes, AmrLevel::FillPatcherFill will try to
* use FillPatcher if it can, and AmrLevel::FillPatch will use the fillpatch
* functions.
*
@@ -64,7 +64,7 @@ namespace amrex {
* AMR levels. This operation at the coarse/fine boundary is non-trivial
* for RK orders higher than 2. Note that it is expected that time stepping
* on the coarse level is perform before any fine level time stepping, and
- * it's the user's reponsibility to properly create and destroy this object.
+ * it's the user's responsibility to properly create and destroy this object.
* See AmrLevel::RK for an example of using the RungeKutta functions and
* FillPatcher together.
*/
@@ -419,7 +419,7 @@ void FillPatcher<MF>::storeRKCoarseData (Real /*time*/, Real dt, MF const& S_old
auto const& fpc = getFPinfo();

for (auto& tmf : m_cf_crse_data) {
- tmf.first = std::numeric_limits<Real>::lowest(); // because we dont' need it
+ tmf.first = std::numeric_limits<Real>::lowest(); // because we don't need it
tmf.second = std::make_unique<MF>(make_mf_crse_patch<MF>(fpc, m_ncomp));
}
m_cf_crse_data[0].second->ParallelCopy(S_old, m_cgeom.periodicity());
6 changes: 3 additions & 3 deletions Src/AmrCore/AMReX_FluxRegister.H
@@ -171,7 +171,7 @@ public:
const Geometry& geom);

/**
- * \brief /in this version the area is assumed to muliplied into the flux (if not, use scale to fix)
+ * \brief /in this version the area is assumed to multiplied into the flux (if not, use scale to fix)
*
* \param mflx
* \param dir
@@ -192,7 +192,7 @@ public:
/**
* \brief Increment flux correction with fine data.
*
- * /in this version the area is assumed to muliplied into the flux (if not, use scale to fix)
+ * /in this version the area is assumed to multiplied into the flux (if not, use scale to fix)
*
* \param mflx
* \param dir
@@ -230,7 +230,7 @@ public:
/**
* \brief Increment flux correction with fine data.
*
- * in this version the area is assumed to muliplied into the flux (if not, use scale to fix)
+ * in this version the area is assumed to multiplied into the flux (if not, use scale to fix)
*
* \param flux
* \param dir
6 changes: 3 additions & 3 deletions Src/Base/AMReX.cpp
@@ -408,8 +408,8 @@ amrex::Initialize (int& argc, char**& argv, bool build_parm_parse,
else
{
// This counts command line arguments before a "--"
- // and only sends the preceeding arguments to ParmParse;
- // the rest get ingored.
+ // and only sends the preceding arguments to ParmParse;
+ // the rest get ignored.
int ppargc = 1;
for (; ppargc < argc; ++ppargc) {
if (std::strcmp(argv[ppargc], "--") == 0) { break; }
@@ -514,7 +514,7 @@ amrex::Initialize (int& argc, char**& argv, bool build_parm_parse,
pp.queryAdd("handle_sigfpe" , system::handle_sigfpe );
pp.queryAdd("handle_sigill" , system::handle_sigill );

- // We save the singal handlers and restore them in Finalize.
+ // We save the signal handlers and restore them in Finalize.
if (system::handle_sigsegv) {
prev_handler_sigsegv = std::signal(SIGSEGV, BLBackTrace::handler);
} else {
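The comment fixed above describes how amrex::Initialize splits the command line: arguments before a bare "--" are handed to ParmParse, and everything after it is ignored by the parameter parser. A minimal sketch of a driver relying on that behavior; the inputs file name and the amrex.v setting are illustrative, not from this commit.

#include <AMReX.H>
#include <AMReX_ParmParse.H>

// Hypothetical invocation:  ./main3d.ex inputs amrex.v=1 -- --app-specific-flag
int main (int argc, char* argv[])
{
    amrex::Initialize(argc, argv);   // only the arguments before "--" reach ParmParse
    {
        amrex::ParmParse pp("amrex");
        int v = 0;
        pp.query("v", v);            // picks up amrex.v if it was given before "--"
    }
    amrex::Finalize();
    return 0;
}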
2 changes: 1 addition & 1 deletion Src/Base/AMReX_ANSIEscCode.H
@@ -16,7 +16,7 @@ namespace Font {
constexpr char RapidBlink [] = "\033[6m";
}

- namespace FGColor { // Forground colors
+ namespace FGColor { // Foreground colors
constexpr char Black [] = "\033[30m";
constexpr char Red [] = "\033[31m";
constexpr char Green [] = "\033[32m";
2 changes: 1 addition & 1 deletion Src/Base/AMReX_Arena.H
@@ -146,7 +146,7 @@ public:
/**
* \brief Does the device have enough free memory for allocating this
* much memory? For CPU builds, this always return true. This is not a
- * const funciton because it may attempt to release memory back to the
+ * const function because it may attempt to release memory back to the
* system.
*/
[[nodiscard]] virtual bool hasFreeDeviceMemory (std::size_t sz);
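The corrected comment documents Arena::hasFreeDeviceMemory, which reports whether the device can satisfy an allocation of the given size (always true for CPU builds) and may release memory back to the system while checking. A hedged sketch of guarding a large device allocation with it; the 1 GiB request size is made up.

#include <cstddef>
#include <AMReX_Arena.H>

void allocate_if_possible ()
{
    std::size_t nbytes = std::size_t(1) << 30;      // hypothetical 1 GiB request
    amrex::Arena* arena = amrex::The_Arena();
    if (arena->hasFreeDeviceMemory(nbytes)) {        // may release memory back to the system
        void* p = arena->alloc(nbytes);
        // ... use the buffer ...
        arena->free(p);
    }
}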
2 changes: 1 addition & 1 deletion Src/Base/AMReX_BLProfiler.cpp
@@ -1281,7 +1281,7 @@ void BLProfiler::WriteCommStats(bool bFlushing, bool memCheck)
nfiHeader.Stream() << "CommProfProc " << myProc
<< " nCommStats " << vCommStats.size()
<< " datafile " << localDFileName
<< " seekpos " << nfiDatafile.SeekPos() // ---- data file seek posotion
<< " seekpos " << nfiDatafile.SeekPos() // ---- data file seek position
<< " " << procName << '\n';
for(int ib(0); ib < CommStats::barrierNames.size(); ++ib) {
int seekindex(CommStats::barrierNames[ib].second);
2 changes: 1 addition & 1 deletion Src/Base/AMReX_CArena.H
@@ -70,7 +70,7 @@ public:
/**
* \brief Does the device have enough free memory for allocating this
* much memory? For CPU builds, this always return true. This is not a
- * const funciton because it may attempt to release memory back to the
+ * const function because it may attempt to release memory back to the
* system.
*/
[[nodiscard]] bool hasFreeDeviceMemory (std::size_t sz) final;
2 changes: 1 addition & 1 deletion Src/Base/AMReX_CTOParallelForImpl.H
@@ -180,7 +180,7 @@ ParallelFor (TypeList<CTOs...> /*list_of_compile_time_options*/,
*
* \param ctos list of all possible values of the parameters.
* \param option the run time parameters.
- * \param N an interger specifying the 1D for loop's range.
+ * \param N an integer specifying the 1D for loop's range.
* \param f a callable object taking an integer and working on that iteration.
*/
template <typename T, class F, typename... CTOs>
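The parameter list fixed above (ctos, option, N, f) belongs to the ParallelFor overload that takes compile-time options. A rough sketch of how such a call is typically written, assuming the usual TypeList/CompileTimeOptions pattern; the header, the option values, and the lambda body are illustrative assumptions, not taken from this file.

#include <AMReX_GpuLaunch.H>   // assumed to provide the CTO ParallelFor overload

void apply (int N, int use_limiter, amrex::Real* AMREX_RESTRICT p)
{
    amrex::ParallelFor(
        amrex::TypeList<amrex::CompileTimeOptions<0,1>>{},  // all possible values of the option
        {use_limiter},                                       // the run time value
        N,                                                   // 1D loop range
        [=] AMREX_GPU_DEVICE (int i, auto limiter) noexcept
        {
            if constexpr (limiter.value == 1) {
                p[i] *= amrex::Real(0.5);   // limited branch (placeholder)
            } else {
                p[i] *= amrex::Real(2.0);   // unlimited branch (placeholder)
            }
        });
}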
2 changes: 1 addition & 1 deletion Src/Base/AMReX_DistributionMapping.H
@@ -405,7 +405,7 @@ DistributionMapping MakeSimilarDM (const BoxArray& ba, const MultiFab& mf, const
* of overlap with.
*
* @param[in] ba The BoxArray we want to generate a DistributionMapping for.
- * @param[in] src_ba The BoxArray associatied with the src DistributionMapping.
+ * @param[in] src_ba The BoxArray associated with the src DistributionMapping.
* @param[in] src_dm The input DistributionMapping we want the output to be similar to.
* @param[in] ng The number of grow cells to use when computing intersection / overlap
* @return The computed DistributionMapping.
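The typo fix above sits in the doc block for MakeSimilarDM, which builds a DistributionMapping for a new BoxArray by assigning each box to the rank that owns the most-overlapping box of a source layout. A small hedged sketch using the parameter names from that doc block; the IntVect type for ng is an assumption.

#include <AMReX_BoxArray.H>
#include <AMReX_DistributionMapping.H>
#include <AMReX_IntVect.H>

amrex::DistributionMapping
make_similar_dm (const amrex::BoxArray& ba,                 // layout we need a mapping for
                 const amrex::BoxArray& src_ba,             // layout of the source data
                 const amrex::DistributionMapping& src_dm,  // mapping to imitate
                 const amrex::IntVect& ng)                  // grow cells for the overlap test
{
    return amrex::MakeSimilarDM(ba, src_ba, src_dm, ng);
}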
6 changes: 3 additions & 3 deletions Src/Base/AMReX_FabArray.H
@@ -723,7 +723,7 @@ public:
*
* \param comp component
* \param nghost number of ghost cells
- * \param local If true, MPI communciation is skipped.
+ * \param local If true, MPI communication is skipped.
*/
template <typename F=FAB, std::enable_if_t<IsBaseFab<F>::value,int> = 0>
typename F::value_type
@@ -1205,7 +1205,7 @@ public:
* \param comp starting component
* \param ncomp number of components
* \param nghost number of ghost cells
- * \param local If true, MPI communciation is skipped.
+ * \param local If true, MPI communication is skipped.
* \param ignore_covered ignore covered cells. Only relevant for cell-centered EB data.
*/
template <typename F=FAB, std::enable_if_t<IsBaseFab<F>::value,int> = 0>
@@ -1220,7 +1220,7 @@ public:
* \param comp starting component
* \param ncomp number of components
* \param nghost number of ghost cells
- * \param local If true, MPI communciation is skipped.
+ * \param local If true, MPI communication is skipped.
*/
template <typename IFAB, typename F=FAB, std::enable_if_t<IsBaseFab<F>::value,int> = 0>
typename F::value_type
2 changes: 1 addition & 1 deletion Src/Base/AMReX_FabArrayUtility.H
@@ -1547,7 +1547,7 @@ indexFromValue (FabArray<FAB> const& mf, int comp, IntVect const& nghost,
* \param ycomp starting component of y
* \param ncomp number of components
* \param nghost number of ghost cells
- * \param local If true, MPI communciation is skipped.
+ * \param local If true, MPI communication is skipped.
*/
template <typename FAB, std::enable_if_t<IsBaseFab<FAB>::value,int> FOO = 0>
typename FAB::value_type
4 changes: 2 additions & 2 deletions Src/Base/AMReX_GpuControl.H
@@ -150,7 +150,7 @@ namespace Gpu {
}

/**
- * This struct provides a RAII-style mechansim for changing the number
+ * This struct provides a RAII-style mechanism for changing the number
* of streams returned by Gpu::numStreams() to a single stream.
*/
struct [[nodiscard]] SingleStreamRegion
@@ -166,7 +166,7 @@ namespace Gpu {
};

/**
- * This struct provides a RAII-style mechansim for disabling GPU
+ * This struct provides a RAII-style mechanism for disabling GPU
* synchronization in MFITer by default. Note that explicit calls to
* Gpu::steramSynchronize and Gpu::deviceSynchronize still work.
*/
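SingleStreamRegion, whose comment is corrected above, is an RAII guard: while an instance is alive, Gpu::numStreams() reports a single stream, and the previous setting comes back when it is destroyed. A minimal sketch of scoping it around a section of launches:

#include <AMReX_GpuControl.H>

void single_stream_section ()
{
    {
        amrex::Gpu::SingleStreamRegion ssr;   // one stream for everything in this scope
        // ... kernel launches issued here all see Gpu::numStreams() == 1 ...
    }
    // leaving the scope restores the previous number of streams
}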
2 changes: 1 addition & 1 deletion Src/Base/AMReX_IntVect.H
@@ -108,7 +108,7 @@ public:

/**
* \brief Construct an IntVect from an Vector<int>. It is an error if
- * the Vector<int> doesn' t have the same dimension as this
+ * the Vector<int> doesn't have the same dimension as this
* IntVect.
*/
explicit IntVect (const Vector<int>& a) noexcept : vect{AMREX_D_DECL(a[0],a[1],a[2])} {
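The fixed comment documents the explicit IntVect constructor that takes a Vector<int>, which must have one entry per spatial dimension. A tiny sketch; the 64-cell value is arbitrary.

#include <AMReX_IntVect.H>
#include <AMReX_Vector.H>

amrex::IntVect cells_from_vector ()
{
    amrex::Vector<int> n_cell(AMREX_SPACEDIM, 64);  // one entry per dimension
    return amrex::IntVect(n_cell);                  // explicit constructor shown above
}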
2 changes: 1 addition & 1 deletion Src/Base/AMReX_MultiFabUtilI.H
@@ -64,7 +64,7 @@ namespace amrex::MFUtil {


//! Regrid (by duplication) class T (which must be either of type
- //! MultiFab or iMultiFab) usign the new BoxArray and
+ //! MultiFab or iMultiFab) using the new BoxArray and
//! DistributionMapping. The boolean flag `regrid_ghost` switched
//! between the SymmetricGhost and AsymmetricGhost copy functions.
template<typename T>
2 changes: 1 addition & 1 deletion Src/Base/AMReX_PODVector.H
@@ -422,7 +422,7 @@ namespace amrex

iterator insert (const_iterator a_pos, T&& a_item)
{
- // This is *POD* vector afterall
+ // This is *POD* vector after all
return insert(a_pos, 1, a_item);
}

18 changes: 9 additions & 9 deletions Src/Base/AMReX_ParReduce.H
@@ -14,7 +14,7 @@ namespace amrex {
* the second MultiFab.
\verbatim
auto const& ma1 = mf1.const_arrays();
- auot const& ma2 = mf2.const_arrays();
+ auto const& ma2 = mf2.const_arrays();
GpuTuple<Real,Real> mm = ParReduce(TypeList<ReduceOpMin,ReduceOpMax>{},
TypeList<Real,Real>{},
mf1, mf1.nGrowVect(),
@@ -29,7 +29,7 @@ namespace amrex {
* ReduceOpLogicalAnd, and ReduceOpLogicalOr)
* \tparam Ts... data types (e.g., Real, int, etc.)
* \tparam FAB MultiFab/FabArray type
- * \tparam F callable type like a lambda funcion
+ * \tparam F callable type like a lambda function
*
* \param operation_list list of reduce operators
* \param type_list list of data types
@@ -79,7 +79,7 @@ ParReduce (TypeList<Ops...> operation_list, TypeList<Ts...> type_list,
* ReduceOpLogicalAnd, and ReduceOpLogicalOr)
* \tparam T data type (e.g., Real, int, etc.)
* \tparam FAB MultiFab/FabArray type
- * \tparam F callable type like a lambda funcion
+ * \tparam F callable type like a lambda function
*
* \param operation_list a reduce operator stored in TypeList
* \param type_list a data type stored in TypeList
@@ -113,7 +113,7 @@ ParReduce (TypeList<Op> operation_list, TypeList<T> type_list,
* and the maximum of the second MultiFab.
\verbatim
auto const& ma1 = mf1.const_arrays();
- auot const& ma2 = mf2.const_arrays();
+ auto const& ma2 = mf2.const_arrays();
GpuTuple<Real,Real> mm = ParReduce(TypeList<ReduceOpMin,ReduceOpMax>{},
TypeList<Real,Real>{},
mf1, mf1.nGrowVect(), mf1.nComp(),
@@ -128,7 +128,7 @@ ParReduce (TypeList<Op> operation_list, TypeList<T> type_list,
* ReduceOpLogicalAnd, and ReduceOpLogicalOr)
* \tparam Ts... data types (e.g., Real, int, etc.)
* \tparam FAB MultiFab/FabArray type
- * \tparam F callable type like a lambda funcion
+ * \tparam F callable type like a lambda function
*
* \param operation_list list of reduce operators
* \param type_list list of data types
@@ -175,7 +175,7 @@ ParReduce (TypeList<Ops...> operation_list, TypeList<Ts...> type_list,
* ReduceOpLogicalAnd, and ReduceOpLogicalOr)
* \tparam T data type (e.g., Real, int, etc.)
* \tparam FAB MultiFab/FabArray type
- * \tparam F callable type like a lambda funcion
+ * \tparam F callable type like a lambda function
*
* \param operation_list a reduce operator stored in TypeList
* \param type_list a data type stored in TypeList
@@ -210,7 +210,7 @@ ParReduce (TypeList<Op> operation_list, TypeList<T> type_list,
* the second MultiFab.
\verbatim
auto const& ma1 = mf1.const_arrays();
- auot const& ma2 = mf2.const_arrays();
+ auto const& ma2 = mf2.const_arrays();
GpuTuple<Real,Real> mm = ParReduce(TypeList<ReduceOpMin,ReduceOpMax>{},
TypeList<Real,Real>{}, mf1,
[=] AMREX_GPU_DEVICE (int box_no, int i, int j, int k) noexcept
@@ -224,7 +224,7 @@ ParReduce (TypeList<Op> operation_list, TypeList<T> type_list,
* ReduceOpLogicalAnd, and ReduceOpLogicalOr)
* \tparam Ts... data types (e.g., Real, int, etc.)
* \tparam FAB MultiFab/FabArray type
- * \tparam F callable type like a lambda funcion
+ * \tparam F callable type like a lambda function
*
* \param operation_list list of reduce operators
* \param type_list list of data types
@@ -268,7 +268,7 @@ ParReduce (TypeList<Ops...> operation_list, TypeList<Ts...> type_list,
* ReduceOpLogicalAnd, and ReduceOpLogicalOr)
* \tparam T data type (e.g., Real, int, etc.)
* \tparam FAB MultiFab/FabArray type
- * \tparam F callable type like a lambda funcion
+ * \tparam F callable type like a lambda function
*
* \param operation_list a reduce operator stored in TypeList
* \param type_list a data type stored in TypeList
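The verbatim snippets corrected above sketch a ParReduce call that takes the minimum over one MultiFab and the maximum over another. For reference, a hedged assembly of that same example into a complete function; the lambda body is a plausible completion, not text from this file.

#include <AMReX_MultiFab.H>
#include <AMReX_ParReduce.H>

amrex::GpuTuple<amrex::Real,amrex::Real>
min_of_mf1_max_of_mf2 (amrex::MultiFab const& mf1, amrex::MultiFab const& mf2)
{
    using namespace amrex;
    auto const& ma1 = mf1.const_arrays();
    auto const& ma2 = mf2.const_arrays();
    return ParReduce(TypeList<ReduceOpMin,ReduceOpMax>{},
                     TypeList<Real,Real>{},
                     mf1, mf1.nGrowVect(),
                     [=] AMREX_GPU_DEVICE (int box_no, int i, int j, int k) noexcept
                         -> GpuTuple<Real,Real>
                     {
                         return { ma1[box_no](i,j,k),     // contributes to the min
                                  ma2[box_no](i,j,k) };   // contributes to the max
                     });
}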