
WIP: Add OpenPMD support #1050

Open: wants to merge 94 commits into base: develop. Changes shown from 39 of 94 commits.
49027c7
Add OpenPMD as external lib
pgrete Jan 8, 2024
4525e86
Add OpenPMD skeleton
pgrete Jan 11, 2024
4de250c
WIP more Open PMD
pgrete Jan 11, 2024
b906af7
WIP OpenPMD use file id
pgrete Jan 12, 2024
108fc7a
Merge branch 'develop' into pgrete/pmd-output
pgrete Feb 29, 2024
79660a2
Write blocks
pgrete Feb 29, 2024
372b585
Merge remote-tracking branch 'origin/develop' into pgrete/pmd-output
pgrete Mar 8, 2024
8d40c91
Centralize getting var info for output
pgrete Mar 8, 2024
4f20c26
WIP openpmd, chunks don't work yet plus check dimensionality
pgrete Mar 8, 2024
4dde705
Merge remote-tracking branch 'origin/develop' into pgrete/pmd-output
pgrete Mar 14, 2024
f29e8d1
Fix chunk extents
pgrete Mar 14, 2024
a501a5d
Write Attributes
pgrete Mar 15, 2024
c0be75c
Rename restart to restart_hdf5
pgrete Mar 15, 2024
7f03528
WIP abstract RestartReader
pgrete Mar 18, 2024
56795f2
WIP separating RestartReader
pgrete Mar 18, 2024
8587303
Make RestartReader abstract
pgrete Mar 19, 2024
8bf955b
Merge branch 'pgrete/refactor-restart' into pgrete/pmd-output
pgrete Mar 20, 2024
e3ea8d7
Add OpenPMD restart skeleton
pgrete Mar 20, 2024
33b6261
WIP updating loc logic
pgrete Mar 20, 2024
788118c
Merge branch 'develop' into pgrete/pmd-output
pgrete Apr 11, 2024
62e54da
Fix interface from recent changes in develop
pgrete Apr 11, 2024
6309663
Read and Write loc
pgrete Apr 11, 2024
7224f42
Houston, we have a build
pgrete Apr 11, 2024
ae1f241
Added OpenPMD restart ReadBlocks
pgrete Apr 12, 2024
19863d7
Fix loc level
pgrete Apr 12, 2024
e2b2bd1
Make Series persistent and fix rootlevel typo
pgrete Apr 15, 2024
f9373e8
WIP Read/Write Params
pgrete Apr 15, 2024
96a3f4c
Make ReadParams private member
pgrete Apr 15, 2024
61306a0
Merge remote-tracking branch 'origin/develop' into pgrete/pmd-output
pgrete Apr 18, 2024
56976a8
Fix root level in output
pgrete Apr 18, 2024
6199843
Move to mesh per record standard for writing
pgrete Apr 22, 2024
9fb9f68
Merge remote-tracking branch 'origin/develop' into pgrete/pmd-output
pgrete Apr 23, 2024
684b7ab
Allow for 2D and 3D output. Fix single dataset reset.
pgrete Apr 23, 2024
251c6ea
Fix logical loc
pgrete Apr 23, 2024
03f80c7
Rename opmd files
pgrete Apr 24, 2024
890fffe
Separate common calls to chunks and names
pgrete Apr 24, 2024
04359d3
Reuse shared chunk and name for restarts
pgrete Apr 24, 2024
2b89659
Add regression test
pgrete Apr 24, 2024
804e60d
Somewhat make restarts working
pgrete Apr 24, 2024
a436f55
Fix order of arguments for correct flush
pgrete Apr 26, 2024
6a3a80d
Merge remote-tracking branch 'origin/develop' into pgrete/pmd-output
pgrete Apr 26, 2024
28d725d
Fix handling of output variable names
pgrete Apr 26, 2024
a039ea1
Fix reading chunks for sparsely populated output files
pgrete Apr 26, 2024
ae8519f
Merge remote-tracking branch 'origin/develop' into pgrete/pmd-output
pgrete Apr 29, 2024
7476641
Dont tell anyone I spent days on this...
pgrete May 3, 2024
39d4b99
Temp disable dumping Views from device
pgrete May 3, 2024
d2ba882
Merge branch 'develop' into pgrete/pmd-output
pgrete May 22, 2024
626303d
Merge branch 'pgrete/fix-hasghost-restart' into pgrete/pmd-output
pgrete Jun 12, 2024
4241198
Dump deref cnt in opmd restart
pgrete Jun 12, 2024
101ebf2
Merge branch 'develop' into pgrete/pmd-output
BenWibking Jun 18, 2024
60a38b2
install openpmd in macOS CI
BenWibking Jun 18, 2024
461eeaa
Merge branch 'develop' into pgrete/pmd-output
BenWibking Jun 21, 2024
cd007d3
Merge remote-tracking branch 'origin/develop' into pgrete/pmd-output
pgrete Jun 25, 2024
28020db
Remove extraneous popRegion
pgrete Jun 25, 2024
5324e00
Merge remote-tracking branch 'origin/develop' into pgrete/pmd-output
pgrete Jul 11, 2024
af4b966
Fix formatting
pgrete Jul 11, 2024
0e015d1
Make format clang16 compatible
pgrete Jul 11, 2024
5135aea
Fix default backend_config parsing
pgrete Jul 12, 2024
b344291
another attempt
pgrete Jul 12, 2024
7486915
Bump OpenPMD version
pgrete Jul 12, 2024
0e3f758
pmd: Write scalar particle data
pgrete Jul 25, 2024
e39ef61
Code dedup
pgrete Jul 25, 2024
b938511
Allow writing non-scalar particles
pgrete Jul 25, 2024
08d6b41
Make positions standard compliant
pgrete Jul 25, 2024
bf74c7e
Allow for particles restarts (serial works)
pgrete Jul 25, 2024
3b37c26
Support particle restarts in parallel
pgrete Jul 26, 2024
0ca5b6f
Merge remote-tracking branch 'origin/develop' into pgrete/pmd-output
pgrete Jul 26, 2024
b30788a
Merge remote-tracking branch 'origin/develop' into pgrete/pmd-output
pgrete Aug 6, 2024
5f95466
Add now prefix to pmd outputs
pgrete Aug 6, 2024
6811594
Make linter happy
pgrete Aug 7, 2024
c0d7f11
Merge remote-tracking branch 'origin/develop' into pgrete/pmd-output
pgrete Sep 2, 2024
b2d7525
Bump OPMD version and add delim
pgrete Sep 4, 2024
f229274
Merge branch 'develop' into pgrete/pmd-output
pgrete Sep 26, 2024
3283635
Change delim. Is this stupid?
pgrete Sep 26, 2024
d59d573
Merge branch 'develop' into pgrete/pmd-output
pgrete Oct 7, 2024
3f95fc4
Make params IO test more flexible
pgrete Oct 7, 2024
1141ff3
Add test case for opmd params IO
pgrete Oct 7, 2024
e6e4d0b
Reading/writing non-ParArray Params works
pgrete Oct 7, 2024
8230f94
Allow writing ParArray and Views to Params
pgrete Oct 8, 2024
e64ae7e
Read ParArray/View from opmd raw
pgrete Oct 8, 2024
b789711
Make basic parsing of Params work
pgrete Oct 8, 2024
466ecd2
Restore view from opmd params
pgrete Oct 8, 2024
3aa4c78
Fix manual pararray reading
pgrete Oct 8, 2024
4c1c70f
Make linter happy?
pgrete Oct 8, 2024
8a1850b
Make restoring HostViews possible
pgrete Oct 9, 2024
1f02ffa
Make reading host arrays work using the direct interface in the unit …
pgrete Oct 9, 2024
d6565c5
Expands types for read/write to all vec
pgrete Oct 9, 2024
6065213
Remove debug info
pgrete Oct 9, 2024
4bf6a92
Merge branch 'develop' into pgrete/pmd-output
BenWibking Oct 16, 2024
e67f589
Merge branch 'develop' into pgrete/pmd-output
pgrete Nov 9, 2024
503a5b6
Fix bc interface from PR 1177
pgrete Nov 9, 2024
d064f03
Merge branch 'develop' into pgrete/pmd-output
BenWibking Nov 22, 2024
dfbcb27
Merge branch 'develop' into pgrete/pmd-output
pgrete Nov 29, 2024
4769a63
Fix output numbering for triggered opmd outputs
pgrete Nov 29, 2024
22 changes: 21 additions & 1 deletion CMakeLists.txt
@@ -31,11 +31,12 @@ include(CTest)
# Compile time constants

# Compile Options
option(PARTHENON_SINGLE_PRECISION "Run in single precision" OFF)
option(PARTHENON_DISABLE_MPI "MPI is enabled by default if found, set this to True to disable MPI" OFF)
option(PARTHENON_ENABLE_HOST_COMM_BUFFERS "CUDA/HIP Only: Allocate communication buffers on host (may be slower)" OFF)
option(PARTHENON_DISABLE_HDF5 "HDF5 is enabled by default if found, set this to True to disable HDF5" OFF)
option(PARTHENON_DISABLE_HDF5_COMPRESSION "HDF5 compression is enabled by default, set this to True to disable compression in HDF5 output/restart files" OFF)
option(PARTHENON_ENABLE_OPENPMD "OpenPMD output support is enabled by default, set this to False to disable OpenPMD" ON)
option(PARTHENON_DISABLE_SPARSE "Sparse capability is enabled by default, set this to True to compile-time disable all sparse capability" OFF)
option(PARTHENON_ENABLE_ASCENT "Enable Ascent for in situ visualization and analysis" OFF)
option(PARTHENON_LINT_DEFAULT "Linting is turned off by default, use the \"lint\" target or set \
@@ -187,6 +188,25 @@ if (NOT PARTHENON_DISABLE_HDF5)
install(TARGETS HDF5_C EXPORT parthenonTargets)
endif()

if (PARTHENON_ENABLE_OPENPMD)
#TODO(pgrete) add logic for serial/parallel
#TODO(pgrete) add logic for internal/external build
include(FetchContent)
set(CMAKE_POLICY_DEFAULT_CMP0077 NEW)
set(openPMD_BUILD_CLI_TOOLS OFF)
set(openPMD_BUILD_EXAMPLES OFF)
set(openPMD_BUILD_TESTING OFF)
set(openPMD_BUILD_SHARED_LIBS OFF) # precedence over BUILD_SHARED_LIBS if needed
set(openPMD_INSTALL OFF) # or instead use:
# set(openPMD_INSTALL ${BUILD_SHARED_LIBS}) # only install if used as a shared library
set(openPMD_USE_PYTHON OFF)
FetchContent_Declare(openPMD
GIT_REPOSITORY "https://github.com/openPMD/openPMD-api.git"
GIT_TAG "0.15.2")
FetchContent_MakeAvailable(openPMD)
install(TARGETS openPMD EXPORT parthenonTargets)
endif()

# Kokkos recommendation resulting in not using default GNU extensions
set(CMAKE_CXX_EXTENSIONS OFF)

8 changes: 8 additions & 0 deletions src/CMakeLists.txt
@@ -193,10 +193,14 @@ add_library(parthenon
outputs/parthenon_hdf5_types.hpp
outputs/parthenon_xdmf.cpp
outputs/parthenon_hdf5.hpp
outputs/parthenon_opmd.cpp
outputs/parthenon_opmd.hpp
outputs/parthenon_xdmf.hpp
outputs/restart.hpp
outputs/restart_hdf5.cpp
outputs/restart_hdf5.hpp
outputs/restart_opmd.cpp
outputs/restart_opmd.hpp
outputs/vtk.cpp

parthenon/driver.hpp
@@ -312,6 +316,10 @@ if (ENABLE_HDF5)
target_link_libraries(parthenon PUBLIC HDF5_C)
endif()

if (PARTHENON_ENABLE_OPENPMD)
target_link_libraries(parthenon PUBLIC openPMD::openPMD)
endif()

# For Cuda with NVCC (<11.2) and C++17 Kokkos currently does not work/compile with
# relaxed-constexpr, see https://github.com/kokkos/kokkos/issues/3496
# However, Parthenon heavily relies on it and there is no harm in compiling Kokkos
2 changes: 2 additions & 0 deletions src/config.hpp.in
@@ -45,6 +45,8 @@
// define ENABLE_HDF5 or not at all
#cmakedefine ENABLE_HDF5

#cmakedefine PARTHENON_ENABLE_OPENPMD

// define PARTHENON_DISABLE_HDF5_COMPRESSION or not at all
#cmakedefine PARTHENON_DISABLE_HDF5_COMPRESSION

6 changes: 6 additions & 0 deletions src/interface/params.hpp
@@ -118,6 +118,12 @@ class Params {
return it->second;
}

const Mutability &GetMutability(const std::string &key) const {
Collaborator: 👍
auto const it = myMutable_.find(key);
PARTHENON_REQUIRE_THROWS(it != myMutable_.end(), "Key " + key + " doesn't exist");
return it->second;
}

std::vector<std::string> GetKeys() const {
std::vector<std::string> keys;
for (auto &x : myParams_) {
70 changes: 70 additions & 0 deletions src/outputs/output_utils.cpp
@@ -15,6 +15,7 @@
// the public, perform publicly and display publicly, and to permit others to do so.
//========================================================================================

#include <cstdint>
#include <map>
#include <set>
#include <string>
@@ -29,6 +30,7 @@
#include "mesh/mesh.hpp"
#include "mesh/meshblock.hpp"
#include "outputs/output_utils.hpp"
#include "utils/mpi_types.hpp"

namespace parthenon {
namespace OutputUtils {
@@ -241,6 +243,45 @@ std::vector<int> ComputeIDsAndFlags(Mesh *pm) {
});
}

template <typename T>
std::vector<T> FlattendedLocalToGlobal(Mesh *pm, const std::vector<T> &data_local) {
Collaborator: I don't understand what this function does. Is it actually doing an MPI all-to-all to build up a global data vector? Is this something we ever want to do?
const int n_blocks_global = pm->nbtotal;
const int n_blocks_local = static_cast<int>(pm->block_list.size());

const int n_elem = data_local.size() / n_blocks_local;
PARTHENON_REQUIRE_THROWS(data_local.size() % n_blocks_local == 0,
"Flattened input vector does not evenly divide "
"into the number of local blocks.");
std::vector<T> data_global(n_elem * n_blocks_global);

std::vector<int> counts(Globals::nranks);
std::vector<int> offsets(Globals::nranks);

const auto &nblist = pm->GetNbList();
counts[0] = n_elem * nblist[0];
offsets[0] = 0;
for (int r = 1; r < Globals::nranks; r++) {
counts[r] = n_elem * nblist[r];
offsets[r] = offsets[r - 1] + counts[r - 1];
}

#ifdef MPI_PARALLEL
PARTHENON_MPI_CHECK(MPI_Allgatherv(data_local.data(), counts[Globals::my_rank],
MPITypeMap<T>::type(), data_global.data(),
counts.data(), offsets.data(), MPITypeMap<T>::type(),
MPI_COMM_WORLD));
#else
return data_local;
#endif
return data_global;
}
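
The receive-count/displacement bookkeeping that `FlattendedLocalToGlobal` hands to `MPI_Allgatherv` can be sketched serially. This is only an illustration: `GathervLayout` is a hypothetical helper, and `nblist`/`n_elem` stand in for the values taken from the `Mesh`.

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Build the per-rank receive counts and displacements that MPI_Allgatherv
// expects, given the number of blocks on each rank (nblist) and the number
// of elements per block (n_elem). Mirrors the loop above.
std::pair<std::vector<int>, std::vector<int>>
GathervLayout(const std::vector<int> &nblist, int n_elem) {
  std::vector<int> counts(nblist.size());
  std::vector<int> offsets(nblist.size());
  counts[0] = n_elem * nblist[0];
  offsets[0] = 0;
  for (std::size_t r = 1; r < nblist.size(); r++) {
    counts[r] = n_elem * nblist[r];
    offsets[r] = offsets[r - 1] + counts[r - 1];
  }
  return {counts, offsets};
}
```

Each rank contributes `counts[my_rank]` elements, and rank `r`'s data starts at `offsets[r]` in the gathered vector, so `offsets` is the exclusive prefix sum of `counts`.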

// explicit template instantiation
template std::vector<int64_t>
FlattendedLocalToGlobal(Mesh *pm, const std::vector<int64_t> &data_local);
template std::vector<int> FlattendedLocalToGlobal(Mesh *pm,
const std::vector<int> &data_local);
Collaborator: Should this be templated if we only instantiated it for int64?

// TODO(JMM): I could make this use the other loop
// functionality/high-order functions, but it was more code than this
// for, I think, little benefit.
@@ -313,5 +354,34 @@ std::size_t MPISum(std::size_t val) {
return val;
}

VariableVector<Real> GetVarsToWrite(const std::shared_ptr<MeshBlock> pmb,
const bool restart,
const std::vector<std::string> &variables) {
const auto &var_vec = pmb->meshblock_data.Get()->GetVariableVector();
auto vars_to_write = GetAnyVariables(var_vec, variables);
if (restart) {
// get all vars with flag Independent OR restart
auto restart_vars = GetAnyVariables(
var_vec, {parthenon::Metadata::Independent, parthenon::Metadata::Restart});
for (auto restart_var : restart_vars) {
vars_to_write.emplace_back(restart_var);
}
}
return vars_to_write;
}

std::vector<VarInfo> GetAllVarsInfo(const VariableVector<Real> &vars,
const IndexShape &cellbounds) {
std::vector<VarInfo> all_vars_info;
for (auto &v : vars) {
all_vars_info.emplace_back(v, cellbounds);
}

// sort alphabetically
std::sort(all_vars_info.begin(), all_vars_info.end(),
[](const VarInfo &a, const VarInfo &b) { return a.label < b.label; });
return all_vars_info;
}
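
The alphabetical ordering imposed above can be shown on a stand-in type (`VarInfoStub` is hypothetical; only the label comparator mirrors `GetAllVarsInfo`):

```cpp
#include <algorithm>
#include <string>
#include <vector>

struct VarInfoStub {
  std::string label;
};

// Sort stand-in variable records alphabetically by label, matching the
// comparator used in GetAllVarsInfo.
std::vector<VarInfoStub> SortByLabel(std::vector<VarInfoStub> vars) {
  std::sort(vars.begin(), vars.end(),
            [](const VarInfoStub &a, const VarInfoStub &b) { return a.label < b.label; });
  return vars;
}
```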

} // namespace OutputUtils
} // namespace parthenon
16 changes: 16 additions & 0 deletions src/outputs/output_utils.hpp
@@ -339,13 +339,29 @@ std::vector<Real> ComputeXminBlocks(Mesh *pm);
std::vector<int64_t> ComputeLocs(Mesh *pm);
std::vector<int> ComputeIDsAndFlags(Mesh *pm);

// Takes a vector containing the flattened data of all rank-local blocks and returns
// the flattened data over all blocks, gathered across ranks.
template <typename T>
std::vector<T> FlattendedLocalToGlobal(Mesh *pm, const std::vector<T> &data_local);

// TODO(JMM): Potentially unsafe if MPI_UNSIGNED_LONG_LONG isn't a size_t
// however I think it's probably safe to assume we'll be on systems
// where this is the case?
// TODO(JMM): If we ever need non-int need to generalize
std::size_t MPIPrefixSum(std::size_t local, std::size_t &tot_count);
std::size_t MPISum(std::size_t local);

// Return all variables to write, i.e., for restarts all independent variables and ones
// with the explicit Restart flag, but also variables explicitly requested for output in
// the input file.
VariableVector<Real> GetVarsToWrite(const std::shared_ptr<MeshBlock> pmb,
const bool restart,
const std::vector<std::string> &variables);

// Returns a sorted vector of VarInfo associated with vars
std::vector<VarInfo> GetAllVarsInfo(const VariableVector<Real> &vars,
const IndexShape &cellbounds);
Comment on lines +360 to +369:
Collaborator: Can these two functions be unified with the HDF5 machinery? I actually thought I already wrote GetAllVarsInfo...
Collaborator (Author): Yes, I think we wrote them in parallel.

} // namespace OutputUtils
} // namespace parthenon

11 changes: 11 additions & 0 deletions src/outputs/outputs.cpp
@@ -259,6 +259,17 @@ Outputs::Outputs(Mesh *pm, ParameterInput *pin, SimTime *tm) {
pnew_type = new VTKOutput(op);
} else if (op.file_type == "ascent") {
pnew_type = new AscentOutput(op);
} else if (op.file_type == "openpmd") {
#ifdef PARTHENON_ENABLE_OPENPMD
pnew_type = new OpenPMDOutput(op);
#else
msg << "### FATAL ERROR in Outputs constructor" << std::endl
<< "Executable not configured for OpenPMD outputs, but OpenPMD file format "
<< "is requested in output/restart block '" << op.block_name << "'. "
<< "You can disable this block without deleting it by setting a dt < 0."
<< std::endl;
PARTHENON_FAIL(msg);
#endif // ifdef PARTHENON_ENABLE_OPENPMD
} else if (op.file_type == "histogram") {
#ifdef ENABLE_HDF5
pnew_type = new HistogramOutput(op, pin);
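
As a usage sketch, an input block requesting this output type might look like the following (the `<parthenon/output0>` block syntax follows Parthenon's existing output conventions; the parameter values are illustrative, not taken from this PR):

```
<parthenon/output0>
file_type = openpmd
dt = 0.05
```

As the error message above notes, such a block can be disabled without deleting it by setting dt < 0.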
13 changes: 13 additions & 0 deletions src/outputs/outputs.hpp
@@ -32,6 +32,7 @@
#include "interface/mesh_data.hpp"
#include "io_wrapper.hpp"
#include "kokkos_abstraction.hpp"
#include "openPMD/Iteration.hpp"
#include "parthenon_arrays.hpp"
#include "utils/error_checking.hpp"

@@ -204,6 +205,18 @@ class AscentOutput : public OutputType {
ParArray1D<Real> ghost_mask_;
};

//----------------------------------------------------------------------------------------
//! \class OpenPMDOutput
// \brief derived OutputType class for OpenPMD based output

class OpenPMDOutput : public OutputType {
public:
explicit OpenPMDOutput(const OutputParameters &oparams) : OutputType(oparams) {}
void WriteOutputFile(Mesh *pm, ParameterInput *pin, SimTime *tm,
const SignalHandler::OutputSignal signal) override;
};

#ifdef ENABLE_HDF5
//----------------------------------------------------------------------------------------
//! \class PHDF5Output
30 changes: 14 additions & 16 deletions src/outputs/parthenon_hdf5.cpp
Original file line number Diff line number Diff line change
Expand Up @@ -357,22 +357,20 @@ void PHDF5Output::WriteOutputFileImpl(Mesh *pm, ParameterInput *pin, SimTime *tm
const auto &pmb = pm->block_list[b_idx];
bool is_allocated = false;

// for each variable that this local meshblock actually has
const auto vars = get_vars(pmb);
for (auto &v : vars) {
// For reference, if we update the logic here, there's also
// a similar block in parthenon_manager.cpp
if (v->IsAllocated() && (var_name == v->label())) {
auto v_h = v->data.GetHostMirrorAndCopy();
OutputUtils::PackOrUnpackVar(
vinfo, output_params.include_ghost_zones, index,
[&](auto index, int topo, int t, int u, int v, int k, int j, int i) {
tmpData[index] = static_cast<OutT>(v_h(topo, t, u, v, k, j, i));
});

is_allocated = true;
break;
}
// TODO(reviewers) Why was the loop originally there? Does the direct Get cause
// issues?
Comment on lines +363 to +364:
Collaborator: I'm not sure; it may have just been crazy code that no one tried to change.
auto v = pmb->meshblock_data.Get()->GetVarPtr(var_name);
// For reference, if we update the logic here, there's also
// a similar block in parthenon_manager.cpp
if (v->IsAllocated() && (var_name == v->label())) {
auto v_h = v->data.GetHostMirrorAndCopy();
OutputUtils::PackOrUnpackVar(
vinfo, output_params.include_ghost_zones, index,
[&](auto index, int topo, int t, int u, int v, int k, int j, int i) {
tmpData[index] = static_cast<OutT>(v_h(topo, t, u, v, k, j, i));
});

is_allocated = true;
}

if (vinfo.is_sparse) {