Writing and Reading Non-openPMD Infos #15
The simple case is already possible, maybe just not made explicit enough yet: every "position" in an openPMD hierarchy (which in this API's terms is an `Attributable`) can carry user-defined attributes. Here's an example demonstrating said feature by annotating the root group with a `std::string`.
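A minimal sketch of that idea with the C++ API (the file name and attribute key below are placeholders, not part of the openPMD standard):

```cpp
#include <openPMD/openPMD.hpp>
#include <string>

int main()
{
    // "data_%T.h5" and "simulation_note" are placeholder names.
    openPMD::Series series("data_%T.h5", openPMD::Access::CREATE);

    // The Series (root group) is an Attributable, so arbitrary
    // user-defined key/value pairs can be attached to it.
    series.setAttribute("simulation_note",
                        std::string("custom, non-openPMD annotation"));

    series.flush();
    return 0;
}
```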
That's already a great start, thx! :) The latter case is quite relevant for PIConGPU checkpoints.
cc @ejcjason, you also need this, right?
I cannot access the link "here", but I guess the suggested solution is to change the attribute of "position" under one particle group into something we want. That's good. But what if we want to add one group, say "observable", to the iteration group (i.e., at the same level as particles)?
It seems that the above solution doesn't work in this case, because there is no command for creating such a group.
Links updated with permanently accessible examples. Yes, what you want is a side channel to the low-level write API in order to write groups and datasets outside of openPMD. That's also what I meant.
Or are you already good if you can change the attributes on existing openPMD objects, as in the example above?
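If changing attributes on already-written data is what's needed, a rough sketch could look like this (READ_WRITE access; the file, iteration, and species names are hypothetical):

```cpp
#include <openPMD/openPMD.hpp>
#include <string>

int main()
{
    // Open an existing series for modification; names below are placeholders.
    openPMD::Series series("simData_%T.h5", openPMD::Access::READ_WRITE);

    // Pick an iteration that already exists in the file and grab a record.
    auto position = series.iterations[100].particles["electrons"]["position"];

    // Overwrite or add a user-defined attribute on the existing record.
    position.setAttribute("comment",
                          std::string("re-centered after load balancing"));

    series.flush();
    return 0;
}
```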
* HDF5: Empiric for Optimal Chunk Size

  This ports a prior empirical algorithm from libSplash to determine an optimal (large) chunk size for an HDF5 dataset based on its datatype and global extent. Original implementation by Felix Schmitt @f-schmitt (ZIH, TU Dresden) in [libSplash](https://github.com/ComputationalRadiationPhysics/libSplash). Original sources:
  - https://github.com/ComputationalRadiationPhysics/libSplash/blob/v1.7.0/src/DCDataSet.cpp
  - https://github.com/ComputationalRadiationPhysics/libSplash/blob/v1.7.0/src/include/splash/core/DCHelper.hpp

  Co-authored-by: Felix Schmitt <[email protected]>

* Add scaffolding for JSON options in HDF5

* HDF5: Finish Chunking JSON/Env control

* HiPACE (legacy) pipeline: no chunking

  The parallel, independent I/O pattern here is a corner case for what HDF5 can support, due to non-collective declarations of data sets. Testing shows that it does not work with chunking.

* CI: no HDF5 Chunking with Sanitizer

  Runs into timeout for unclear reasons with this patch:
  ```
  15/32 Test #15: MPI.8_benchmark_parallel ...............***Timeout 1500.17 sec
  ```

* Apply suggestions from code review

Co-authored-by: Franz Pöschel <[email protected]>
Co-authored-by: Felix Schmitt <[email protected]>
What is the current status of supporting reading/writing of other group data, @ax3l?
There is an openPMD-standard PR here: openPMD/openPMD-standard#282, and an openPMD-api PR here: #1432. The openPMD-api PR can already be used, but there are a number of things that are still up for discussion. If you want to try it out, you can have a look at the Python example or the C++ tests in the diff of that PR.
Thanks for the pointer.
This is part of a delivery scheduled for this autumn, but I hope to merge the openPMD-api pull request sooner than that. It is a relatively big change and the API might still change / is subject to discussion. So, long story short: there is no stable support for this yet, but there is a timeline. There are logically two parts to this project: the extension of the openPMD standard itself (openPMD/openPMD-standard#282) and its implementation in openPMD-api (#1432).
The workaround for now would probably be to see whether you can "pretend" that your custom data is a mesh (we do something similar for checkpointing data in PIConGPU at the moment).
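A rough sketch of that "pretend it is a mesh" workaround, e.g. for an RNG state (the record name, file name, and payload are made up for illustration):

```cpp
#include <openPMD/openPMD.hpp>
#include <cstdint>
#include <vector>

int main()
{
    // Placeholder file name; any supported backend works the same way.
    openPMD::Series series("checkpoint_%T.h5", openPMD::Access::CREATE);
    auto it = series.iterations[100];

    // Application-specific payload, here a fake RNG state.
    std::vector<std::uint64_t> rngState = {1u, 2u, 3u, 4u};

    // Store the custom data as if it were a regular (scalar) mesh record.
    auto rec = it.meshes["rng_state"][openPMD::MeshRecordComponent::SCALAR];
    rec.resetDataset({openPMD::determineDatatype<std::uint64_t>(),
                      {rngState.size()}});
    rec.storeChunk(rngState, {0}, {rngState.size()});

    series.flush();
    return 0;
}
```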
It would be necessary to allow storing non-openPMD information as well.
In the simple case, this means adding additional, user-defined attributes to records, record-components and paths.
In the more complex case, application-specific states such as RNG states, unique-number-generator states, etc. need to be stored as records outside of the openPMD-interpreted paths.