Currently, we are outputting the results of stac-mjx to Pickle files by default:

stac-mjx/stac_mjx/io.py
Lines 192 to 211 in f3980e4

It supports saving out to HDF5, but via a very general-purpose recursive method that sacrifices documentation of the file format for ease of implementation.

What we would like is to explicitly list the main fields and associated metadata that we need to serialize. This would also help document the specifics of the file format (shapes, dtypes, names), making it more straightforward to establish a contract with the downstream applications that consume the data this tool produces.

For example, an organization of the HDF5 file could look like:

It would probably be more portable and self-describing to break qpos up into its constituent elements, e.g.:

But that format trades off generalizability for being more self-descriptive.

Whether we keep qpos in its flattened representation (useful for pipelining) or break it up into better-described sub-elements (useful for portability and use outside of our pipelines) is a key decision point, though the two options are not mutually exclusive.

Either way, we should also have a version key in the HDF5 file that can be used to route loading logic if this format evolves.

As a separate concern, we should consider embedding the more useful derived values captured in this data structure, which right now we compute on the fly after loading downstream via forward kinematics (see this module):
from flax import struct
import jax.numpy as jp

@struct.dataclass
class ReferenceClip:
    """This dataclass is used to store the trajectory in the env."""
    # qpos
    position: jp.ndarray = None
    quaternion: jp.ndarray = None
    joints: jp.ndarray = None
    # xpos
    body_positions: jp.ndarray = None
    # velocity (inferred)
    velocity: jp.ndarray = None
    joints_velocity: jp.ndarray = None
    angular_velocity: jp.ndarray = None
    # xquat
    body_quaternions: jp.ndarray = None
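To make the proposed contract concrete, the field inventory could be captured as an explicit schema that is checked before writing. The names below mirror ReferenceClip, but the dimension sizes, group layout, and the validate helper are illustrative assumptions, not the actual stac-mjx format:

```python
import numpy as np

# Illustrative sizes -- the real values depend on the model being fit.
N_FRAMES, N_JOINTS, N_BODIES = 100, 30, 20

# Explicit inventory of every serialized field: name -> (shape, dtype).
# Writing this down is what documents the file-format contract.
SCHEMA = {
    "qpos/position": ((N_FRAMES, 3), np.float32),          # root position
    "qpos/quaternion": ((N_FRAMES, 4), np.float32),        # root orientation (wxyz)
    "qpos/joints": ((N_FRAMES, N_JOINTS), np.float32),     # joint angles
    "xpos/body_positions": ((N_FRAMES, N_BODIES, 3), np.float32),
    "xquat/body_quaternions": ((N_FRAMES, N_BODIES, 4), np.float32),
}

def validate_clip(clip):
    """Raise if any field is missing or has the wrong shape/dtype."""
    for name, (shape, dtype) in SCHEMA.items():
        arr = clip[name]
        if arr.shape != shape:
            raise ValueError(f"{name}: expected shape {shape}, got {arr.shape}")
        if arr.dtype != dtype:
            raise ValueError(f"{name}: expected dtype {np.dtype(dtype)}, got {arr.dtype}")
```

A schema like this can back both the writer (each entry becomes a named, typed HDF5 dataset) and downstream readers, which can validate a file before trusting its contents.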
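The version key is what makes format evolution manageable: the loader can dispatch on it. A minimal sketch, where the version strings, field names, and the 3+4 qpos split (free-joint translation plus quaternion, then joint angles) are assumptions for illustration:

```python
def load_clip(data):
    """Route decoding logic on the file's format version.

    data: a dict standing in for the deserialized HDF5 contents.
    """
    version = data.get("version", "0")  # files written before versioning -> "0"
    if version == "0":
        # Legacy layout: flattened qpos rows -> split into named sub-elements.
        qpos = data["qpos"]
        return {
            "position": [row[:3] for row in qpos],     # free-joint translation
            "quaternion": [row[3:7] for row in qpos],  # free-joint orientation
            "joints": [row[7:] for row in qpos],       # remaining joint angles
        }
    if version == "1.0":
        # Already split on disk: pass the named fields straight through.
        return {k: data[k] for k in ("position", "quaternion", "joints")}
    raise ValueError(f"unsupported clip format version: {version!r}")
```

This also shows that the flattened-vs-split decision need not be mutually exclusive: a versioned loader can normalize either layout into the same in-memory representation.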
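For the inferred velocity fields, one option is to precompute them at save time rather than recomputing downstream. A minimal sketch using central finite differences on the root position, assuming uniformly sampled frames (the actual pipeline derives such values via forward kinematics, and quaternion-based angular velocity needs more care than this):

```python
import numpy as np

def infer_linear_velocity(position, fps):
    """Central-difference linear velocity from per-frame root positions.

    position: (n_frames, 3) array; returns an array of the same shape.
    Endpoints use one-sided differences so output aligns frame-for-frame.
    """
    position = np.asarray(position, dtype=float)
    vel = np.empty_like(position)
    vel[1:-1] = (position[2:] - position[:-2]) * (fps / 2.0)  # central difference
    vel[0] = (position[1] - position[0]) * fps                # forward difference
    vel[-1] = (position[-1] - position[-2]) * fps             # backward difference
    return vel
```

Storing the result as a dataset alongside qpos would spare every downstream consumer from reimplementing this step.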