v1.2.1
Gymnasium-Robotics 1.2.1 Release Notes:
This minor release adds new multi-agent environments from the MaMuJoCo project. These environments have been updated to follow the PettingZoo API and use the latest mujoco bindings. In addition, the updates made for the first release of the `FrankaKitchen-v1` environment have been reverted so that the environment more closely resembles its original version in relay-policy-learning and D4RL. This resolves existing confusion with the action space (#135) and facilitates the re-creation of datasets in Minari.
We are also pinning the mujoco version to v2.3.3 until we address the following issue (google-deepmind/mujoco#833).
Breaking Changes
- Revert `FrankaKitchen-v1` environment to its original version (see the sketch after this list). @rodrigodelazcano in #145. These changes involve:
  - robot model: use the Franka robot model of the original environment instead of the model provided in mujoco_menagerie.
  - action space: remove the Inverse Kinematics control option and keep a single action space, the original joint velocity control.
  - goal tasks: tasks that were not present in the original environment have been removed (top_right_burner and bottom_right_burner), and the task names now match the original naming.
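A minimal usage sketch of the reverted environment, assuming the documented `tasks_to_complete` keyword and task names; treat both as assumptions if your installed version differs:

```python
import gymnasium as gym

# Task names follow the original naming, e.g. "microwave", "kettle".
env = gym.make("FrankaKitchen-v1", tasks_to_complete=["microwave", "kettle"])
obs, info = env.reset(seed=0)

# The single remaining action space is joint velocity control; there is no
# Inverse Kinematics option to select anymore.
action = env.action_space.sample()
obs, reward, terminated, truncated, info = env.step(action)
env.close()
```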
New Features
- Add MaMuJoCo (multi-agent MuJoCo) environments by @Kallinteris-Andreas in #53 (see the first sketch after this list). Documentation has also been included at https://robotics.farama.org/envs/MaMuJoCo/. NOTE: we are currently in the process of validating these environments (#141).
- Initialize `PointMaze` and `AntMaze` environments with random goal and reset positions by default. @rodrigodelazcano in #110, #114
- Add a `success` key to the `info` return dictionary in all `Maze` environments (see the Maze sketch after this list). @rodrigodelazcano in #110
- Recover the `set_env_state(state_dict={})` method of the Adroit hand environments from https://github.com/vikashplus/mj_envs. The initial state of the simulation can also be set by passing the dictionary argument `initial_state_dict` when calling `env.reset(options={'initial_state_dict': Dict})` (see the Adroit sketch after this list). @rodrigodelazcano in #119, @rodrigodelazcano in #115
- Re-sparsify Adroit hand environments by @jjshoots in #111
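A minimal sketch of the PettingZoo parallel API loop for the new MaMuJoCo environments; the `mamujoco_v0.parallel_env` entry point and its arguments follow the linked docs, so treat the exact names as assumptions:

```python
from gymnasium_robotics import mamujoco_v0

# Split the Ant robot into two agents, each controlling 4 of its joints.
env = mamujoco_v0.parallel_env(scenario="Ant", agent_conf="2x4")
observations, infos = env.reset(seed=42)

while env.agents:
    # One action per live agent, as in any PettingZoo parallel environment.
    actions = {agent: env.action_space(agent).sample() for agent in env.agents}
    observations, rewards, terminations, truncations, infos = env.step(actions)
env.close()
```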
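A sketch of the new Maze defaults and the `success` flag, assuming the `PointMaze_UMaze-v3` environment ID from the docs:

```python
import gymnasium as gym

env = gym.make("PointMaze_UMaze-v3")
# Goal and agent start locations are now randomized on every reset by default.
obs, info = env.reset(seed=7)

for _ in range(500):
    obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
    if info["success"]:  # present in all Maze environments as of this release
        print("goal reached")
        break
    if terminated or truncated:
        obs, info = env.reset()
env.close()
```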
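And a sketch of restoring an Adroit hand simulation state; the `get_env_state()` getter mirrors the upstream mj_envs API and is an assumption here, since only `set_env_state` and the `initial_state_dict` reset option are named above:

```python
import gymnasium as gym

env = gym.make("AdroitHandDoor-v1")
env.reset(seed=0)

# Capture the full simulation state (assumed getter, mirroring mj_envs).
state_dict = env.unwrapped.get_env_state()

env.step(env.action_space.sample())

# Restore it directly...
env.unwrapped.set_env_state(state_dict)
# ...or at reset time through the `options` argument.
obs, info = env.reset(options={"initial_state_dict": state_dict})
env.close()
```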
Bug Fixes
- Add missing underscore to fix rendering by @frankroeder in #102
- Correct `point_obs` slicing for `achieved_goal` in `PointMaze` environments by @dohmjan in #105
- Update the position of the goal on every reset in the `AntMaze` environment by @nicehiro in #106
- Correct `FetchReach` environment versioning from `v3` to `v2` by @aalmuzairee in #121
- Fix issue #128: use `jnt_dofadr` instead of `jnt_qposadr` for the `mujoco_utils.get_joint_qvel()` utility function (see the sketch after this list) by @rodrigodelazcano in #129
- Correct x, y scaling for Maze environments. @rodrigodelazcano in #110
- Fix door state space key by @rodrigodelazcano in #130
- Make getter functions for qpos / qvel return copies by @hueds in #136
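For context on the #128 fix: `data.qpos` and `data.qvel` use different address arrays because some joint types occupy a different number of position and velocity entries. A minimal sketch with a hypothetical two-joint model:

```python
import mujoco

# A free joint uses 7 qpos entries (3D position + unit quaternion) but only
# 6 qvel entries (linear + angular velocity), so the qpos and qvel addresses
# of any joint that follows it diverge.
MODEL_XML = """
<mujoco>
  <worldbody>
    <body>
      <freejoint name="root"/>
      <geom size="0.1"/>
      <body pos="0 0 0.2">
        <joint name="hinge" type="hinge"/>
        <geom size="0.05"/>
      </body>
    </body>
  </worldbody>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(MODEL_XML)
hinge_id = model.joint("hinge").id

print(model.jnt_qposadr[hinge_id])  # 7 -> where the hinge lives in data.qpos
print(model.jnt_dofadr[hinge_id])   # 6 -> where the hinge lives in data.qvel
```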
Minor Changes
- Enable `pyright.reportOptionalMemberAccess` by @Kallinteris-Andreas in #93
- Add Farama Notifications by @jjshoots in #120
Documentation
- Fix observation space table in `FetchSlide` docs. @rodrigodelazcano in #109
- Update docs/README.md to link to a new CONTRIBUTING.md for docs by @mgoulao in #117
- Add docs versioning and release notes by @mgoulao in #124
- Fix missing edit button by @mgoulao in #138
- Add missing docs requirement by @mgoulao in #125
- Add sparse reward variant for `AdroitHand` environments by @jjshoots in #123
Full Changelog: v1.2.0...v1.2.1