This code was tested with Python 3.8 and gcc 9.4.0 on Ubuntu 20.04. The repository comes with all the features of the RaiSim physics simulation, as ArtiGrasp is integrated into RaiSim.
The ArtiGrasp-related code can be found in the `raisimGymTorch` subfolder. There are six environments (see `envs`):
- `compose_eval`: quantitative evaluation of the Dynamic Object Grasping and Articulation task.
- `fixed_arti_evaluation`: quantitative evaluation of grasping and articulation with a fixed object base.
- `floating_evaluation`: quantitative evaluation of grasping and articulation with a free object base.
- `general_two`: training environment for two-hand cooperation with a free object base.
- `left_fixed`: training environment for the left-hand policy with a fixed object base.
- `multi_obj_arti`: training environment for the right-hand policy with a fixed object base.
As good practice for Python package management, it is recommended to use a virtual environment (e.g., virtualenv or conda) so that packages from different projects do not interfere with each other.
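As a minimal sketch of that setup using virtualenv (the environment name `artigrasp-env` is purely illustrative; conda works analogously with `conda create -n <name> python=3.8`):

```shell
# Create and activate an isolated environment for this project.
# 'artigrasp-env' is an illustrative name; any name works.
python3 -m venv artigrasp-env
source artigrasp-env/bin/activate
python -m pip install --upgrade pip
```

After activation, all `pip install` commands affect only this environment.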
For installation, follow the steps documented in docs/INSTALLATION.md. Note that you need a valid, free license for the RaiSim physics simulation and an activation key, which can be obtained via this link.
We provide some pre-trained models to view the output of our method. They are stored in this folder.
- For interactive visualizations, run `raisimUnity/linux/raisimUnity.x86_64` and check the Auto-connect option.
- To randomly choose an object and visualize the generated sequences, run `python raisimGymTorch/env/envs/general_two/runner_eval.py`.
- For the pre-training phase of the right-hand policy with fixed-base objects, run `python raisimGymTorch/env/envs/multi_obj_arti/runner.py -re` (pass `-re` to load the checkpoint; omit it to train from scratch).
- For the pre-training phase of the left-hand policy with fixed-base objects, run `python raisimGymTorch/env/envs/left_fixed/runner.py -re` (pass `-re` to load the checkpoint; omit it to train from scratch).
- For the fine-tuning phase of the two-hand policies with free-base objects for cooperation, run `python raisimGymTorch/env/envs/general_two/runner.py -re` (pass `-re` to load the checkpoint; omit it to train from scratch).
- For grasping and articulation of free-base objects, run `python raisimGymTorch/env/envs/floating_evaluation/runner_eval.py -obj '<obj_name>' -test -grasp` (pass `-test` to evaluate on the test set, omit it for the training set; pass `-grasp` for grasping, omit it for articulation).
- For articulation of fixed-base objects, run `python raisimGymTorch/env/envs/fixed_arti_evaluation/runner_eval.py -obj '<obj_name>' -test` (pass `-test` to evaluate on the test set, omit it for the training set).
- For the Dynamic Object Grasping and Articulation task (all objects except ketchup), run `python raisimGymTorch/env/envs/composed_eval/compose_eval.py -obj '<obj_name>' -test` (pass `-test` to evaluate on the test set, omit it for the training set).
- For the Dynamic Object Grasping and Articulation task (ketchup), run `python raisimGymTorch/env/envs/composed_eval/ketchup_eval.py -obj '<obj_name>' -test` (pass `-test` to evaluate on the test set, omit it for the training set).
If you encounter an error like
`ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (3,) + inhomogeneous part.`
try
`pip install numpy==1.23.1`
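For context, this error comes from NumPy 1.24+, which no longer builds ragged (inhomogeneous) arrays implicitly; numpy 1.23.1 still permitted it with only a deprecation warning, which is why pinning it helps. A minimal illustration, independent of this repository, of the explicit workaround:

```python
import numpy as np

# NumPy >= 1.24 raises the ValueError above when nested lists have
# unequal lengths and no explicit dtype is given; with dtype=object,
# the ragged input is accepted on all recent NumPy versions.
ragged = [[1.0, 2.0, 3.0], [4.0, 5.0]]
arr = np.array(ragged, dtype=object)
print(arr.shape)  # (2,)
```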
If you encounter an error like
`"dump()" has been removed, use yaml = YAML(typ='unsafe', pure=True) yaml.dump(...)`
try
`pip install ruamel.yaml==0.17.16`
To cite us, please use the following:
@inProceedings{zhang2024artigrasp,
title={{ArtiGrasp}: Physically Plausible Synthesis of Bi-Manual Dexterous Grasping and Articulation},
author={Zhang, Hui and Christen, Sammy and Fan, Zicong and Zheng, Luocheng and Hwangbo, Jemin and Song, Jie and Hilliges, Otmar},
booktitle={International Conference on 3D Vision (3DV)},
year={2024}
}
See the following license.