DexArt: Benchmarking Generalizable Dexterous Manipulation with Articulated Objects,
Chen Bao*, Helin Xu*, Yuzhe Qin, Xiaolong Wang, CVPR 2023.
DexArt is a novel benchmark and pipeline for learning multiple dexterous manipulation tasks. This repo contains the simulated environment and training code for DexArt.
- Clone the repo and create a conda env with all the Python dependencies.

```shell
git clone [email protected]:Kami-code/dexart-release.git
cd dexart-release
conda create --name dexart python=3.8
conda activate dexart
pip install -e .  # for the simulation environment
conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 -c pytorch  # for training and visualizing trained policies
```
- Download the assets from the Google Drive and place the `assets` directory at the project root directory. The file structure is as follows:
```
dexart/env/            # environments
assets/                # task annotations, object and robot URDFs
examples/              # example code to try DexArt
stable_baselines3/     # RL training code modified from stable_baselines3
```
Try the environment with random actions:

```shell
python examples/random_action.py --task_name=laptop
```

- `task_name`: name of the environment [`faucet`, `laptop`, `bucket`, `toilet`]
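Since every example script takes the same `task_name` values, a small guard can catch typos before launching the simulator (`valid_task` is a hypothetical helper, not part of the repo):

```shell
# Illustrative helper: validate a task name against the list above.
valid_task() {
  case "$1" in
    faucet|laptop|bucket|toilet) return 0 ;;
    *) echo "unknown task: $1" >&2; return 1 ;;
  esac
}
```

For example, `valid_task laptop && python examples/random_action.py --task_name=laptop`.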
Visualize the environment observation:

```shell
python examples/visualize_observation.py --task_name=laptop
```

- `task_name`: name of the environment [`faucet`, `laptop`, `bucket`, `toilet`]
Visualize a trained policy:

```shell
python examples/visualize_policy.py --task_name=laptop --checkpoint_path assets/rl_checkpoints/laptop.zip
```

- `task_name`: name of the environment [`faucet`, `laptop`, `bucket`, `toilet`]
- `use_test_set`: flag to evaluate on unseen (test) instances instead of seen (train) instances
Evaluate a trained policy:

```shell
python examples/evaluate_policy.py --task_name=laptop --checkpoint_path assets/rl_checkpoints/laptop.zip --eval_per_instance 10
```

- `task_name`: name of the environment [`faucet`, `laptop`, `bucket`, `toilet`]
- `use_test_set`: flag to evaluate on unseen (test) instances instead of seen (train) instances
Train a policy with RL:

```shell
python3 examples/train.py --n 100 --workers 10 --iter 5000 --lr 0.0001 \
    --seed 100 --bs 500 --task_name laptop --extractor_name smallpn \
    --pretrain_path ./assets/vision_pretrain/laptop_smallpn_fulldata.pth
```
- `n`: number of rollouts collected per episode
- `workers`: number of parallel simulation processes
- `iter`: total number of training episodes
- `lr`: RL learning rate
- `seed`: random seed for RL
- `bs`: batch size of the RL update
- `task_name`: name of the training environment [`faucet`, `laptop`, `bucket`, `toilet`]
- `extractor_name`: PointNet architecture [`smallpn`, `mediumpn`, `largepn`]
- `pretrain_path`: path to the downloaded pretrained model (default: `None`)
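RL results are often seed-sensitive, so a small seed sweep over the training command above can be sketched as follows (the seed values here are illustrative):

```shell
# Print the training command for a few seeds; remove the echo and the
# surrounding quotes to launch the runs sequentially.
for seed in 100 200 300; do
  echo "python3 examples/train.py --n 100 --workers 10 --iter 5000 --lr 0.0001 --seed $seed --bs 500 --task_name laptop --extractor_name smallpn --pretrain_path ./assets/vision_pretrain/laptop_smallpn_fulldata.pth"
done
```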
```
@inproceedings{bao2023dexart,
  title={DexArt: Benchmarking Generalizable Dexterous Manipulation with Articulated Objects},
  author={Chen Bao and Helin Xu and Yuzhe Qin and Xiaolong Wang},
  booktitle={Conference on Computer Vision and Pattern Recognition 2023},
  year={2023},
  url={https://openreview.net/forum?id=v-KQONFyeKp}
}
```
This repository follows the same code structure for the simulation environment and training code as DexPoint.