CVPR, 2023

Shaowei Liu · Saurabh Gupta* · Shenlong Wang*
This repository contains a PyTorch implementation for the paper: Building Rearticulable Models for Arbitrary 3D Objects from 4D Point Clouds. In this paper, we build animatable 3D models from arbitrary articulated object point cloud sequences.
- Clone this repository:

```bash
git clone https://github.com/stevenlsw/reart
cd reart
```
- Install requirements in a virtual environment:

```bash
sh setup_env.sh
```

The code is tested on Python 3.6.13 and PyTorch 1.10.2+cu113.
Run our Colab notebook for a quick start!
The `demo_data` folder contains the data and pretrained models for the Nao robot. We provide two pretrained models: `base-2` is the relaxation model and `kinematic-2` is the projection model. The postfix `2` is the canonical frame index, which is selected by the lowest energy. The canonical frame index `cano_idx` should be consistent with the postfix in the pretrained model name.
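For intuition, canonical frame selection amounts to picking the frame whose energy is lowest. A minimal sketch, where the function name and energy values are purely illustrative (not the repo's actual API):

```python
# Hypothetical sketch of canonical frame selection: pick the frame
# with the lowest energy. Names and values are illustrative only.
import numpy as np

def select_canonical_frame(energies):
    """energies: one energy value per frame in the sequence."""
    return int(np.argmin(energies))

cano_idx = select_canonical_frame([0.71, 0.65, 0.52, 0.80])
print(cano_idx)  # -> 2, matching the postfix in base-2 / kinematic-2
```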
- Projection model:

```bash
python run_robot.py --seq_path=demo_data/data/nao --save_root=exp --cano_idx=2 --evaluate --resume=demo_data/pretrained/nao/kinematic-2/model.pth.tar --model=kinematic
```

- Relaxation model:

```bash
python run_robot.py --seq_path=demo_data/data/nao --save_root=exp --cano_idx=2 --evaluate --resume=demo_data/pretrained/nao/base-2/model.pth.tar --model=base
```
After running the command, results are stored in `${save_root}/${robot name}`. `input.gif` visualizes the input sequence, `recon.gif` the reconstruction, and `gt.gif` the ground truth. `seg.html` visualizes the predicted segmentation, and `structure.html` the inferred topology. `result.txt` contains the evaluation results.
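The inferred topology is evaluated against the ground truth with a tree edit distance (the repo credits APTED for this measure; see the acknowledgements). A minimal sketch using the `apted` Python package, with hypothetical kinematic trees in APTED's bracket notation:

```python
# Tree edit distance between two kinematic trees using the apted
# package (pip install apted). The trees below are hypothetical
# examples, not outputs of this repo.
from apted import APTED
from apted.helpers import Tree

pred = Tree.from_text("{base{torso{head}{left_arm}}{right_arm}}")
gt = Tree.from_text("{base{torso{head}}{left_arm}{right_arm}}")

print(APTED(pred, gt).compute_edit_distance())  # number of node edits
```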
(GIF comparison: Input | Recon | GT)
Download the data from here and save as the `data` folder.

```
data
├── robot
│   └── nao - robot name
│   └── ...
├── category_normalize_scale.pkl - center and scale of each category
├── real
│   └── toy - real scan object
│   └── switch
```
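To peek at the per-category normalization parameters, `category_normalize_scale.pkl` can be loaded with `pickle`. The key layout below is an assumption, so print the keys first:

```python
# Inspect the per-category center and scale. The key layout is an
# assumption; print the keys to see the actual structure.
import pickle

with open("data/category_normalize_scale.pkl", "rb") as f:
    norm = pickle.load(f)

print(type(norm))
if isinstance(norm, dict):
    print(list(norm.keys()))  # expected: one entry per category (assumed)
```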
Download pretrained models from here and save as the `pretrained` folder.

```
pretrained
├── robot
│   └── nao - robot name
│       ├── base-{cano_idx} - pretrained relaxation model
│       └── kinematic-{cano_idx} - pretrained projection model
├── real
└── corr_model.pth.tar - pretrained correspondence model
```
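A quick way to sanity-check the downloaded correspondence model is to load the checkpoint and list its top-level keys. This assumes a standard PyTorch checkpoint layout, which is not guaranteed:

```python
# Sanity-check the pretrained correspondence model checkpoint.
# Assumes a standard PyTorch checkpoint; key names are not guaranteed.
import torch

ckpt = torch.load("pretrained/corr_model.pth.tar", map_location="cpu")
print(type(ckpt))
if isinstance(ckpt, dict):
    print(list(ckpt.keys())[:10])  # e.g. a state_dict or a wrapper dict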
Take `nao` as an example. `corr_model.pth.tar` is needed for training. We recommend setting `cano_idx` the same as in our released pretrained model to get the reported performance for each category.
```bash
python run_robot.py --seq_path=data/robot/nao --save_root=exp --cano_idx=2 --use_flow_loss --use_nproc --use_assign_loss --downsample 4 --n_iter=15000
```
The relaxation results are stored at `${save_root}/${robot name}/result.pkl` and are needed for training the projection model.
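Before launching the projection stage, it can help to confirm the relaxation output loads cleanly. A minimal sketch; the pickle's contents are an assumption here:

```python
# Verify the relaxation output exists and unpickles before passing it
# as --base_result_path. Its contents are an assumption; print the keys.
import pickle

with open("exp/nao/result.pkl", "rb") as f:
    base_result = pickle.load(f)

print(type(base_result))
if isinstance(base_result, dict):
    print(list(base_result.keys()))
```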
Set the relaxation result `base_result_path` as above.
```bash
python run_robot.py --seq_path=data/robot/nao --save_root=exp --cano_idx=2 --use_flow_loss --use_nproc --use_assign_loss --model=kinematic --base_result_path=exp/nao/result.pkl --assign_iter=0 --downsample=2 --assign_gap=1 --snapshot_gap=10
```

Evaluate with the released pretrained model:

```bash
python run_robot.py --seq_path=data/robot/nao --save_root=exp --cano_idx=2 --evaluate --resume=pretrained/robot/nao/kinematic-2/model.pth.tar --model=kinematic
```
See all robots and pretrained models in `pretrained/robot`. Take `spot` as another example; you could get:
(GIF comparison: Input | Recon | GT)
Follow instructions similar to the robot experiments. Take `toy` as an example.
Evaluate the pretrained model:

```bash
python run_real.py --seq_path=data/real/toy --evaluate --model=kinematic --save_root=exp --cano_idx=0 --resume=pretrained/real/toy/kinematic-0/model.pth.tar
```

Train the relaxation model:

```bash
python run_real.py --seq_path=data/real/toy --save_root=exp --cano_idx=0 --use_flow_loss --use_nproc --use_assign_loss --assign_iter=1000
```

Train the projection model:

```bash
python run_real.py --seq_path=data/real/toy --cano_idx=0 --save_root=exp --n_iter=200 --use_flow_loss --use_nproc --use_assign_loss --model=kinematic --assign_iter=0 --assign_gap=1 --snapshot_gap=10 --base_result_path=exp/toy/result.pkl
```
We provide the real scans `toy` and `switch`, captured with the Polycam app on an iPhone. Take `toy` as an example; you could get:
(GIF comparison: Input | Recon)
- Data: Follow multibody-sync, download `mbs_sapien.zip`, unzip it as `mbs_sapien`, and put it under the `data` folder.
- Model: We use the pretrained flow model from multibody-sync in our method for a fair comparison. First clone the repo as `msync`:

  ```bash
  git clone https://github.com/huangjh-pub/multibody-sync.git msync
  ```

  Follow the multibody-sync instructions, download the trained weights, and extract them to `msync/ckpt/articulated-full/best.pth.tar`.
Specify `sapien_idx` to select different Sapien objects; all experiments use canonical frame 0 (`cano_idx=0`).
```bash
python run_sapien.py --sapien_idx=212 --save_root=exp --n_iter=2000 --cano_idx=0 --use_flow_loss --use_nproc --use_assign_loss
```
The relaxation results are stored at `${save_root}/sapien_{sapien_idx}/result.pkl` and are needed for training the projection model. Set the relaxation result `base_result_path` as above.
```bash
python run_sapien.py --sapien_idx=212 --save_root=exp --n_iter=200 --cano_idx=0 --model=kinematic --use_flow_loss --use_nproc --use_assign_loss --assign_iter=0 --assign_gap=1 --snapshot_gap=10 --base_result_path=exp/sapien_212/result.pkl
```
After training, results are stored in `${save_root}/sapien_{sapien_idx}/`. `result.txt` contains the evaluation results.
Take `sapien_idx=212` as an example; you could get:
(GIF comparison: Input | Recon | GT)
If you find our work useful in your research, please cite:
```bibtex
@inproceedings{liu2023building,
  title={Building Rearticulable Models for Arbitrary 3D Objects from 4D Point Clouds},
  author={Liu, Shaowei and Gupta, Saurabh and Wang, Shenlong},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={21138--21147},
  year={2023}
}
```
We thank:
- Watch It Move for the MST implementation
- multibody-sync for the Sapien dataset setup
- APTED for the tree edit distance measure
- KNN_CUDA for KNN with CUDA support
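The acknowledged MST implementation relates to recovering a tree-structured topology over parts. For intuition only, a minimal sketch of extracting a tree from a pairwise connection-cost matrix with SciPy; the cost values are hypothetical and this is not the repo's code:

```python
# Illustrative only: recover a tree over parts from a pairwise
# connection-cost matrix via a minimum spanning tree (SciPy).
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

# Hypothetical symmetric cost matrix between 4 parts.
cost = np.array([
    [0.0, 0.2, 0.9, 0.8],
    [0.2, 0.0, 0.3, 0.7],
    [0.9, 0.3, 0.0, 0.4],
    [0.8, 0.7, 0.4, 0.0],
])

mst = minimum_spanning_tree(cost).toarray()
edges = list(zip(*np.nonzero(mst)))
print(edges)  # e.g. [(0, 1), (1, 2), (2, 3)]
```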