This is the GitHub repository for VIRDO: Visio-tactile Implicit Representations of Deformable Objects (ICRA 2022). The code builds on the SIREN and PointNet repositories (a conceptual sketch of this style of implicit representation follows the list below). It supports:
- Reconstruction & latent space composition
- Inference using partial point clouds
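The core idea is an implicit representation: a coordinate network maps a 3D query point, conditioned on latent codes, to a signed distance. Below is a minimal conceptual sketch in PyTorch of a SIREN-style conditioned SDF. All names here (`SineLayer`, `ImplicitSDF`, the latent dimensions) are hypothetical illustrations, not the repository's actual modules.

```python
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """SIREN-style layer: a linear map followed by a scaled sine activation."""
    def __init__(self, in_features, out_features, omega_0=30.0):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.omega_0 = omega_0

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))

class ImplicitSDF(nn.Module):
    """Maps a 3D query point plus object/deformation latent codes to an SDF value."""
    def __init__(self, latent_dim=64, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            SineLayer(3 + 2 * latent_dim, hidden),
            SineLayer(hidden, hidden),
            nn.Linear(hidden, 1),  # signed distance to the surface
        )

    def forward(self, xyz, object_code, deform_code):
        # Broadcast the (1, latent_dim) codes to every query point.
        codes = torch.cat([object_code, deform_code], dim=-1)
        codes = codes.expand(xyz.shape[0], -1)
        return self.net(torch.cat([xyz, codes], dim=-1))

# Query 1024 random points against one object/deformation latent pair.
model = ImplicitSDF()
xyz = torch.rand(1024, 3) * 2 - 1
sdf = model(xyz, torch.zeros(1, 64), torch.zeros(1, 64))
```

Roughly, keeping separate codes for the nominal object and its deformation and combining them at query time is the flavor of latent-space composition listed above.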
```bash
conda create -n virdo python=3.8
conda activate virdo
conda install pytorch==1.7.0 torchvision==0.8.0 torchaudio==0.7.0 cudatoolkit=11.0 -c pytorch
conda install -c fvcore -c iopath -c conda-forge fvcore iopath
conda install -c bottler nvidiacub
conda install pytorch3d=0.4.0 -c pytorch3d
pip install -r requirements.txt
pip install --ignore-installed open3d
```
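Before moving on, it may help to verify that the environment imports cleanly; a quick check, assuming the installs above succeeded:

```python
# Sanity check that the key packages import and CUDA is visible.
import torch
import pytorch3d
import open3d as o3d

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("pytorch3d:", pytorch3d.__version__)
print("open3d:", o3d.__version__)
```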
After installation, re-source your shell or reopen the terminal. Make sure `wget` and `unzip` are installed:

```bash
apt-get install wget
apt-get install unzip
```
Then fetch the dataset and pretrained models:

```bash
source download.sh
download_dataset
download_pretrained
```
Alternatively, you can manually download the dataset and pretrained models from here. Then place the files as shown below:
```
VIRDO
├── data
│   └── virdo_simul_dataset.pickle
└── pretrained_model
    ├── force_final.pth
    ├── object_final.pth
    └── deform_final.pth
```
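The internal structure of `virdo_simul_dataset.pickle` is not documented here; a short sketch to inspect its top-level layout (the printed keys depend on the actual file):

```python
# Inspect the top-level structure of the simulation dataset.
import pickle

with open("data/virdo_simul_dataset.pickle", "rb") as f:
    dataset = pickle.load(f)

print(type(dataset))
if isinstance(dataset, dict):
    for key in dataset:
        print(key, type(dataset[key]))
```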
```bash
python pretrain.py --config config/virdo.yaml --gpu_id 0
```
To check the result of your pretrained model, run:

```bash
python pretrain.py --config config/virdo.yaml --gpu_id 0 --from_pretrained logs/pretrain/checkpoints/shape_latest.pth
```

The nominal reconstructions will appear in the `logs/pretrain/ply` directory.
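To view one of the generated `.ply` reconstructions, Open3D (installed above) can render it. A minimal sketch; the exact filenames inside `logs/pretrain/ply` are assumptions:

```python
# Load a reconstructed mesh and open an interactive viewer.
import glob
import open3d as o3d

ply_files = sorted(glob.glob("logs/pretrain/ply/*.ply"))
mesh = o3d.io.read_triangle_mesh(ply_files[0])
mesh.compute_vertex_normals()  # needed for shaded rendering
o3d.visualization.draw_geometries([mesh])
```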
To train the full model from the pretrained shape checkpoint:

```bash
python train.py --config config/virdo.yaml --gpu_id 0 --pretrain_path logs/pretrain/checkpoints/shape_latest.pth
```
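If you want to see what a saved checkpoint contains before reusing it, a small inspection sketch (the key names depend on how the training scripts save state):

```python
# Peek at the contents of a checkpoint without loading it onto the GPU.
import torch

ckpt = torch.load("logs/pretrain/checkpoints/shape_latest.pth", map_location="cpu")
if isinstance(ckpt, dict):
    for key, value in ckpt.items():
        shape = tuple(value.shape) if torch.is_tensor(value) else type(value).__name__
        print(key, shape)
```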