VMarker-Pro: Probabilistic 3D Human Mesh Estimation from Virtual Markers


Introduction

This is the official PyTorch implementation of our paper:

VMarker-Pro: Probabilistic 3D Human Mesh Estimation from Virtual Markers

It is an extension of VMarker (CVPR 2023) that adds support for probabilistic 3D human mesh estimation. Below is the overall VMarker-Pro framework.

Notes 🎯

This repository provides guidelines for VMarker-Pro exclusively. It is backward compatible and supports all VMarker commands; for instructions on training and inference with VMarker, please refer to VMarker. The only difference is that this repository supports PyTorch DDP training.

Installation

  1. Install dependencies. This project is developed with Python >= 3.8 on Ubuntu 16.04 and requires NVIDIA GPUs. We recommend using an Anaconda virtual environment.
  # 1. Create a conda virtual environment.
  conda create -n pytorch python=3.8 -y
  conda activate pytorch

  # 2. Install PyTorch >= 1.13.0 following the official instructions (https://pytorch.org/). Please adapt the CUDA version to yours.
  pip install torch==1.13.0+cu117 torchvision==0.14.0+cu117 torchaudio==0.13.0 --extra-index-url https://download.pytorch.org/whl/cu117
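  # (Optional) Sanity check, added here for convenience and not part of the original
  # scripts: print the installed PyTorch version and whether CUDA is visible.
  python -c "import torch; print(torch.__version__, torch.cuda.is_available())"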


  # 3. Pull our code.
  git clone https://github.com/ShirleyMaxx/VMarker-Pro.git
  cd VMarker-Pro

  # 4. Install other packages. This project doesn't have any special or difficult-to-install dependencies.
  sh requirements.sh

  # 5. Install VMarker-Pro.
  python setup.py develop
  2. Prepare the SMPL layer. We use smplx (a minimal loading sketch is given after the directory overview below).

    1. Install the smplx package via pip install smplx. This is already done in the first step.
    2. Download basicModel_f_lbs_10_207_0_v1.0.0.pkl, basicModel_m_lbs_10_207_0_v1.0.0.pkl, and basicModel_neutral_lbs_10_207_0_v1.0.0.pkl from here (female & male) and here (neutral) to ${Project}/data/smpl. Please rename them as SMPL_FEMALE.pkl, SMPL_MALE.pkl, and SMPL_NEUTRAL.pkl, respectively.
    3. Download the other SMPL-related files from Google Drive or OneDrive and put them in ${Project}/data/smpl.
  3. Download the data following the Data section below. In summary, your directory tree should look like this:

  ${Project}
  ├── assets
  ├── command
  ├── configs
  ├── data 
  ├── demo 
  ├── experiment 
  ├── inputs 
  ├── vmpro 
  ├── main 
  ├── models 
  ├── README.md
  ├── setup.py
  └── requirements.sh
  • assets contains the body virtual markers in npz format. Feel free to use them.
  • command contains the running scripts.
  • configs contains the configurations in yml format.
  • data contains soft links to images and annotations directories.
  • vmpro contains kernel codes for our method.
  • main contains high-level codes for training or testing the network.
  • models contains pre-trained weights. Download them from Google Drive or OneDrive.
  • experiment will be created automatically after running the code; it contains the outputs, including trained model weights, test metrics, and visualized outputs.
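
To verify that the SMPL files are in place, here is a minimal sketch (an illustrative addition, not one of the provided scripts) that loads the neutral SMPL model from data/smpl with the smplx package and runs it once with zero pose and shape parameters:

  # Minimal sketch: load the neutral SMPL body model with smplx.
  # Assumes data/smpl/SMPL_NEUTRAL.pkl exists, as described in the SMPL step above.
  import torch
  import smplx

  smpl = smplx.SMPL(model_path="data/smpl", gender="neutral")
  output = smpl(
      betas=torch.zeros(1, 10),         # shape parameters
      body_pose=torch.zeros(1, 69),     # axis-angle body pose (23 joints x 3)
      global_orient=torch.zeros(1, 3),  # root orientation
  )
  print(output.vertices.shape)  # expected: torch.Size([1, 6890, 3])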

Quick demo ⭐

  1. Installation. Make sure you have finished the above installation successfully. VMarker-Pro does not detect people; it only estimates relative pose and mesh, so please also install VirtualPose following its instructions. VirtualPose detects all the people in the image and estimates their root depths. Download its model weight from Google Drive or OneDrive and put it under VirtualPose.
git clone https://github.com/wkom/VirtualPose.git
cd VirtualPose
python setup.py develop
  2. Render environment. If you run this code in an SSH environment without a display device, please do the following (a short sketch is given after the run command below):
    1. Install osmesa following https://pyrender.readthedocs.io/en/latest/install/
    2. Reinstall the specific PyOpenGL fork: https://github.com/mmatl/pyopengl
    3. Set OpenGL's backend to osmesa via os.environ["PYOPENGL_PLATFORM"] = "osmesa"
  3. Model weight. Download the pre-trained VMarker-Pro baseline model from OneDrive. Put the weight under the experiment folder, following the directory structure. Specify the weight path via test.weight_path in configs/diff3dmesh_infer/baseline.yml.

  4. Input image/video. Prepare input.jpg or input.mp4 and put it in the inputs folder. Both image and video input are supported. Specify the input path and type via arguments.

  5. Run. You can check the output at experiment/diff3dmesh_infer/exp_*/vis. By default, we output only one solution. If multiple hypotheses are needed, please set cfg.test.multi_n accordingly.

sh command/diff3dmesh_infer/baseline.sh
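
For reference, below is a small sketch of the headless-rendering setup from the render environment step: the backend must be selected through the environment variable before pyrender (or anything else that imports OpenGL) is imported. The offscreen-renderer check at the end is an illustrative addition.

  # Headless rendering sketch: select the osmesa backend before importing pyrender.
  import os
  os.environ["PYOPENGL_PLATFORM"] = "osmesa"  # must be set before any OpenGL import

  import pyrender  # imported after setting the backend

  # Create (and release) an offscreen renderer to confirm osmesa works without a display.
  renderer = pyrender.OffscreenRenderer(viewport_width=256, viewport_height=256)
  renderer.delete()
  print("osmesa backend OK")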

Train & Eval

Data

Please follow the VMarker Data section to prepare the data directory.

Train

Every experiment is defined by a config file. Configs of the experiments in the paper can be found in the ./configs directory. You can use the scripts under command to run them.

To train the model, simply run the scripts below. Specific configurations can be modified in the corresponding configs/diff3dmesh_train/baseline.yml file. The default setting uses 2 GPUs (80 GB A100). Multi-GPU training is implemented with PyTorch DDP. Results can be seen in the experiment directory or in TensorBoard.

We conduct mixed training on the H3.6M and 3DPW datasets. To obtain the reported results on the 3DPW dataset, first run train_h36m.sh, then load the final weight and train on 3DPW or SURREAL by running train_pw3d.sh or train_surreal.sh.

sh command/diff3dmesh_train/train_h36m.sh
sh command/diff3dmesh_train/train_pw3d.sh
sh command/diff3dmesh_train/train_surreal.sh

Evaluation

To evaluate the model, specify the model path via test.weight_path in configs/diff3dmesh_test/baseline_*.yml. The argument --mode test should be set. Results can be seen in the experiment directory or in TensorBoard. A quick way to sanity-check the configured weight path is sketched after the commands below.

sh command/diff3dmesh_test/test_h36m.sh
sh command/diff3dmesh_test/test_pw3d.sh
sh command/diff3dmesh_test/test_surreal.sh
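
As a convenience, here is a hedged sketch (not one of the provided scripts) that loads a test config with PyYAML and checks that test.weight_path points to an existing file before evaluation is launched. The config file name used here, and the assumption that weight_path is nested under a test section, are illustrative only.

  # Sketch: verify test.weight_path in a test config before running evaluation.
  # The file name baseline_h36m.yml is an assumed example; use your baseline_*.yml.
  from pathlib import Path
  import yaml

  cfg_path = Path("configs/diff3dmesh_test/baseline_h36m.yml")
  with cfg_path.open() as f:
      cfg = yaml.safe_load(f)

  # Assumes the yml nests weight_path under a test section, as test.weight_path suggests.
  weight_path = Path(cfg["test"]["weight_path"])
  print("test.weight_path =", weight_path)
  print("exists:", weight_path.exists())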

Model Zoo

Download all model weights from OneDrive and put them under the experiment folder.

Acknowledgement

This repo is built on the excellent work of Pose2Mesh, HybrIK, CLIFF, and LDM. Thanks to these great projects.
