[Project Page] [Paper] [Supp. Mat.]
Software Copyright License for non-commercial scientific research purposes. Please read carefully the terms and conditions and any accompanying documentation before you download and/or use the SMPL-X/SMPLify-X model, data and software, (the "Model & Software"), including 3D meshes, blend weights, blend shapes, textures, software, scripts, and animations. By downloading and/or using the Model & Software (including downloading, cloning, installing, and any other use of this github repository), you acknowledge that you have read these terms and conditions, understand them, and agree to be bound by them. If you do not agree with these terms and conditions, you must not download and/or use the Model & Software. Any infringement of the terms of this agreement will automatically terminate your rights under this License.
This repository contains the fitting code used for the experiments in Expressive Body Capture: 3D Hands, Face, and Body from a Single Image.
Run the following command to execute the code:

```shell
python smplifyx/main.py --config cfg_files/fit_smplx.yaml \
    --data_folder DATA_FOLDER \
    --output_folder OUTPUT_FOLDER \
    --visualize="True/False" \
    --model_folder MODEL_FOLDER \
    --vposer_ckpt VPOSER_FOLDER \
    --part_segm_fn smplx_parts_segm.pkl
```
where DATA_FOLDER should contain two subfolders: images, where the input images are located, and keypoints, where the OpenPose output should be stored.
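Before launching the fitter, it can help to check that every image has a matching OpenPose file. The sketch below is a hypothetical helper (not part of this repository); it assumes OpenPose's default naming convention of one `<image>_keypoints.json` file per image:

```python
from pathlib import Path


def check_data_folder(data_folder):
    """Return image basenames that have a matching keypoint file.

    Assumes the layout described above: an ``images`` subfolder with the
    input images and a ``keypoints`` subfolder where OpenPose wrote one
    ``<image>_keypoints.json`` file per image.
    """
    root = Path(data_folder)
    images = {p.stem for p in (root / 'images').iterdir() if p.is_file()}
    keypoints = {p.stem.replace('_keypoints', '')
                 for p in (root / 'keypoints').glob('*.json')}
    missing = images - keypoints
    if missing:
        raise FileNotFoundError(
            f'No OpenPose keypoints found for: {sorted(missing)}')
    return sorted(images & keypoints)
```

Running this before the fitter surfaces missing keypoint files early instead of partway through a long optimization run.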
To fit SMPL or SMPL+H, replace the yaml configuration file with either fit_smpl.yaml or fit_smplh.yaml, i.e.:
- for SMPL:

```shell
python smplifyx/main.py --config cfg_files/fit_smpl.yaml \
    --data_folder DATA_FOLDER \
    --output_folder OUTPUT_FOLDER \
    --visualize="True/False" \
    --model_folder MODEL_FOLDER \
    --vposer_ckpt VPOSER_FOLDER
```
- for SMPL+H:

```shell
python smplifyx/main.py --config cfg_files/fit_smplh.yaml \
    --data_folder DATA_FOLDER \
    --output_folder OUTPUT_FOLDER \
    --visualize="True/False" \
    --model_folder MODEL_FOLDER \
    --vposer_ckpt VPOSER_FOLDER
```
To visualize the results produced by the method, you can run the following script:

```shell
python smplifyx/render_results.py --mesh_fns OUTPUT_MESH_FOLDER
```

where OUTPUT_MESH_FOLDER is the folder that contains the resulting meshes.
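For a quick sanity check of a fitted mesh without firing up the renderer, a dependency-free sketch like the following can count vertices and faces. The `obj_stats` helper is hypothetical, and the Wavefront `.obj` extension is an assumption about the output format:

```python
def obj_stats(path):
    """Count vertices and faces in a Wavefront OBJ file (minimal parser).

    Only looks at ``v`` (vertex) and ``f`` (face) records; ignores
    normals, texture coordinates, and other OBJ record types.
    """
    n_vertices = n_faces = 0
    with open(path) as f:
        for line in f:
            if line.startswith('v '):
                n_vertices += 1
            elif line.startswith('f '):
                n_faces += 1
    return n_vertices, n_faces
```

A full SMPL-X mesh should report on the order of ten thousand vertices; a near-zero count usually means the fit failed or the wrong file was loaded.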
Follow the installation instructions for each of the following before using the fitting code.
- PyTorch Mesh self-intersection for interpenetration penalty
- Download the per-triangle part segmentation here
- Trimesh for loading triangular meshes
- Pyrender for visualization
The code has been tested with Python 3.6, CUDA 10, CuDNN 7.3 and PyTorch 1.0 on Ubuntu 18.04.
If you find this Model & Software useful in your research we would kindly ask you to cite:
```
@inproceedings{SMPL-X:2019,
  title = {Expressive Body Capture: 3D Hands, Face, and Body from a Single Image},
  author = {Pavlakos, Georgios and Choutas, Vasileios and Ghorbani, Nima and Bolkart, Timo and Osman, Ahmed A. A. and Tzionas, Dimitrios and Black, Michael J.},
  booktitle = {Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)},
  year = {2019}
}
```
The LBFGS optimizer with strong Wolfe line search is taken from this PyTorch pull request. Special thanks to Du Phan for implementing this. We will update the repository once the pull request is merged.
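Note that more recent PyTorch releases (1.2 and later) ship this line search built in: `torch.optim.LBFGS` accepts `line_search_fn='strong_wolfe'`, so on newer installs the patched optimizer may not be needed. A minimal sketch minimizing a toy quadratic with it:

```python
import torch

# A toy parameter to optimize; the fitting code optimizes pose/shape
# parameters in the same way, just with a much richer objective.
x = torch.tensor([4.0, -3.0], requires_grad=True)

# Strong Wolfe line search is selected via line_search_fn (PyTorch >= 1.2).
optimizer = torch.optim.LBFGS([x], line_search_fn='strong_wolfe')


def closure():
    # LBFGS re-evaluates the objective during the line search,
    # so the loss and gradients are computed inside a closure.
    optimizer.zero_grad()
    loss = (x ** 2).sum()
    loss.backward()
    return loss


for _ in range(5):
    optimizer.step(closure)
```

For this quadratic the optimizer drives `x` to the origin within a few steps.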
The code of this repository was implemented by Vassilis Choutas and Georgios Pavlakos.
For questions, please contact [email protected].
For commercial licensing (and all related questions for business applications), please contact [email protected].