Tao Yu, Zerong Zheng, Kaiwen Guo, Pengpeng Liu, Qionghai Dai, Yebin Liu. CVPR 2021 Oral
[Project Page] [YouTube Video]
We've fixed the misaligned SMPL-X fittings and provided the code for converting SMPL-X parameters to meshes.
SMPL-X fitting results, including parameters and meshes, can be downloaded directly at: SMPL-X Fittings
THuman2.1 is now available! We've significantly extended the dataset from 500 models to 2500 models.
- The THuman2.0 dataset (the "Dataset") is available for non-commercial research purposes only. Any other use, in particular any use for commercial purposes, is prohibited. This includes, without limitation, incorporation in a commercial product, use in a commercial service, use as training data for a commercial product, use for commercial ergonomic analysis (e.g., product design, architectural design, etc.), or production of other artifacts for commercial purposes including, for example, web services, movies, television programs, mobile applications, or video games. The Dataset may not be used for pornographic purposes or to generate pornographic material, whether commercial or not. The Dataset may not be reproduced, modified and/or made available in any form to any third party without Tsinghua University's prior written permission.
- You agree not to reproduce, modify, duplicate, copy, sell, trade, resell or exploit any portion of the images or any portion of derived data in any form to any third party without Tsinghua University's prior written permission.
- You agree not to further copy, publish or distribute any portion of the Dataset. However, making copies of the Dataset for internal use at a single site within the same organization is allowed.
- Tsinghua University reserves the right to terminate your access to the Dataset at any time.
The dataset is encrypted to prevent unauthorized access.
Please fill out the request form and send it to Yebin Liu ([email protected]) and cc Tao Yu ([email protected]) to request the download link.
By requesting the link, you acknowledge that you have read the agreement, understand it, and agree to be bound by it. If you do not agree with these terms and conditions, you must not download and use the Dataset.
The THuman2.0 Dataset contains 500 high-quality human scans captured by a dense DSLR rig. For each scan, we provide the 3D model (****.obj) and the corresponding texture map (material0.jpeg).
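The per-scan OBJ files are plain Wavefront text, so they can be inspected without a mesh library. Below is a minimal sketch of a vertex reader; it parses only `v` (position) records and skips faces, normals, and UVs. The function name is our own, not part of any dataset tooling.

```python
# Minimal sketch: read vertex positions from Wavefront OBJ text.
# Only "v" records are parsed; "vt", "vn", and "f" records are skipped.
def load_obj_vertices(lines):
    """Return a list of (x, y, z) tuples from an iterable of OBJ lines."""
    vertices = []
    for line in lines:
        parts = line.split()
        if parts and parts[0] == "v":  # vertex position record
            vertices.append(tuple(float(c) for c in parts[1:4]))
    return vertices

# Usage on a tiny inline example (a real scan would be read with open("....obj")):
demo = ["v 0.0 0.0 0.0", "vt 0.5 0.5", "v 1.0 2.0 3.0", "f 1 2 3"]
verts_demo = load_obj_vertices(demo)  # [(0.0, 0.0, 0.0), (1.0, 2.0, 3.0)]
```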
Update: We have released the SMPL-X fitting parameters and the corresponding meshes; you can download them from Here.
Update: The SMPL fitting results can be downloaded Here; many thanks to Dr. Jinlong Yang and Dr. Xu Chen for their efforts.
Note: When generating the SMPL-X mesh, first scale the mesh produced by the original SMPL-X code and then translate it using the global transformation stored in the pkl file (i.e., SMPLX_Verts * scale + trans).
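The scale-then-translate order above can be sketched as follows. This is only an illustration with dummy vertices; in a real pipeline the vertices come from the original SMPL-X code and `scale`/`trans` come from the provided pkl file (the exact key names in the pkl are not shown here, so check the downloaded files).

```python
# Sketch of the alignment note above: scale first, then translate
# (SMPLX_Verts * scale + trans). Dummy data stands in for real SMPL-X output.
import numpy as np

def align_smplx_verts(smplx_verts, scale, trans):
    """Apply the global transform: scale the vertices, then translate them."""
    smplx_verts = np.asarray(smplx_verts, dtype=np.float64)
    return smplx_verts * float(scale) + np.asarray(trans, dtype=np.float64)

# Example with placeholder vertices (N x 3 array):
verts = np.array([[0.0, 1.0, 0.0],
                  [1.0, 0.0, 1.0]])
aligned = align_smplx_verts(verts, scale=2.0, trans=[0.1, 0.2, 0.3])
# aligned == [[0.1, 2.2, 0.3], [2.1, 0.2, 2.3]]
```

Translating before scaling would give a different (wrong) result whenever `trans` is nonzero, which is why the order matters.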
Please check ICON/docs/dataset.md to render RGB/Normal/Depth for both scans and SMPL-X fits, and ICON/docs/training.md to visualize them with vedo GUI interactively.
Please consider citing ICON if it also helps your project:
@inproceedings{xiu2022icon,
title = {{ICON}: {I}mplicit {C}lothed humans {O}btained from {N}ormals},
author = {Xiu, Yuliang and Yang, Jinlong and Tzionas, Dimitrios and Black, Michael J.},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {13296-13306}
}
| RGB Image | Normal (Front) | Normal (Back) | Normal (SMPL-X, Front) | Normal (SMPL-X, Back) |
| --- | --- | --- | --- | --- |
| Visibility | Depth (Front) | Depth (Back) | Depth (SMPL-X, Front) | Depth (SMPL-X, Back) |
Related Datasets from THU3DV Lab [Link]
[MultiHuman Dataset] Contains 453 high-quality scans, each containing 1-3 persons. The dataset can be used to train and evaluate multi-person reconstruction algorithms.
[THuman Dataset] Contains 6,000 scans (200 subjects x 30 poses) captured by a single RGBD sensor, with SMPL annotations.
[THuman 3.0 Dataset] Contains 20 human-garment combinations, each containing 15 to 35 high-quality human scans captured by a dense DSLR rig.
If you use this dataset for your research, please consider citing:
@InProceedings{tao2021function4d,
title={Function4D: Real-time Human Volumetric Capture from Very Sparse Consumer RGBD Sensors},
author={Yu, Tao and Zheng, Zerong and Guo, Kaiwen and Liu, Pengpeng and Dai, Qionghai and Liu, Yebin},
booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month={June},
year={2021},
}
- Tao Yu ([email protected])
- Yebin Liu ([email protected])