Note: For a complete 3D face geometry estimation and rendering solution with documentation, see pix2face, which contains this repository as a submodule.
Training Data
You will need three sets of training images: Input, PNCC, and offsets.
- Input: The input RGB face image.
- PNCC: "Projected Normalized Coordinate Code", as described in [1].
- Offsets: 3D offsets from the "mean face" position to the observed 3D position.
[1] X. Zhu, Z. Lei, X. Liu, H. Shi, and S. Z. Li, “Face Alignment Across Large Poses: A 3D Solution”, CVPR 2016.
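The relationship between the PNCC and offsets targets can be sketched as follows. This is a minimal illustration, not the repository's actual data-generation code; the array names and the use of un-normalized per-pixel 3D coordinate maps are assumptions made for clarity:

```python
import numpy as np

# Hypothetical per-pixel 3D coordinate maps for a small 4x4 image.
# pncc:     the mean-face 3D position visible at each pixel (the PNCC
#           image, left un-normalized here for simplicity).
# observed: the subject-specific 3D position at each pixel.
rng = np.random.default_rng(0)
pncc = rng.uniform(-1.0, 1.0, size=(4, 4, 3))
observed = pncc + rng.normal(scale=0.05, size=(4, 4, 3))

# The offsets target is the per-pixel difference between the observed
# 3D position and the mean-face position.
offsets = observed - pncc

# Adding the offsets back onto the mean-face positions recovers the
# observed geometry exactly.
assert np.allclose(pncc + offsets, observed)
```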
Train the network with `train.py`, providing directories of training and validation images:

```shell
python train.py --input_dir $INPUT_DIR --PNCC_dir $PNCC_DIR --offsets_dir $OFFSETS_DIR \
    --val_input_dir $VAL_INPUT_DIR --val_PNCC_dir $VAL_PNCC_DIR --val_offsets_dir $VAL_OFFSETS_DIR \
    --output_dir $OUTPUT_DIR
```
Run inference on a single image or a directory of images with `test.py`:

```shell
python test.py --model $OUTPUT_DIR/pix2face_unet.pth \
    --input <image_or_directory> --output_dir <output_dir>
```
See `demo.py` for an example of the full transformation from image --> PNCC + offsets --> 3D point cloud.
To run the demo, you will need to either train the network or download a pre-trained model.
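The final step of that pipeline — combining a PNCC image and an offsets image into a 3D point cloud — can be sketched as below. This is a simplified illustration, not the code in `demo.py`; the function name, array shapes, and the convention that background pixels have all-zero PNCC values are assumptions:

```python
import numpy as np

def pncc_offsets_to_point_cloud(pncc, offsets):
    """Return an (N, 3) array of 3D points for the foreground pixels.

    pncc, offsets: (H, W, 3) per-pixel 3D coordinate maps.
    """
    # Foreground mask: pixels where the PNCC value is non-zero
    # (assumed background convention).
    mask = np.any(pncc != 0, axis=-1)
    # Observed 3D position = mean-face position (PNCC) + offset.
    return pncc[mask] + offsets[mask]

# Toy example: a 2x2 image with one background pixel.
pncc = np.array([[[0.1, 0.2, 0.3], [0.0, 0.0, 0.0]],
                 [[0.4, 0.5, 0.6], [0.7, 0.8, 0.9]]])
offsets = np.full((2, 2, 3), 0.01)
cloud = pncc_offsets_to_point_cloud(pncc, offsets)
print(cloud.shape)  # (3, 3): three foreground pixels, xyz each
```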
If you find this software useful, please consider referencing:
```bibtex
@INPROCEEDINGS{pix2face2017,
  author    = {Daniel Crispell and Maxim Bazik},
  title     = {Pix2Face: Direct 3D Face Model Estimation},
  booktitle = {2017 IEEE International Conference on Computer Vision Workshop (ICCVW)},
  year      = {2017},
  month     = {Oct.},
  pages     = {2512-2518},
  ISSN      = {2473-9944}
}
```
Contact: Daniel Crispell, [email protected]