A video demo has been added. To use it, follow these steps (a consolidated shell sketch follows this list):
- use the new Dockerfile to build the image
- all Python library versions are pinned in the new Dockerfile
- run a container and work inside it:
- docker run --gpus all -v /home/kevin/aj/DensePose/DensePoseData:/denseposedata -v /home/kevin/aj/DensePose/tools:/densepose/tools_kk -it densepose:c2-cuda9-cudnn7 bash
- replace the container-local DensePoseData directory with the host-mounted one:
- mv /densepose/DensePoseData /densepose/DensePoseDataLocal
- ln -s /denseposedata DensePoseData
- copy tools_kk/vis_kk.py to detectron/utils (you can also do this from infer_vid.py when running the demo)
- install ffmpeg (you can also do this in the Dockerfile when building the image)
- save the container as a new image:
- docker commit $(docker ps --last 1 -q) densepose:c2-cuda9-cudnn7-kk
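For convenience, the container preparation above can be run as one sequence. This is a minimal sketch assuming the host checkout lives at /home/kevin/aj/DensePose (as in the demo commands below); the apt-get line and the explicit copy destination are assumptions rather than verbatim from the steps above:

```bash
# On the host: start a container from the base image with the host checkout mounted.
docker run --gpus all \
  -v /home/kevin/aj/DensePose/DensePoseData:/denseposedata \
  -v /home/kevin/aj/DensePose/tools:/densepose/tools_kk \
  -it densepose:c2-cuda9-cudnn7 bash

# Inside the container: point DensePoseData at the host mount,
# make vis_kk.py available to Detectron, and install ffmpeg.
mv /densepose/DensePoseData /densepose/DensePoseDataLocal
ln -s /denseposedata /densepose/DensePoseData
cp /densepose/tools_kk/vis_kk.py /densepose/detectron/utils/   # assumed install path
apt-get update && apt-get install -y ffmpeg                    # assumed apt-based base image
exit

# Back on the host: save the prepared container as a new image.
docker commit $(docker ps --last 1 -q) densepose:c2-cuda9-cudnn7-kk
```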
video demo command:
- docker run --rm --gpus all -v /home/kevin/aj/DensePose/DensePoseData:/denseposedata -v /home/kevin/aj/DensePose/tools:/densepose/tools_kk -it densepose:c2-cuda9-cudnn7-kk python2 tools_kk/infer_vid.py --cfg configs/DensePose_ResNet101_FPN_s1x-e2e.yaml --output-dir DensePoseData/infer_out/ --wts ./DensePoseData/DensePose_ResNet101_FPN_s1x-e2e.pkl --input-file tools_kk/video.mp4
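The same command, split across lines for readability. To run it on your own clip, drop the file into the host's DensePose/tools directory (mounted as /densepose/tools_kk inside the container); my_clip.mp4 below is a hypothetical placeholder:

```bash
cp ~/my_clip.mp4 /home/kevin/aj/DensePose/tools/   # placeholder file name
docker run --rm --gpus all \
  -v /home/kevin/aj/DensePose/DensePoseData:/denseposedata \
  -v /home/kevin/aj/DensePose/tools:/densepose/tools_kk \
  -it densepose:c2-cuda9-cudnn7-kk \
  python2 tools_kk/infer_vid.py \
    --cfg configs/DensePose_ResNet101_FPN_s1x-e2e.yaml \
    --output-dir DensePoseData/infer_out/ \
    --wts ./DensePoseData/DensePose_ResNet101_FPN_s1x-e2e.pkl \
    --input-file tools_kk/my_clip.mp4
```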
image demo command:
- docker run --rm --gpus all -v /home/kevin/aj/DensePose/DensePoseData:/denseposedata -v /home/kevin/aj/DensePose/tools:/densepose/tools_kk -it densepose:c2-cuda9-cudnn7-kk python2 tools_kk/infer_simple.py --cfg configs/DensePose_ResNet101_FPN_s1x-e2e.yaml --output-dir DensePoseData/infer_out/ --image-ext jpg --wts ./DensePoseData/DensePose_ResNet101_FPN_s1x-e2e.pkl DensePoseData/demo_data/grc.jpg
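Because DensePoseData inside the container is a symlink to the bind-mounted /denseposedata, everything written to --output-dir DensePoseData/infer_out/ lands directly on the host:

```bash
# Inspect the results from the host after the container exits.
ls /home/kevin/aj/DensePose/DensePoseData/infer_out/
```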
reference
Dense Human Pose Estimation In The Wild
Rıza Alp Güler, Natalia Neverova, Iasonas Kokkinos
[densepose.org] [arXiv] [BibTeX]
Dense human pose estimation aims at mapping all human pixels of an RGB image to the 3D surface of the human body. DensePose-RCNN is implemented in the Detectron framework and is powered by Caffe2.
In this repository, we provide the code to train and evaluate DensePose-RCNN. We also provide notebooks to visualize the collected DensePose-COCO dataset and show the correspondences to the SMPL model.
Please find installation instructions for Caffe2 and DensePose in INSTALL.md, a document based on the Detectron installation instructions.
After installation, please see GETTING_STARTED.md for examples of inference, training, and testing.
See notebooks/DensePose-COCO-Visualize.ipynb to visualize the DensePose-COCO annotations on the images.
See notebooks/DensePose-COCO-on-SMPL.ipynb to localize the DensePose-COCO annotations on the 3D template (SMPL) model.
See notebooks/DensePose-RCNN-Visualize-Results.ipynb to visualize the inferred DensePose-RCNN results.
See notebooks/DensePose-RCNN-Texture-Transfer.ipynb to transfer texture from an image onto the SMPL model using the inferred DensePose-RCNN results.
This source code is licensed under the license found in the LICENSE
file in the root directory of this source tree.
If you use DensePose, please use the following BibTeX entry.
@InProceedings{Guler2018DensePose,
title={DensePose: Dense Human Pose Estimation In The Wild},
author={R{\i}za Alp G{\"u}ler and Natalia Neverova and Iasonas Kokkinos},
journal={The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2018}
}