pytorch_Realtime_Multi-Person_Pose_Estimation

Introduction

A PyTorch implementation of Realtime Multi-Person Pose Estimation.

Results

License

Requirements

  1. PyTorch
  2. Caffe, required only if you want to convert a Caffe model to a PyTorch model
  3. pip install pycocotools
  4. pip install tensorboardX
  5. pip install torch-encoding

Demo

  • Download the converted PyTorch model.
  • Run cd network/caffe_to_pytorch; python convert.py to convert a trained Caffe model to a PyTorch model. The converted model has a relative error below 1e-6 and is written to ./network/weight after conversion (a sketch of such a check follows this list).
  • Or use the model trained from scratch in this repo, which has better accuracy on the validation set.
  • Run python demo/picture_demo.py for the picture demo.
  • Run python demo/web_demo.py for the web demo.
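
The 1e-6 relative-error claim from the conversion step can be verified with a comparison along these lines. This is a minimal sketch, not code from this repo: the .npy filenames are placeholders for outputs you save yourself from each framework on the same fixed input.

    # Minimal sketch of the conversion sanity check (placeholder filenames).
    import numpy as np

    caffe_out = np.load('caffe_output.npy')      # reference output from Caffe
    torch_out = np.load('pytorch_output.npy')    # output of the converted model

    # Element-wise relative error, with an epsilon to avoid division by zero.
    rel_err = np.abs(torch_out - caffe_out) / (np.abs(caffe_out) + 1e-12)
    print('max relative error:', rel_err.max())  # the README claims < 1e-6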

Evaluate

  • Run python evaluate/evaluation.py to evaluate the model on the image split held out by the rtpose authors.
  • It should reach 0.598 mAP for rtpose, whereas the original rtpose reports 0.577 mAP; the gain comes from applying a left-right flip to the heatmaps and PAFs at evaluation time (sketched after this list).
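
The flip trick averages predictions from the original image and its mirror. A minimal sketch of the idea, assuming a model that returns heatmaps of shape (1, C, H, W) and a list of left/right channel pairs; both names are placeholders rather than this repo's actual API.

    # Minimal sketch of flip-averaged heatmaps ("model" and "lr_pairs" are
    # assumed names, not this repo's API).
    import torch

    def flip_averaged_heatmaps(model, img, lr_pairs):
        # img: (1, 3, H, W) input tensor.
        with torch.no_grad():
            heat = model(img)                        # plain forward pass
            heat_flip = model(torch.flip(img, [3]))  # forward pass on the mirror
        heat_flip = torch.flip(heat_flip, [3])       # mirror the output back
        # After un-flipping, a person's left joints sit in the right-joint
        # channels, so swap each (left, right) channel pair.
        for left, right in lr_pairs:
            heat_flip[:, [left, right]] = heat_flip[:, [right, left]]
        return (heat + heat_flip) / 2

PAFs need one extra step beyond the channel swap: the x-components of the vector fields change sign when the image is mirrored.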

Pretrained models and performance on the dataset split used by rtpose

rtpose, trained from scratch (note that the preprocessing differs between models)

  Reported in the paper (VGG19)   mAP in this repo (VGG19)   Trained from scratch in this repo
  0.577                           0.598                      0.614

Training

  • Run cd training; bash getData.sh to download the COCO images into dataset/COCO/images/ and the keypoint annotations into dataset/COCO/annotations/.
  • Download the masks of unlabeled persons at Dropbox.
  • Download the official training format at Dropbox.
  • python train_VGG19.py --batch_size 100 --logdir {where to store tensorboardX logs} (see the logging sketch after this list)
  • python train_ShuffleNetV2.py --batch_size 160 --logdir {where to store tensorboardX logs}
  • python train_SH.py --batch_size 64 --lr 0.1 --logdir {where to store tensorboardX logs}
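
The --logdir flag above is the directory where tensorboardX writes its event files. A minimal sketch of the logging pattern, with a placeholder path and placeholder loss values rather than this repo's actual training loop:

    # Minimal sketch of the tensorboardX pattern behind --logdir.
    from tensorboardX import SummaryWriter

    writer = SummaryWriter('./logs/vgg19_run1')   # the path given to --logdir

    for step in range(100):                       # stands in for the training loop
        loss = 1.0 / (step + 1)                   # placeholder loss value
        writer.add_scalar('train/loss', loss, step)

    writer.close()
    # Inspect the curves with: tensorboard --logdir ./logs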

Related repository

Network Architecture

  • Testing architecture (teaser figure)

  • Training architecture (teaser figure)

Contributions

All contributions are welcome. If you encounter any issue (including examples of images where the model fails), feel free to open an issue.

Citation

Please cite these papers in your publications if this repository helps your research:

@inproceedings{8486591,
  title = {Magnify-Net for Multi-Person 2D Pose Estimation},
  author = {Haoqian Wang and Wang Peng An and X. Wang and L. Fang and J. Yuan},
  booktitle = {2018 IEEE International Conference on Multimedia and Expo (ICME)},
  year = {2018},
  month = {July},
  pages = {1-6}
}

@inproceedings{cao2017realtime,
  title = {Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields},
  author = {Zhe Cao and Tomas Simon and Shih-En Wei and Yaser Sheikh},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year = {2017}
}
