
Deep Pose Estimation implemented using Tensorflow with Custom Architectures for fast inference.

tf-pose-estimation

'OpenPose', a human pose estimation algorithm, has been implemented using Tensorflow. This repository also provides several variants with modified network architectures for real-time processing on the CPU or on low-power embedded devices.

You can even run this on your MacBook with a decent FPS!

Original repo (Caffe): https://github.com/CMU-Perceptual-Computing-Lab/openpose

Model                  Hardware                                Speed
CMU's Original Model   Macbook Pro 15" (2.8GHz Quad-core i7)   ~0.6 FPS
Mobilenet-thin         Macbook Pro 15" (2.8GHz Quad-core i7)   ~4.2 FPS @ 368x368
Mobilenet-thin         Jetson TX2 Embedded Board               ~10 FPS @ 368x368

Implemented features are listed here: features

Important Updates

Install

Dependencies

You need the dependencies below.

Pre-install (Jetson case)

$ sudo apt-get install libllvm-7-ocaml-dev libllvm7 llvm-7 llvm-7-dev llvm-7-doc llvm-7-examples llvm-7-runtime
$ export LLVM_CONFIG=/usr/bin/llvm-config-7 
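If you want this variable to persist across shell sessions (optional; not required by the build itself), you can append it to your shell profile, for example:

$ echo 'export LLVM_CONFIG=/usr/bin/llvm-config-7' >> ~/.bashrc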

Install

Clone the repo and install 3rd-party libraries.

# git clone or download & extract ZIP
$ cd tf-pose-estimation
$ pip3 install -r requirements.txt

Build the C++ library for post-processing. See: https://github.com/ildoonet/tf-pose-estimation/tree/master/tf_pose/pafprocess

$ cd tf_pose/pafprocess
$ swig -python -c++ pafprocess.i && python3 setup.py build_ext --inplace
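As a quick sanity check that the extension built (a hedged check; the import path below assumes the default package layout and is run from the repository root):

$ cd ../..
$ python3 -c "from tf_pose.pafprocess import pafprocess; print('pafprocess OK')"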

Package Install

Alternatively, you can install this repo as a shared package using pip.

# git clone or download & extract ZIP
$ cd tf-pose-estimation
$ python setup.py install  # Or, `pip install -e .`
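To confirm the package install worked (a simple hedged check), make sure the tf_pose module can be imported:

$ python -c "import tf_pose; print(tf_pose.__file__)"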

Models & Performances

See experiments.md

Download Tensorflow Graph File(pb file)

Before running the demos, you should download the graph files. You can also deploy these graphs on mobile or other platforms.

  • cmu (trained in 656x368)
  • mobilenet_thin (trained in 432x368)
  • mobilenet_v2_large (trained in 432x368)
  • mobilenet_v2_small (trained in 432x368)

CMU's model graphs are too large for git, so they are hosted on external cloud storage. Download them if you want to use CMU's original model; a download script is provided in the model folder.

$ cd models/graph/cmu
$ bash download.sh
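After the script finishes, the frozen graph should appear next to it (a quick hedged check; exact file names depend on the download script):

$ ls -lh *.pb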

Demo

Test Inference

You can test the inference feature with a single image.

$ python run.py --model=mobilenet_thin --resize=432x368 --image=./images/p1.jpg

The --image path MUST be relative to the src folder and must not contain "~", e.g.:

--image ../../Desktop

Then you will see a screen like the one below, showing the PAF map, heatmap, final result, etc.

[inference result image]

Realtime Webcam

$ python run_webcam.py --model=mobilenet_thin --resize=432x368 --camera=0

Apply TensorRT

$ python run_webcam.py --model=mobilenet_thin --resize=432x368 --camera=0 --tensorrt=True

Then you will see the real-time webcam feed with estimated poses, as below. This result was recorded on a MacBook Pro 13" with a 3.1GHz dual-core CPU.

Python Usage

This pose estimator provides simple Python classes that you can use in your applications.

See run.py or run_webcam.py as references.

from tf_pose.estimator import TfPoseEstimator
from tf_pose.networks import get_graph_path

e = TfPoseEstimator(get_graph_path(args.model), target_size=(w, h))
humans = e.inference(image)
image = TfPoseEstimator.draw_humans(image, humans, imgcopy=False)
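A minimal standalone sketch of the same flow (assuming the mobilenet_thin graph has already been downloaded, OpenCV is installed, and using the 432x368 target size from the demo commands; the output file name is just an example):

import cv2
from tf_pose.estimator import TfPoseEstimator
from tf_pose.networks import get_graph_path

w, h = 432, 368  # same target size as the demo commands above
e = TfPoseEstimator(get_graph_path('mobilenet_thin'), target_size=(w, h))

image = cv2.imread('./images/p1.jpg')  # any BGR image readable by OpenCV
humans = e.inference(image)  # list of detected humans with body-part keypoints
image = TfPoseEstimator.draw_humans(image, humans, imgcopy=False)  # draw skeletons onto the image
cv2.imwrite('pose_result.png', image)  # example output path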

If you installed it as a package,

import tf_pose
coco_style = tf_pose.infer(image_path)

ROS Support

See : etcs/ros.md

Training

See : etcs/training.md

References

See : etcs/reference.md
