Human pose estimation is the computer vision task of estimating the configuration (‘the pose’) of the human body by localizing key points on the body in an image or video. The following application serves as a reference for deploying custom pose estimation models with DeepStream 5.0, using the TRTPose project as an example.
A detailed deep-dive is available on the NVIDIA Developer blog.
(The original table here shows a sample input video source alongside the corresponding pose estimation output video.)
You will need:
- DeepStreamSDK 5.x
- CUDA 10.2
- TensorRT 7.x
- Set the default docker runtime
Add "default-runtime": "nvidia" to your /etc/docker/daemon.json configuration file to run the process of the ONNX to TensorRT model conversion in the image build.
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}
Then, reboot your system before proceeding.
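Rebooting makes sure the Docker daemon reloads its configuration. On most systems, restarting the Docker service and then confirming the reported default runtime should also work:

sudo systemctl restart docker
docker info | grep -i "default runtime"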
- Build the image
git clone https://github.com/MACNICA-CLAVIS-NV/deepstream_pose_estimation
cd deepstream_pose_estimation
chmod +x *.sh
./docker_build.sh
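docker_build.sh wraps the image build; conceptually it runs something like the following (the image tag is an assumption, not copied from the script):

docker build -t deepstream_pose_estimation .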
- Run the application
You need to have a USB camera at /dev/video0 on your host L4T OS.
./docker_run.sh
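The docker_run.sh script has to hand the container access to the camera device, the X11 display, and the NVIDIA runtime. A sketch of what such an invocation typically looks like (the flags and image tag are illustrative assumptions, not a copy of the script):

xhost +local:
docker run -it --rm --net host --runtime nvidia \
    -e DISPLAY=$DISPLAY \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    --device /dev/video0:/dev/video0 \
    deepstream_pose_estimation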
Note: This release supports only JetPack 4.5.1.
If you want to run it on another JetPack version, modify the following line in the Dockerfile to select a base image that supports your version. Refer to the DeepStream-l4t repository page on NVIDIA NGC to find the right base image.
ARG BASE_IMAGE=nvcr.io/nvidia/deepstream-l4t:5.1-21.02-samples
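Since BASE_IMAGE is declared as a Dockerfile build argument, it can also be overridden at build time instead of editing the file, assuming you invoke docker build directly (the tag below is only an illustration; check NGC for tags matching your JetPack release):

docker build --build-arg BASE_IMAGE=nvcr.io/nvidia/deepstream-l4t:6.0-samples -t deepstream_pose_estimation .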
To get started, please follow these steps.
- Install DeepStream on your platform, verify it is working by running deepstream-app.
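One way to verify the installation is to check the reported versions and run a bundled sample configuration (the sample path below is typical for DeepStream 5.0 and may differ on your system):

$ deepstream-app --version-all
$ deepstream-app -c /opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/source1_usb_dec_infer_resnet_int8.txt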
- Clone the repository into a directory of your choice.
- Download the TRTPose model, convert it to ONNX using this export utility, and set its location in the DeepStream configuration file, as sketched below.
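The model location is set through the app's nvinfer configuration file. A minimal sketch using standard nvinfer property keys (the values are placeholders for your setup):

[property]
gpu-id=0
# Path to the ONNX model; nvinfer builds a TensorRT engine from it on first run
onnx-file=pose_estimation.onnx
# network-mode=2 selects FP16 precision
network-mode=2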
Or you can use the pose_estimation.onnx included in this repository.
- Compile the program
$ cd deepstream-pose-estimation/
$ make
$ ./deepstream-pose-estimation-app <file-uri> <output-path>
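File inputs are passed as URIs. A hypothetical invocation with a local video file (both paths are placeholders):

$ ./deepstream-pose-estimation-app file:///home/nvidia/sample_video.mp4 /home/nvidia/output/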
- The final output is rendered to the display with X11/EGL and is also stored in 'output-path' as Pose_Estimation.mp4.
- You can also take input from a V4L2 camera, such as a USB webcam.
$ ./deepstream-pose-estimation-app <camera device>
Here is an example:
$ ./deepstream-pose-estimation-app /dev/video0
NOTE: If you do not already have a .trt engine generated from the ONNX model you provided to DeepStream, an engine will be created on the first run of the application. Depending upon the system you’re using, this may take anywhere from 4 to 10 minutes.
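On later runs, the rebuild can be skipped by pointing nvinfer at the serialized engine in the same configuration file. A sketch, assuming the default engine naming (the actual file name is printed by DeepStream when the engine is saved and may differ on your system):

[property]
model-engine-file=pose_estimation.onnx_b1_gpu0_fp16.engine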
For any issues or questions, please feel free to make a new post on the DeepStreamSDK forums.