Pose-Guided Tracking-by-Detection: Robust Multi-Person Pose Tracking

(Figure: overview of the pose-guided tracking-by-detection framework)

Introduction

This paper addresses the multi-person pose tracking task, which aims to estimate and track person pose keypoints in video. We propose a pose-guided tracking-by-detection framework that fuses pose information into both the video human detection and the data association procedures. Specifically, we adopt a pose-guided single object tracker that exploits temporal information to recover missing detections in the video human detection stage. Furthermore, we propose a hierarchical pose-guided graph convolutional network (PoseGCN) as the appearance discriminative model in the data association stage. The GCN-based model exploits human structural relations to boost the person representation.
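
The sketch below illustrates the kind of structural modeling a pose-guided GCN builds on: a single graph-convolution layer over per-joint appearance features, with the human skeleton as the graph. It is an illustrative sketch only, not the released PoseGCN; the joint count, skeleton edges, feature sizes, and all names are assumptions.

    # Illustrative sketch only (NOT the released PoseGCN): one graph-convolution
    # layer over per-joint appearance features, using an assumed human skeleton
    # as the graph, followed by pooling into a per-person descriptor.
    import torch
    import torch.nn as nn

    NUM_JOINTS = 15
    # Hypothetical skeleton edges (pairs of joint indices); the real keypoint
    # definition depends on the PoseTrack annotation format.
    EDGES = [(0, 1), (1, 2), (2, 3), (3, 4), (1, 5), (5, 6), (6, 7),
             (1, 8), (8, 9), (9, 10), (8, 11), (11, 12), (12, 13), (0, 14)]

    def normalized_adjacency(num_joints, edges):
        """Symmetrically normalized adjacency with self-loops: D^-1/2 (A + I) D^-1/2."""
        A = torch.eye(num_joints)
        for i, j in edges:
            A[i, j] = A[j, i] = 1.0
        d_inv_sqrt = A.sum(dim=1).pow(-0.5)
        return d_inv_sqrt.unsqueeze(1) * A * d_inv_sqrt.unsqueeze(0)

    class GraphConvLayer(nn.Module):
        """One GCN layer: X' = ReLU(A_hat X W)."""
        def __init__(self, in_dim, out_dim, adjacency):
            super().__init__()
            self.register_buffer("A_hat", adjacency)
            self.linear = nn.Linear(in_dim, out_dim)

        def forward(self, x):                     # x: (batch, joints, in_dim)
            x = torch.matmul(self.A_hat, x)       # aggregate over skeleton neighbors
            return torch.relu(self.linear(x))

    # Example: 256-d appearance feature per joint -> 128-d structural embedding,
    # mean-pooled into a single descriptor usable for appearance matching
    # during data association.
    layer = GraphConvLayer(256, 128, normalized_adjacency(NUM_JOINTS, EDGES))
    feats = torch.randn(2, NUM_JOINTS, 256)       # 2 persons, 15 joints
    person_descriptor = layer(feats).mean(dim=1)  # shape (2, 128)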

Overview

  • This is the implementation of Pose-Guided Tracking-by-Detection: Robust Multi-Person Pose Tracking.
  • This repo focuses on the major contributions of our method.

Main Results

On the PoseTrack 2017 dataset, our method scores 68.4 on the validation set and 60.2 on the test set, achieving state-of-the-art results compared with other methods.

Quick Start

Install

  1. Create an Anaconda environment named PGPT with Python 3.7, and activate it

  2. Install pytorch==0.4.0 following the official instructions (a quick version check is sketched after this list)

  3. Clone this repo; we'll refer to the cloned directory as ${PGPT_ROOT}

  4. Install dependencies

    pip install -r requirements.txt
    
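Once the steps above are done, the following minimal check (an optional sketch, assuming PyTorch is already installed) confirms the environment matches the versions this repo targets:

    # Optional sanity check for the environment described above.
    import sys
    import torch

    print("Python:", sys.version.split()[0])      # expected 3.7.x
    print("PyTorch:", torch.__version__)          # expected 0.4.0
    print("CUDA available:", torch.cuda.is_available())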

Demo

  1. Download the demo dataset and demo_val.json, and put them into the data folder as follows:

    ${PGPT_ROOT}
     |--data
         |--demodata
             |--images
             |--annotations
         |--demo_val.json
    
    • You can also use your own data in the same data format and data organization as the demo dataset.
  2. Download the PoseGCN model and Tracker model, and put them into the models folder as follows:

    ${PGPT_ROOT}
     |--models
         |--pose_gcn.pth.tar
         |--tracker.pth
    
  3. Download the detection results for the demo, and put them into the results folder as follows:

    • Right now we do not provide the detection and pose estimation models we implemented. Our modules are based on Faster R-CNN for detection and Simple Baseline for pose estimation; you can clone those repos and train your own detection and pose estimation modules.

    • To run the demo smoothly, we provide demo_detection.json, which contains the demo results of our detection model. You can also run the demo with your own detection results in the same format as demo_detection.json (see the inspection sketch after this list).

       ${PGPT_ROOT}
        |--results
        	  |--demo_detection.json
      
  4. You can run the demo with the following commands:

    cd ${PGPT_ROOT}
    sh demo.sh
    
    • The JSON results are stored in ${PGPT_ROOT}/results/demo
    • The rendered results are stored in ${PGPT_ROOT}/results/render
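
If you want to inspect the demo inputs and outputs, the minimal sketch below (run from ${PGPT_ROOT}) loads results/demo_detection.json and the JSON files written to results/demo, and prints their top-level structure only, since the exact JSON schema is not documented in this README:

    # Minimal sketch: report the top-level structure of the detection input
    # and of the demo outputs. Paths follow the layout described above.
    import json
    from pathlib import Path

    det_path = Path("results/demo_detection.json")
    det = json.loads(det_path.read_text())
    print(det_path, "->", type(det).__name__, "with", len(det), "top-level entries")

    for out_path in sorted(Path("results/demo").glob("*.json")):
        data = json.loads(out_path.read_text())
        print(out_path, "->", type(data).__name__, "with", len(data), "top-level entries")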

Note

  • You can modify inference/config.py to suit your own paths.
  • We are still organizing the full project of our method, and we will release the whole project later.

Citation

If you use this code for your research, please consider citing:

@InProceedings{TMM2020-PGPT,
  title     = {Pose-Guided Tracking-by-Detection: Robust Multi-Person Pose Tracking},
  author    = {Q. Bao, W. Liu, Y. Cheng, B. Zhou and T. Mei},
  booktitle = {IEEE Transactions on Multimedia},
  year      = {2020}
}