Results on CrowdPose:

Model | Input size | AP | AP .5 | AP .75 | AR | AR .5 | AR .75 | AP easy | AP medium | AP hard | Download | Log |
---|---|---|---|---|---|---|---|---|---|---|---|---|
I²R-Net (Vanilla version, 1st stage:HRNet-W48-S) | 256x192 | 0.723 | 0.924 | 0.779 | 0.765 | 0.932 | 0.819 | 0.799 | 0.732 | 0.628 | model | log |
I²R-Net (1st stage:TransPose-H) | 256x192 | 0.763 | 0.935 | 0.822 | 0.791 | 0.940 | 0.844 | 0.832 | 0.770 | 0.674 | model | log |
I²R-Net (1st stage:HRFormer-B) | 256x192 | 0.774 | 0.936 | 0.833 | 0.803 | 0.945 | 0.855 | 0.838 | 0.781 | 0.693 | model | log |

Results on OCHuman:

Model | Input size | AP | AP .5 | AP .75 | Download | Log |
---|---|---|---|---|---|---|
I²R-Net (Vanilla version, 1st stage:HRNet-W48-S) | 256x192 | 0.643 | 0.850 | 0.692 | model | log |
I²R-Net (1st stage:TransPose-H) | 256x192 | 0.665 | 0.838 | 0.714 | model | log |
I²R-Net (1st stage:HRFormer-B) | 256x192 | 0.678 | 0.850 | 0.728 | model | log |

Results on COCO val2017:

Model | Input size | AP | AP .5 | AP .75 | AP (M) | AP (L) | AR | AR (M) | AR (L) | Download | Log |
---|---|---|---|---|---|---|---|---|---|---|---|
I²R-Net (Vanilla version, 1st stage:HRNet-W48-S) | 256x192 | 0.753 | 0.902 | 0.819 | 0.717 | 0.824 | 0.805 | 0.761 | 0.868 | model | log |
I²R-Net (1st stage:TransPose-H) | 256x192 | 0.758 | 0.904 | 0.821 | 0.720 | 0.829 | 0.809 | 0.766 | 0.873 | model | log |
I²R-Net (1st stage:HRFormer-B) | 256x192 | 0.764 | 0.908 | 0.832 | 0.723 | 0.837 | 0.814 | 0.769 | 0.881 | model | log |
I²R-Net (1st stage:HRFormer-B) | 384x288 | 0.773 | 0.910 | 0.836 | 0.730 | 0.845 | 0.821 | 0.777 | 0.886 | model | log |
- Clone this repository. We'll refer to the directory that you cloned as ${POSE_ROOT}.
  git clone https://github.com/leijue222/Intra-and-Inter-Human-Relation-Network-for-MPEE.git
- Install Python=3.8 and PyTorch=1.10 from the PyTorch official website (a quick environment check is sketched after this list).
- Install package dependencies.
  pip install -r requirements.txt
  git clone https://github.com/Jeff-sjtu/CrowdPose.git
  cd CrowdPose/crowdpose-api/PythonAPI/
  sh install.sh
  cd ../../../
  rm -rf CrowdPose
  git clone https://github.com/liruilong940607/OCHumanApi
  cd OCHumanApi
  make install
  cd ..
  rm -rf OCHumanApi
  cd ${POSE_ROOT}/lib
  make
- Download the Intra Relation model.
- Download the Inter Relation model.
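
Before preparing the data, it may help to verify the environment (a minimal sketch; it only assumes the PyTorch install from the steps in this list):

```python
# Minimal sanity check (sketch): confirm the PyTorch install from the list above.
import torch

print("PyTorch version:", torch.__version__)         # the steps above install PyTorch 1.10
print("CUDA available:", torch.cuda.is_available())  # DDP training below expects GPUs
```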
Download the CrowdPose images from here; the json files can also be downloaded from here. Arrange them as follows:
${POSE_ROOT}/data/crowdpose/
|-- json
|   |-- crowdpose_train.json
|   |-- crowdpose_val.json
|   |-- crowdpose_trainval.json
|   `-- crowdpose_test.json
`-- images
    |-- 100000.jpg
    |-- ...
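
As a quick sanity check on the layout above, the annotation files can be parsed with the standard library alone (a minimal sketch; the paths are the ones shown in the tree):

```python
# Sketch: parse the CrowdPose annotation files and report basic counts.
import json
from pathlib import Path

root = Path("data/crowdpose/json")  # relative to ${POSE_ROOT}
for split in ["train", "val", "trainval", "test"]:
    with open(root / f"crowdpose_{split}.json") as f:
        ann = json.load(f)
    print(split, "images:", len(ann["images"]), "annotations:", len(ann["annotations"]))
```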
Download the OCHuman images from here; the json files can also be downloaded from here. Arrange them as follows:
${POSE_ROOT}/data/ochuman/
|-- ochuman_coco_format_val_range_0.00_1.00.json
|-- ochuman_coco_format_test_range_0.00_1.00.json
`-- images
    |-- 000001.jpg
    |-- ...
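
A similar check for the OCHuman layout (a sketch under the same assumptions):

```python
# Sketch: confirm the OCHuman annotation files and image directory are in place.
from pathlib import Path

root = Path("data/ochuman")  # relative to ${POSE_ROOT}
for name in ["ochuman_coco_format_val_range_0.00_1.00.json",
             "ochuman_coco_format_test_range_0.00_1.00.json"]:
    print(name, "found:", (root / name).is_file())
print("images/ found:", (root / "images").is_dir())
```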
We follow the steps of HRNet to prepare the COCO train/val/test images and annotations. The detected person results can be downloaded from OneDrive or Google Drive. Please download or link them to ${POSE_ROOT}/data/coco/, and make them look like this:
${POSE_ROOT}/data/coco/
|-- annotations
|   |-- person_keypoints_train2017.json
|   `-- person_keypoints_val2017.json
|-- person_detection_results
|   |-- COCO_val2017_detections_AP_H_56_person.json
|   `-- COCO_test-dev2017_detections_AP_H_609_person.json
`-- images
    |-- train2017
    |   |-- 000000000009.jpg
    |   |-- ...
    `-- val2017
        |-- 000000000139.jpg
        |-- ...
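
The COCO annotations can be sanity-checked the same way with the COCO API (a minimal sketch; it assumes pycocotools is available in the environment):

```python
# Sketch: load the COCO val2017 keypoint annotations and report basic counts.
from pycocotools.coco import COCO

coco = COCO("data/coco/annotations/person_keypoints_val2017.json")  # relative to ${POSE_ROOT}
person_ids = coco.getCatIds(catNms=["person"])
print("val2017 images:", len(coco.getImgIds()))
print("person instances:", len(coco.getAnnIds(catIds=person_ids)))
```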

Training on CrowdPose, OCHuman, and COCO respectively (distributed, one process per GPU):
torchrun --nproc_per_node=8 tools/ddp_train.py --cfg experiments/crowdpose/interformer_crowdpose_w48_pure_en6.yaml
torchrun --nproc_per_node=8 tools/ddp_train.py --cfg experiments/OCHuman/interformer_ochuman_tph_192_p3_b8.yaml
torchrun --nproc_per_node=8 tools/ddp_train.py --cfg experiments/coco/interformer_coco_hrt_288_p2_b4.yaml
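
The commands above assume 8 GPUs; --nproc_per_node should match the number of GPUs actually visible on your machine, which can be checked with a one-liner like:

```python
# Sketch: report how many CUDA devices are visible; pass this value to --nproc_per_node.
import torch
print(torch.cuda.device_count())
```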

Testing on CrowdPose, OCHuman, and COCO; the last two commands evaluate COCO with detected person boxes (TEST.USE_GT_BBOX False) and with ground-truth boxes (TEST.USE_GT_BBOX True):
python tools/test.py --cfg experiments/crowdpose/interformer_crowdpose_w48_pure_en6.yaml
python tools/test.py --cfg experiments/OCHuman/interformer_ochuman_tph_192_p3_b8.yaml
python tools/test.py --cfg experiments/coco/interformer_coco_hrt_288_p2_b4.yaml TEST.USE_GT_BBOX False
python tools/test.py --cfg experiments/coco/interformer_coco_hrt_288_p2_b4.yaml TEST.USE_GT_BBOX True
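
As in HRNet-style configs, the detected boxes come from the file prepared under person_detection_results. That file is a plain JSON list and can be inspected directly (a minimal sketch; the file name is the one shown in the COCO tree above):

```python
# Sketch: peek at the detector results used when TEST.USE_GT_BBOX is False.
import json

det_file = "data/coco/person_detection_results/COCO_val2017_detections_AP_H_56_person.json"
with open(det_file) as f:
    dets = json.load(f)
print("detected person boxes:", len(dets))
print(dets[0])  # expected: a dict with image_id, category_id, a [x, y, w, h] bbox, and a score
```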
Great thanks to these papers and their open-source code: HRNet, TransPose, HRFormer.
If you use our code or models in your research, please cite:
@misc{https://doi.org/10.48550/arxiv.2206.10892,
  doi       = {10.48550/ARXIV.2206.10892},
  url       = {https://arxiv.org/abs/2206.10892},
  author    = {Ding, Yiwei and Deng, Wenjin and Zheng, Yinglin and Liu, Pengfei and Wang, Meihong and Cheng, Xuan and Bao, Jianmin and Chen, Dong and Zeng, Ming},
  title     = {I$^2$R-Net: Intra- and Inter-Human Relation Network for Multi-Person Pose Estimation},
  publisher = {arXiv},
  year      = {2022},
}