This is an official PyTorch implementation of "OIMNet++: Prototypical Normalization and Localization-aware Learning for Person Search", ECCV 2022.
For more details, visit our project site or see our paper.
- Python 3.8
- PyTorch 1.7.1
- GPU memory >= 22GB
- Re-implementation of vanilla OIMNet
- AMP support for training with a larger batch size under limited GPU memory
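The AMP setup follows the standard `torch.cuda.amp` recipe: run the forward pass under `autocast` so activations are stored in fp16 where safe, and scale the loss to avoid fp16 gradient underflow. A minimal stand-alone sketch (the `nn.Linear` model and tensor shapes are placeholders, not the actual person search network):

```python
import torch
import torch.nn as nn

model = nn.Linear(128, 10)  # placeholder for the person search network
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# GradScaler/autocast degrade gracefully to ordinary fp32 on CPU-only machines.
use_cuda = torch.cuda.is_available()
scaler = torch.cuda.amp.GradScaler(enabled=use_cuda)

x = torch.randn(4, 128)
target = torch.randint(0, 10, (4,))

optimizer.zero_grad()
with torch.cuda.amp.autocast(enabled=use_cuda):
    loss = nn.functional.cross_entropy(model(x), target)

# Scale the loss before backward to avoid fp16 underflow; the scaler
# unscales gradients before the optimizer step and adjusts the scale factor.
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
```

The mixed-precision forward pass roughly halves activation memory, which is what allows the larger batch size mentioned above.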
First, clone our git repository.
We highly recommend using our Dockerfile to set up the environment.
```bash
# build docker image
$ docker build -t oimnetplus:latest .

# execute docker container
$ docker run --ipc=host -it -v <working_dir>:/workspace/work -v <dataset_dir>:/workspace/dataset -w /workspace/work oimnetplus:latest /bin/bash
```
Download the PRW and CUHK-SYSU datasets.
Modify the dataset directories below if necessary.
- PRW: L4 of configs/prw.yaml
- CUHK-SYSU: L3 of configs/ssm.yaml
Your directories should look like:
```
<working_dir>
└── OIMNetPlus
    ├── configs/
    ├── datasets/
    ├── engines/
    ├── losses/
    ├── models/
    ├── utils/
    ├── defaults.py
    ├── Dockerfile
    └── train.py

<dataset_dir>
├── CUHK-SYSU/
│   ├── annotation/
│   ├── Image/
│   └── ...
└── PRW-v16.04.20/
    ├── annotations/
    ├── frames/
    ├── query_box/
    └── ...
```
- OIMNet++

  ```bash
  $ python train.py --cfg configs/prw.yaml
  $ python train.py --cfg configs/ssm.yaml
  ```

- OIMNet+++

  ```bash
  $ python train.py --cfg configs/prw.yaml MODEL.ROI_HEAD.AUGMENT True
  $ python train.py --cfg configs/ssm.yaml MODEL.ROI_HEAD.AUGMENT True
  ```

- OIMNet

  ```bash
  $ python train.py --cfg configs/prw.yaml MODEL.ROI_HEAD.NORM_TYPE 'none' MODEL.LOSS.TYPE 'OIM'
  $ python train.py --cfg configs/ssm.yaml MODEL.ROI_HEAD.NORM_TYPE 'none' MODEL.LOSS.TYPE 'OIM'
  ```
Running these commands logs evaluation results and training losses to a .txt file in the output directory.
We provide pretrained weights and the corresponding configs below.
|           | OIMNet++       | OIMNet+++      |
| --------- | -------------- | -------------- |
| PRW       | model / config | model / config |
| CUHK-SYSU | model / config | model / config |
Our person search implementation is heavily based on Di Chen's NAE and Zhengjia Li's SeqNet.
ProtoNorm implementation is based on ptrblck's manual BatchNorm implementation here.
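The core idea behind ProtoNorm is to compute normalization statistics from per-identity prototypes (class means) rather than from raw samples, so that frequently appearing identities do not dominate the feature statistics. A rough illustrative sketch of this idea, not the repository's actual `ProtoNorm` module (the function name and shapes here are made up for the example):

```python
import torch

def proto_norm_sketch(feats, labels, eps=1e-5):
    """Standardize features using statistics of per-identity prototypes.

    Each identity contributes one prototype (the mean of its features),
    so every identity has equal weight in the normalization statistics
    regardless of how many samples it has in the batch.
    Illustrative sketch only -- not the authors' implementation.
    """
    # One prototype (class mean) per unique identity label.
    protos = torch.stack(
        [feats[labels == c].mean(dim=0) for c in labels.unique()]
    )
    # Statistics computed over prototypes, not over raw samples.
    mean = protos.mean(dim=0)
    var = protos.var(dim=0, unbiased=False)
    return (feats - mean) / torch.sqrt(var + eps)

# Toy batch: identity 0 is heavily over-represented.
feats = torch.randn(8, 4)
labels = torch.tensor([0, 0, 0, 0, 0, 1, 1, 2])
out = proto_norm_sketch(feats, labels)
```

A plain BatchNorm over this batch would let identity 0 dominate the mean and variance; the prototype-based statistics above weight the three identities equally.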