
# Learning Adaptive Shift and Task Decoupling for Discriminative One-Step Person Search (ASTD)


Python >= 3.5 · PyTorch >= 1.0 · License: MIT


This repository hosts the source code of our Knowledge-Based Systems paper: "Learning Adaptive Shift and Task Decoupling for Discriminative One-Step Person Search".

Major challenges:

The network structure:


## 🔥 NEWS 🔥

  • [09/2024] 📣 Congratulations! Our paper has been accepted by Knowledge-Based Systems!

  • [08/2024] 📣 We received a minor-revision decision from Knowledge-Based Systems.

  • [07/2024] 📣 We submitted our paper to Knowledge-Based Systems!

  • [07/2024] 📣 We released the code.

## Installation

Run the following in the root directory of the project:

```bash
pip install -r requirements.txt
```
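Optionally, you can verify the environment afterwards. A minimal check (not part of this repo) that the installed PyTorch meets the minimum version and can see a GPU:

```python
# Optional environment check: confirm the PyTorch version and GPU visibility.
import torch

print(torch.__version__)          # expect >= 1.0
print(torch.cuda.is_available())  # True if a CUDA device is visible
```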

## Quick Start

Let's say $ROOT is the root directory.

1. Download the CUHK-SYSU and PRW datasets, and unzip them to $ROOT/data.

2. Download our pretrained model (see the table below) to any directory you like, e.g., $ROOT/exp_cuhk. The resulting layout should look like:

   ```
   data
   ├── CUHK-SYSU
   └── PRW
   exp_cuhk
   ├── config.yaml
   └── epoch_xx.pth
   exp_prw
   ├── config.yaml
   └── epoch_xx.pth
   ```

Performance profile:

| Dataset   | ASTD         |
|-----------|--------------|
| CUHK-SYSU | epoch_12.pth |
| PRW       | epoch_15.pth |
3. Run an inference demo by specifying the paths of the checkpoint and the corresponding configuration file. You can check out the results in the demo_imgs directory.

CUHK-SYSU:

```bash
CUDA_VISIBLE_DEVICES=0 python demo.py --cfg ./configs/cuhk_sysu.yaml --ckpt ./logs/cuhk-sysu/xxx.pth
```

PRW:

```bash
CUDA_VISIBLE_DEVICES=0 python demo.py --cfg ./configs/prw.yaml --ckpt ./logs/prw/xxx.pth
```

Please see the demo image:
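Before running the demo or training, it can help to sanity-check the directory layout from steps 1 and 2. A minimal sketch (the path names follow the layout shown above; adjust them to your setup):

```python
# Sanity-check the expected layout (a sketch; adjust paths to your setup).
from pathlib import Path

ROOT = Path(".")  # the $ROOT directory
for d in ["data/CUHK-SYSU", "data/PRW", "exp_cuhk", "exp_prw"]:
    status = "ok" if (ROOT / d).exists() else "MISSING"
    print(f"{d}: {status}")
```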

## Training

Pick a configuration file in $ROOT/configs and run with it:

```bash
python train.py --cfg configs/cuhk_sysu.yaml
```

Note: At present, our script only supports single-GPU training, but distributed training will also be supported in the future. By default, the batch size and the learning rate are set to 3 and 0.003 respectively, which requires about 28 GB of GPU memory. If your GPU cannot provide the required memory, try a smaller batch size and learning rate (performance may degrade). Specifically, your setting should follow the Linear Scaling Rule: when the minibatch size is multiplied by k, multiply the learning rate by k. For example:
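In code form, the rule is just a proportional rescaling (a sketch, not part of the repo). Note that strictly linear scaling from batch size 3 to 2 would give 0.002; the reduced-memory command below uses a more conservative 0.0012.

```python
# Linear Scaling Rule: scale the learning rate by the same factor k
# that is applied to the minibatch size.
def scaled_lr(base_lr: float, base_bs: int, new_bs: int) -> float:
    return base_lr * new_bs / base_bs

print(scaled_lr(0.003, 3, 2))  # 0.002 (the command below opts for 0.0012)
```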

CUHK-SYSU:

```bash
CUDA_VISIBLE_DEVICES=0 python train.py --cfg configs/cuhk_sysu.yaml INPUT.BATCH_SIZE_TRAIN 3 SOLVER.BASE_LR 0.003 SOLVER.MAX_EPOCHS 20 SOLVER.LR_DECAY_MILESTONES [11] MODEL.LOSS.USE_SOFTMAX True SOLVER.LW_RCNN_SOFTMAX_2ND 0.1 SOLVER.LW_RCNN_SOFTMAX_3RD 0.1 OUTPUT_DIR ./logs/cuhk-sysu
```

If you run out of memory, run this instead:

```bash
CUDA_VISIBLE_DEVICES=0 python train.py --cfg configs/cuhk_sysu.yaml INPUT.BATCH_SIZE_TRAIN 2 SOLVER.BASE_LR 0.0012 SOLVER.MAX_EPOCHS 20 SOLVER.LR_DECAY_MILESTONES [11] MODEL.LOSS.USE_SOFTMAX True SOLVER.LW_RCNN_SOFTMAX_2ND 0.1 SOLVER.LW_RCNN_SOFTMAX_3RD 0.1 OUTPUT_DIR ./logs/cuhk-sysu
```

PRW:

```bash
CUDA_VISIBLE_DEVICES=0 python train.py --cfg configs/prw.yaml INPUT.BATCH_SIZE_TRAIN 3 SOLVER.BASE_LR 0.003 SOLVER.MAX_EPOCHS 14 SOLVER.LR_DECAY_MILESTONES [11] MODEL.LOSS.USE_SOFTMAX True SOLVER.LW_RCNN_SOFTMAX_2ND 0.1 SOLVER.LW_RCNN_SOFTMAX_3RD 0.1 OUTPUT_DIR ./logs/prw
```


Tip: If the training process stops unexpectedly, you can resume from a specified checkpoint:

```bash
python train.py --cfg configs/cuhk_sysu.yaml --resume --ckpt /path/to/your/checkpoint
```
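A small convenience sketch (a hypothetical helper, not part of the repo) for locating the newest checkpoint in an output directory to resume from:

```python
# Find the most recent epoch_*.pth in the output directory (hypothetical helper).
from pathlib import Path

ckpts = sorted(Path("./logs/cuhk-sysu").glob("epoch_*.pth"),
               key=lambda p: p.stat().st_mtime)
if ckpts:
    print(f"resume from: {ckpts[-1]}")
```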

## Test

Suppose the output directory is $ROOT/exp_cuhk. Test the trained model:

For CUHK-SYSU:

```bash
CUDA_VISIBLE_DEVICES=0 python train.py --cfg ./configs/cuhk_sysu.yaml --eval --ckpt ./logs/cuhk-sysu/xxx.pth
```

Test with the Context Bipartite Graph Matching (CBGM) algorithm:

```bash
CUDA_VISIBLE_DEVICES=0 python train.py --cfg ./configs/cuhk_sysu.yaml --eval --ckpt ./logs/cuhk-sysu/xxx.pth EVAL_USE_CBGM True
```

Test the upper bound of person search performance by using GT boxes:

```bash
CUDA_VISIBLE_DEVICES=0 python train.py --cfg ./configs/cuhk_sysu.yaml --eval --ckpt ./logs/cuhk-sysu/xxx.pth EVAL_USE_GT True
```

For PRW:

```bash
CUDA_VISIBLE_DEVICES=0 python train.py --cfg ./configs/prw.yaml --eval --ckpt ./logs/prw/xxx.pth EVAL_USE_CBGM True
```
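If you want to run several evaluation variants back to back, a minimal wrapper like the following works (a sketch; it assumes the flags shown above and a concrete checkpoint such as epoch_12.pth from the table):

```python
# Run the plain, CBGM, and GT-box evaluations sequentially (sketch).
import os
import subprocess

env = {**os.environ, "CUDA_VISIBLE_DEVICES": "0"}
base = ["python", "train.py", "--cfg", "./configs/cuhk_sysu.yaml",
        "--eval", "--ckpt", "./logs/cuhk-sysu/epoch_12.pth"]
for extra in ([], ["EVAL_USE_CBGM", "True"], ["EVAL_USE_GT", "True"]):
    subprocess.run(base + extra, check=True, env=env)
```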

Comparison with SOTA:

Evaluation with different gallery sizes:

Note: the default gallery size for CUHK-SYSU evaluation is 100; when testing other code for comparison, make sure its gallery size is also set to 100.

Visualization of ASA:

Qualitative results:

## Acknowledgment

Thanks to the authors of the following repos for their code, which was integral to this project:

## Pull Requests

Pull requests are welcome! Before submitting a PR, do not forget to run ./dev/linter.sh, which provides syntax checking and code style optimization.

## Citation

If you find this code useful for your research, please cite our papers:

```bibtex
@article{zhang2024learning,
  title={Learning adaptive shift and task decoupling for discriminative one-step person search},
  author={Zhang, Qixian and Miao, Duoqian and Zhang, Qi and Wang, Changwei and Li, Yanping and Zhang, Hongyun and Zhao, Cairong},
  journal={Knowledge-Based Systems},
  volume={304},
  pages={112483},
  year={2024},
  publisher={Elsevier}
}

@article{zhang2024attentive,
  title={Attentive multi-granularity perception network for person search},
  author={Zhang, Qixian and Wu, Jun and Miao, Duoqian and Zhao, Cairong and Zhang, Qi},
  journal={Information Sciences},
  volume={681},
  pages={121191},
  year={2024},
  publisher={Elsevier}
}

@inproceedings{li2021sequential,
  title={Sequential End-to-end Network for Efficient Person Search},
  author={Li, Zhengjia and Miao, Duoqian},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={35},
  number={3},
  pages={2011--2019},
  year={2021}
}
```

## Contact

If you have any questions, please feel free to contact us. E-mail: [email protected]
