This is the official implementation of CycleContrast, introduced in the paper: Contrastive Learning of Image Representations with Cross-Video Cycle-Consistency.
If you find our work useful, please cite:
```
@article{wu2021contrastive,
  title={Contrastive Learning of Image Representations with Cross-Video Cycle-Consistency},
  author={Wu, Haiping and Wang, Xiaolong},
  journal={arXiv preprint arXiv:2105.06463},
  year={2021}
}
```
Our code is tested with Python 3.7 and PyTorch 1.3.0. Please install the dependencies via:
```bash
pip install -r requirements.txt
```
We provide models pretrained on the R2V2 dataset for 200 epochs.
| Method | Pre-train Epochs on R2V2 | ImageNet Top-1 Linear Eval (%) | OTB Precision | OTB Success | UCF Top-1 (%) | Pretrained Model |
|---|---|---|---|---|---|---|
| MoCo | 200 | 53.8 | 56.1 | 40.6 | 80.5 | pretrain ckpt |
| CycleContrast | 200 | 55.7 | 69.6 | 50.4 | 82.8 | pretrain ckpt |
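To use a pretrained checkpoint as a plain image backbone, the query-encoder weights can be extracted in the usual MoCo fashion. A minimal sketch, assuming the checkpoint follows the MoCo format (our codebase is based on FAIR's MoCo), i.e. the weights live in `state_dict` under a `module.encoder_q.` prefix:

```python
import torch
import torchvision.models as models

model = models.resnet50()
ckpt = torch.load("output/cycle_res50_r2v2_ep200/checkpoint_0199.pth.tar",
                  map_location="cpu")

# Keep only the query-encoder backbone; drop the projection/fc head.
state_dict = ckpt["state_dict"]
for k in list(state_dict.keys()):
    if k.startswith("module.encoder_q.") and not k.startswith("module.encoder_q.fc"):
        state_dict[k[len("module.encoder_q."):]] = state_dict[k]
    del state_dict[k]

msg = model.load_state_dict(state_dict, strict=False)
assert set(msg.missing_keys) == {"fc.weight", "fc.bias"}
```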
Download the R2V2 (Random Related Video Views) dataset according to https://github.com/danielgordon10/vince.
The directory structure should be as follows:
```
CycleContrast
├── cycle_contrast
├── scripts
├── utils
├── data
│   ├── r2v2_large_with_ids
│   │   ├── train
│   │   │   ├── --/
│   │   │   ├── -_/
│   │   │   ├── _-/
│   │   │   ├── __/
│   │   │   ├── -0/
│   │   │   ├── _0/
│   │   │   ├── ...
│   │   │   ├── zZ/
│   │   │   ├── zz/
│   │   ├── val
│   │   │   ├── --/
│   │   │   ├── -_/
│   │   │   ├── _-/
│   │   │   ├── __/
│   │   │   ├── -0/
│   │   │   ├── _0/
│   │   │   ├── ...
│   │   │   ├── zZ/
│   │   │   ├── zz/
```
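Before training, it can help to sanity-check the layout. A small sketch (a hypothetical helper, not part of the codebase; it assumes frames are stored as `.jpg` under the two-character shard directories):

```python
from pathlib import Path

root = Path("data/r2v2_large_with_ids")
for split in ("train", "val"):
    shards = [d for d in sorted((root / split).iterdir()) if d.is_dir()]
    n_frames = sum(1 for _ in (root / split).rglob("*.jpg"))
    print(f"{split}: {len(shards)} shard dirs, {n_frames} jpg frames")
```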
Train CycleContrast on the R2V2 dataset:
```bash
./scripts/train_cycle.sh
```
Prepare the ImageNet dataset according to the PyTorch ImageNet training example.
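For reference, the PyTorch ImageNet example expects the standard `ImageFolder` layout, with one subdirectory per class (the val split must also be reorganized into class folders):

```
ILSVRC/Data/CLS-LOC
├── train
│   ├── n01440764
│   │   ├── n01440764_10026.JPEG
│   │   ├── ...
│   ├── ...
├── val
│   ├── n01440764
│   ├── ...
```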
```bash
MODEL_DIR=output/cycle_res50_r2v2_ep200
IMAGENET_DATA=data/ILSVRC/Data/CLS-LOC
./scripts/eval_ImageNet.sh $MODEL_DIR $IMAGENET_DATA
```
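Under the hood this follows the standard MoCo linear-evaluation protocol: the backbone is frozen and only a linear classifier is trained on top. A minimal sketch of that protocol (illustrative names and hyperparameters, not necessarily the script's exact settings):

```python
import torch
import torch.nn as nn
import torchvision.models as models

model = models.resnet50(num_classes=1000)

# Freeze everything except the final linear layer.
for name, param in model.named_parameters():
    if not name.startswith("fc."):
        param.requires_grad = False
model.fc.weight.data.normal_(mean=0.0, std=0.01)
model.fc.bias.data.zero_()

# Only the classifier head receives gradients (MoCo's lincls recipe).
optimizer = torch.optim.SGD(model.fc.parameters(), lr=30.0,
                            momentum=0.9, weight_decay=0.0)
criterion = nn.CrossEntropyLoss()
```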
The OTB tracking transfer evaluation is based on SiamFC-PyTorch. Please prepare the environment and data according to SiamFC-PyTorch.
```bash
git clone https://github.com/happywu/mmaction2-CycleContrast
# path to your pretrained model, change accordingly
CycleContrast=/home/user/code/CycleContrast
PRETRAIN=${CycleContrast}/output/cycle_res50_r2v2_ep200/checkpoint_0199.pth.tar
cd mmaction2_tracking
./scripts/submit_r2v2_r50_cycle.py ${PRETRAIN}
```
The UCF action recognition transfer evaluation is based on AVID-CMA. Please prepare the data and environment according to AVID-CMA.
```bash
git clone https://github.com/happywu/AVID-CMA-CycleContrast
# path to your pretrained model, change accordingly
CycleContrast=/home/user/code/CycleContrast
PRETRAIN=${CycleContrast}/output/cycle_res50_r2v2_ep200/checkpoint_0199.pth.tar
cd AVID-CMA-CycleContrast
./scripts/submit_r2v2_r50_cycle.py ${PRETRAIN}
```
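Both transfer scripts consume the raw pretraining checkpoint. An optional sanity check before submitting jobs (assumes the MoCo-style checkpoint layout):

```python
import torch

ckpt = torch.load(
    "output/cycle_res50_r2v2_ep200/checkpoint_0199.pth.tar", map_location="cpu"
)
print("epoch:", ckpt.get("epoch"))
print("tensors in state_dict:", len(ckpt["state_dict"]))
```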
Our codebase is based on FAIR's MoCo. The OTB tracking evaluation is based on MMAction2, SiamFC-PyTorch, and vince. The UCF classification evaluation follows AVID-CMA.
Thank you all for the great open source repositories!