MeMViT: Memory-Augmented Multiscale Vision Transformer for Efficient Long-Term Video Recognition

This is a PyTorch implementation of the MeMViT paper (CVPR 2022 oral):

@inproceedings{memvit2022,
  title={{MeMViT: Memory-Augmented Multiscale Vision Transformer for Efficient Long-Term Video Recognition}},
  author={Wu, Chao-Yuan and Li, Yanghao and Mangalam, Karttikeya and Fan, Haoqi and Xiong, Bo and Malik, Jitendra and Feichtenhofer, Christoph},
  booktitle={CVPR},
  year={2022}
}

MeMViT builds on the MViT models:

@inproceedings{li2021improved,
  title={{MViTv2}: Improved multiscale vision transformers for classification and detection},
  author={Li, Yanghao and Wu, Chao-Yuan and Fan, Haoqi and Mangalam, Karttikeya and Xiong, Bo and Malik, Jitendra and Feichtenhofer, Christoph},
  booktitle={CVPR},
  year={2022}
}
@inproceedings{fan2021multiscale,
  title={Multiscale vision transformers},
  author={Fan, Haoqi and Xiong, Bo and Mangalam, Karttikeya and Li, Yanghao and Yan, Zhicheng and Malik, Jitendra and Feichtenhofer, Christoph},
  booktitle={ICCV},
  year={2021}
}

Model checkpoints

On the AVA dataset:

| name | mAP | #params (M) | GFLOPs | pre-train model | model |
| --- | --- | --- | --- | --- | --- |
| MeMViT-16, 16x4 | 29.3 | 35.4 | 58.7 | K400-pretrained model | model |
| MeMViT-24, 32x3 | 32.3 | 52.6 | 211.7 | K600-pretrained model | model |
| MeMViT-24, 32x3 | 34.4 | 52.6 | 211.7 | K700-pretrained model | model |

Installation

This repo is a modification of the PySlowFast repo; installation and data preparation follow the instructions there.
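
As a rough sketch, a typical setup might look like the following. The dependency list and the setup.py step are assumptions carried over from PySlowFast's INSTALL.md, which remains the authoritative reference.

# Sketch only; see PySlowFast's INSTALL.md for the authoritative steps.
# Assumes PyTorch and torchvision are already installed.
pip install 'git+https://github.com/facebookresearch/fvcore'
pip install simplejson av opencv-python psutil tensorboard
# Detectron2 is assumed here because PySlowFast uses it for the AVA detection head.
pip install 'git+https://github.com/facebookresearch/detectron2.git'
git clone https://github.com/facebookresearch/MeMViT
cd MeMViT
python setup.py build develop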

Training

Please modify the data paths and the pre-training checkpoint path in the config file accordingly, and run, e.g.,

python tools/run_net.py \
  --cfg configs/AVA/MeMViT_16_K400.yaml
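
Rather than editing the YAML, the same settings can usually be overridden on the command line. The keys below (AVA.FRAME_DIR, AVA.FRAME_LIST_DIR, AVA.ANNOTATION_DIR, TRAIN.CHECKPOINT_FILE_PATH) follow PySlowFast's config conventions and are shown here as an assumption; check configs/AVA/MeMViT_16_K400.yaml for the exact names.

# Sketch only; the AVA.* and TRAIN.* keys are assumed from PySlowFast's defaults.
python tools/run_net.py \
  --cfg configs/AVA/MeMViT_16_K400.yaml \
  AVA.FRAME_DIR path_to_ava_frames \
  AVA.FRAME_LIST_DIR path_to_frame_lists \
  AVA.ANNOTATION_DIR path_to_annotations \
  TRAIN.CHECKPOINT_FILE_PATH path_to_pretrained_checkpoint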

Evaluation

To evaluate a pretrained MeMViT model:

python tools/run_net.py \
  --cfg configs/AVA/MeMViT_16_K400.yaml \
  TRAIN.ENABLE False \
  TEST.CHECKPOINT_FILE_PATH path_to_your_checkpoint
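
For a quick single-GPU check, the run can typically be scaled down with further command-line overrides. NUM_GPUS and TEST.BATCH_SIZE are standard PySlowFast config keys, but verify them against the MeMViT config before relying on this sketch.

# Sketch only; assumes PySlowFast-style NUM_GPUS and TEST.BATCH_SIZE overrides.
python tools/run_net.py \
  --cfg configs/AVA/MeMViT_16_K400.yaml \
  TRAIN.ENABLE False \
  TEST.CHECKPOINT_FILE_PATH path_to_your_checkpoint \
  NUM_GPUS 1 \
  TEST.BATCH_SIZE 1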

Acknowledgement

This repository is built on top of the PySlowFast codebase.

License

MeMViT is released under the CC-BY-NC 4.0 license.
