
Learning Video Object Segmentation from Unlabeled Videos (CVPR2020)


carrierlxk/MuG


MuG

code for CVPR 2020 paper: Learning Video Object Segmentation from Unlabeled Videos

Pre-computed results

The segmentation results for object-level zero-shot VOS (DAVIS16-val), instance-level zero-shot VOS (DAVIS2017-test-dev), and one-shot VOS (DAVIS2016-val and DAVIS2017-val), under both unsupervised and weakly supervised conditions, can be downloaded from Google Drive.

Running the code

  1. Set up the environment: PyTorch 1.1.0, tqdm, scipy 1.2.1.
  2. Prepare the training data. Download training videos from the GOT-10k tracking dataset or the YouTube-VOS dataset, then generate a CSV file with rows in the format 'GOT-10k_Train_000001, 120', where the first field is the video name and the second is the video length (number of frames).
  3. Download the weakly supervised saliency generation model and inference code from here, and the unsupervised saliency detection code from [here](https://github.com/ruanxiang/mr_saliency).
  4. Update all paths in MuG_GOT_global_new_residual.py, my_model_new_residual.py, and libs/model_match_residual.py, then run run_train_all_GOT_global_new_residual.sh to train the network.
  5. Run run_ZVOS.sh for network inference.

Other related projects/papers:

Zero-shot Video Object Segmentation via Attentive Graph Neural Networks

See More, Know More: Unsupervised Video Object Segmentation with Co-Attention Siamese Networks (CVPR19)

Saliency-Aware Geodesic Video Object Segmentation (CVPR15)

Learning Unsupervised Video Primary Object Segmentation through Visual Attention (CVPR19)

Joint-task Self-supervised Learning for Temporal Correspondence

For any comments, please email: [email protected]
