srl-ethz/mmvaeplus
MMVAE+: Enhancing the Generative Quality of Multimodal VAEs without Compromises

Official PyTorch implementation for MMVAE+, introduced in the paper MMVAE+: Enhancing the Generative Quality of Multimodal VAEs without Compromises, published at ICLR 2023.

UPDATE (Jul 2024): new, improved code release!

Download datasets

[Figure: overview of the datasets]

```shell
mkdir data
cd data
curl -L -o data_ICLR_2.zip https://polybox.ethz.ch/index.php/s/wmAXzDAKn3Qogp7/download
unzip data_ICLR_2.zip
curl -L -o cub.zip http://www.robots.ox.ac.uk/~yshi/mmdgm/datasets/cub.zip
unzip cub.zip
```
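If `curl` or `unzip` are not available, the same download-and-extract steps can be scripted with Python's standard library. This is a hedged sketch, not part of the repository; the helper name `fetch_and_extract` is hypothetical.

```python
import io
import urllib.request
import zipfile
from pathlib import Path

def fetch_and_extract(url: str, dest: str) -> list[str]:
    """Download a zip archive from `url`, extract it under `dest`,
    and return the list of extracted member names."""
    dest_path = Path(dest)
    dest_path.mkdir(parents=True, exist_ok=True)
    # Read the whole archive into memory, then extract it.
    with urllib.request.urlopen(url) as resp:
        archive = io.BytesIO(resp.read())
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(dest_path)
        return zf.namelist()

# Usage, mirroring the shell commands above:
# fetch_and_extract("https://polybox.ethz.ch/index.php/s/wmAXzDAKn3Qogp7/download", "data")
# fetch_and_extract("http://www.robots.ox.ac.uk/~yshi/mmdgm/datasets/cub.zip", "data")
```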

Experiments

Run on PolyMNIST dataset

```shell
bash commands/run_polyMNIST_experiment.sh
```

Run on CUB Image-Captions dataset

```shell
bash commands/run_CUB_experiment.sh
```

Citing

```bibtex
@inproceedings{palumbo2023mmvaeplus,
  title={{MMVAE}+: Enhancing the Generative Quality of Multimodal {VAE}s without Compromises},
  author={Emanuele Palumbo and Imant Daunhawer and Julia E Vogt},
  booktitle={International Conference on Learning Representations},
  year={2023},
}
```

Acknowledgements

We thank the authors of the MMVAE repo, on which our codebase is based and from which we obtained the link to the CUB Image-Captions dataset. We also thank the authors of the MoPoE repo for useful code.
