PoE-based variational autoencoders

The source code for "Multimodal Variational Autoencoders for Semi-Supervised Learning: In Defense of Product-of-Experts"

The misc folder contains precomputed vectorized representations for the CUB-Captions dataset and pretrained oracle networks for MNIST and SVHN.

The mmvae_mnist_split folder contains the source code for the MMVAE implementation of the MNIST-Split experiment.

The data for CUB-Captions can be downloaded from http://www.robots.ox.ac.uk/~yshi/mmdgm/datasets/cub.zip
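
A minimal Python sketch for fetching and unpacking the archive (the data/ target directory is an assumption, not something the repo prescribes; use whatever location the experiment scripts expect):

import os
import urllib.request
import zipfile

url = "http://www.robots.ox.ac.uk/~yshi/mmdgm/datasets/cub.zip"
os.makedirs("data", exist_ok=True)        # assumed target directory
archive = os.path.join("data", "cub.zip")

urllib.request.urlretrieve(url, archive)  # download the archive
with zipfile.ZipFile(archive) as zf:
    zf.extractall("data")                 # unpack next to the archive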

There are four PoE models available for training (a sketch of the product-of-experts fusion they share follows the list):

  • VAEVAE from Wu et al., "Multimodal generative models for compositional representation learning"
  • SVAE from the current paper
  • VAEVAE_star - the VAEVAE architecture with the SVAE loss function
  • SVAE_star - the SVAE architecture with the VAEVAE loss function
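
For reference, this is the standard product-of-experts fusion of diagonal-Gaussian posteriors that these models build on: the product of Gaussian experts is again Gaussian, with precision equal to the sum of the experts' precisions and a precision-weighted mean. The snippet below is an illustrative NumPy sketch, not the repo's API; the function name and shapes are assumptions.

import numpy as np

def poe_fusion(mus, logvars):
    # Product of diagonal Gaussians N(mu_i, var_i):
    # total precision is the sum of expert precisions,
    # and the mean is the precision-weighted average.
    precisions = [np.exp(-lv) for lv in logvars]  # 1 / var_i
    precision = sum(precisions)
    var = 1.0 / precision
    mu = var * sum(p * m for p, m in zip(precisions, mus))
    return mu, np.log(var)

# Two experts over a 3-dim latent, e.g. image and text encoders.
# A unit-Gaussian prior expert can be added as an extra (zeros, zeros) pair.
mu_img, lv_img = np.zeros(3), np.zeros(3)               # N(0, 1)
mu_txt, lv_txt = np.ones(3), np.log(0.5) * np.ones(3)   # N(1, 0.5)
mu, lv = poe_fusion([mu_img, mu_txt], [lv_img, lv_txt])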

The command-line template for training a model is:

python experiments/<experiment_name>/run.py <model_name> <share of unpaired samples> <optional: evaluation mode>

For example,

python experiments/mnist_split/run.py SVAE 0.9

will train the SVAE model with a 10% supervision level (0.9 is the share of unpaired samples).

python experiments/mnist_split/run.py SVAE 0.9 eval best

will generate evaluation metrics and images for the best epoch of training, and

python experiments/mnist_split/run.py SVAE 0.9 eval current

will generate evaluation metrics and images for the last epoch.
