
Variational Transformer

Code for the paper Variational Transformer: A Framework Beyond the Trade-off between Accuracy and Diversity for Image Captioning.

This codebase is merged from https://github.com/ruotianluo/self-critical.pytorch.

All data files and preprocessing steps are documented in that repository.

Train VaT using the config files in configs/vat; a sample invocation is sketched below.
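
Assuming this repository keeps the upstream self-critical.pytorch training entry point (tools/train.py with a --cfg flag and a run identifier), training might be launched as follows; the exact config filename under configs/vat and the run id are hypothetical placeholders, so substitute the ones shipped in this repository:

    python tools/train.py --cfg configs/vat/vat.yml --id vat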

Citation

If you find this work useful in your research, please cite:

@article{yang2024variational,
  title={Variational Transformer: A Framework Beyond the Tradeoff Between Accuracy and Diversity for Image Captioning},
  author={Yang, Longzhen and He, Lianghua and Hu, Die and Liu, Yihang and Peng, Yitao and Chen, Hongzhou and Zhou, MengChu},
  journal={IEEE Transactions on Neural Networks and Learning Systems},
  year={2024},
  publisher={IEEE}
}
