Adversarial Ranking for Language Generation

Introduction

This is a TensorFlow implementation of Adversarial Ranking for Language Generation by Kevin Lin*, Dianqi Li*, Xiaodong He, Zhengyou Zhang, and Ming-Ting Sun, NIPS 2017.

Environment

The code is based on Python 2.7 and TensorFlow 1.2. It was developed and tested on a single NVIDIA M40 GPU.
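
To sanity-check the environment before running, a minimal sketch like the following can be used (the expected versions are simply the ones stated above):

# Sanity check for the assumed environment: Python 2.7 and TensorFlow 1.2.
import sys
import tensorflow as tf

print(sys.version)       # expect 2.7.x
print(tf.__version__)    # expect 1.2.x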

Run

python main.py

Note

  • For a fair comparison, we use the same LSTM units and the same pre-training and test configurations as SeqGAN. If you use tf.contrib.rnn.LSTMCell instead of their LSTM implementation, you will obtain different training results (see the sketch after this list).
  • save/target_params.pkl contains the parameters of the oracle model from SeqGAN. The log folder stores your training logs.
  • Ideally, you should obtain an NLL loss between 8.00 and 8.50. However, the outcome of adversarial training can depend on the quality of the pre-trained model.
  • For more evaluation metrics for adversarial text generation, please refer to this paper and repo.
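
For reference, the snippet below is a minimal sketch (not part of the repository) of loading the oracle parameters and constructing a standard tf.contrib.rnn.LSTMCell under TensorFlow 1.2; the variable names and hidden size are illustrative assumptions, and the repository's own LSTM implementation should be used to reproduce the reported numbers.

# Minimal sketch (Python 2.7 / TensorFlow 1.2); names and the hidden size are illustrative assumptions.
import cPickle
import tensorflow as tf

# Load the oracle (target LSTM) parameters shipped with the repository.
with open('save/target_params.pkl', 'rb') as f:
    target_params = cPickle.load(f)

# A standard LSTM cell from tf.contrib.rnn; swapping this in for the repository's
# hand-written LSTM changes the training results, as noted above.
hidden_dim = 32  # assumed value
cell = tf.contrib.rnn.LSTMCell(hidden_dim)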

Citing RankGan

If you find RankGAN useful in your research, please consider citing:

@inproceedings{lin2017adversarial,
  title={Adversarial ranking for language generation},
  author={Lin, Kevin and Li, Dianqi and He, Xiaodong and Zhang, Zhengyou and Sun, Ming-Ting},
  booktitle={Advances in Neural Information Processing Systems},
  pages={3155--3165},
  year={2017}
}

Acknowledgements

This code is based on SeqGAN. Many thanks to the authors!
