Notes on influential GAN papers. For applications of GANs to specific tasks, such as GAN-based image restoration and inpainting, only a few of the more important papers are recorded here.
These notes may overlap with the notes for other topics.
-
Exploiting Deep Generative Prior for Versatile Image Restoration and Manipulation
Xingang Pan, Xiaohang Zhan, Bo Dai, Dahua Lin, Chen Change Loy, Ping Luo
[ECCV 2020 Oral] [Pytorch-Code]
[DGP] [★★] (online finetuning, no paired training data required) Proposes using a pretrained GAN as a prior: without finetuning on any specific task, it can perform restoration tasks such as super-resolution and colorization, as well as editing tasks such as image morphing and category transfer. Building on standard GAN inversion, the paper jointly optimizes the latent vector z and the generator parameters, which yields better and more natural results (see the sketch below). -
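A minimal sketch of the joint optimization idea, not the official DGP code: the generator `G`, its `z_dim` attribute, and the degradation operator `phi` are assumed placeholders, and the paper's discriminator-feature loss and progressive finetuning schedule are simplified to a plain MSE here.

```python
import torch

def dgp_invert(G, y, phi, steps=500, lr_z=1e-3, lr_g=1e-4):
    """G: pretrained generator, y: degraded observation, phi: known degradation model."""
    z = torch.randn(1, G.z_dim, requires_grad=True)        # latent code to be optimized
    opt = torch.optim.Adam([
        {"params": [z], "lr": lr_z},
        {"params": G.parameters(), "lr": lr_g},            # key point: also finetune the generator weights
    ])
    for _ in range(steps):
        opt.zero_grad()
        x = G(z)                                           # current reconstruction
        loss = torch.nn.functional.mse_loss(phi(x), y)     # match the degraded observation
        loss.backward()
        opt.step()
    return G(z).detach()                                   # restored / edited image
```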
Seeing What a GAN Cannot Generate
David Bau, Jun-Yan Zhu, Jonas Wulff, William Peebles, Hendrik Strobelt, Bolei Zhou, Antonio Torralba
[ICCV 2019] [Project] [Pytorch-Code]
[GANSeeing] -
SinGAN: Learning a Generative Model from a Single Natural Image
Tamar Rott Shaham, Tali Dekel, Tomer Michaeli
[ICCV 2019 Best Paper] [Pytorch-Code]
[SinGAN] -
Self-Attention Generative Adversarial Networks
Han Zhang, Ian Goodfellow, Dimitris Metaxas, Augustus Odena
[ICML 2019] [Pytorch-Code] [TF-Code1] [TF-Code2]
[SAGAN] -
The relativistic discriminator: a key element missing from standard GAN
Alexia Jolicoeur-Martineau
[ICLR 2019] [Code]
[RAGAN] [★★☆] Proposes measuring the discriminator by the relative distance between real and fake samples, i.e. estimating how much more realistic a real sample is than the average fake sample, and vice versa (see the sketch below). -
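A minimal sketch of the relativistic average losses for the standard (BCE) GAN variant, assuming `real_logits` and `fake_logits` are raw discriminator outputs for a batch of real and generated images.

```python
import torch
import torch.nn.functional as F

def ragan_d_loss(real_logits, fake_logits):
    # real samples should score higher than the average fake, and fakes lower than the average real
    loss_real = F.binary_cross_entropy_with_logits(
        real_logits - fake_logits.mean(), torch.ones_like(real_logits))
    loss_fake = F.binary_cross_entropy_with_logits(
        fake_logits - real_logits.mean(), torch.zeros_like(fake_logits))
    return (loss_real + loss_fake) / 2

def ragan_g_loss(real_logits, fake_logits):
    # symmetric objective for the generator: fakes should beat the average real
    loss_fake = F.binary_cross_entropy_with_logits(
        fake_logits - real_logits.mean(), torch.ones_like(fake_logits))
    loss_real = F.binary_cross_entropy_with_logits(
        real_logits - fake_logits.mean(), torch.zeros_like(real_logits))
    return (loss_real + loss_fake) / 2
```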
Blind Super-Resolution Kernel Estimation using an Internal-GAN
Sefi Bell-Kligler, Assaf Shocher, Michal Irani
[NIPS 2019 Oral] [Project] [Pytorch-Code]
[KernelGAN] [★★☆] (zero-shot, degradation kernel estimation) An unsupervised method that estimates the degradation kernel and then performs super-resolution. A GAN whose generator is a stack of linear convolutional layers is trained on the input image itself; the trained generator collapses into a single kernel, which is taken as the image's degradation kernel. Training uses an LSGAN loss plus several regularization terms. The estimated blur kernel is then handed to ZSSR, which produces the super-resolution result, again without supervision (see the sketch below). -
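A minimal sketch of the deep linear generator idea; the layer sizes here are illustrative and not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class DeepLinearGenerator(nn.Module):
    """A stack of convolutions with no nonlinearity, so the whole network collapses to one blur kernel."""
    def __init__(self, scale=2):
        super().__init__()
        self.scale = scale
        self.layers = nn.Sequential(
            nn.Conv2d(1, 64, 7, bias=False),
            nn.Conv2d(64, 64, 5, bias=False),
            nn.Conv2d(64, 64, 3, bias=False),
            nn.Conv2d(64, 1, 1, bias=False),
        )

    def forward(self, x):
        blurred = self.layers(x)
        return blurred[:, :, ::self.scale, ::self.scale]    # blur, then subsample to the LR scale

    def effective_kernel(self):
        # feed a delta impulse through the linear layers; the response is the collapsed kernel
        support = 13                                        # combined support: (7-1)+(5-1)+(3-1)+1
        size = support + (support - 1)                      # large enough so the valid output is 13x13
        delta = torch.zeros(1, 1, size, size)
        delta[:, :, size // 2, size // 2] = 1.0
        with torch.no_grad():
            return self.layers(delta)                       # shape (1, 1, 13, 13)
```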
Spectral Normalization for Generative Adversarial Networks
Takeru Miyato, Toshiki Kataoka, Masanori Koyama, Yuichi Yoshida
[ICLR 2018 Oral] [Chainer-Code] [Blog1] [Blog2]
[SNGAN] [★★☆] (improving GAN training stability) A very widely used regularization method for the discriminator; note that it cannot be combined with BN. A usage sketch is given below. -
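Spectral normalization ships with PyTorch; a minimal discriminator sketch wrapping each convolution with torch.nn.utils.spectral_norm (and, per the note above, using no BatchNorm).

```python
import torch.nn as nn
from torch.nn.utils import spectral_norm

discriminator = nn.Sequential(
    spectral_norm(nn.Conv2d(3, 64, 4, stride=2, padding=1)),
    nn.LeakyReLU(0.2),
    spectral_norm(nn.Conv2d(64, 128, 4, stride=2, padding=1)),
    nn.LeakyReLU(0.2),
    spectral_norm(nn.Conv2d(128, 1, 4)),   # real/fake logits; note: no BatchNorm anywhere
)
```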
Progressive Growing of GANs for Improved Quality, Stability, and Variation
Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
[ICLR 2018] [Pytorch-Code]
[★★☆] -
Least Squares Generative Adversarial Networks
Xudong Mao, Qing Li, Haoran Xie, Raymond Y.K. Lau, Zhen Wang, Stephen Paul Smolley
[ICCV 2017] [TF-Code]
[LSGAN] [★★☆] -
Wasserstein Generative Adversarial Networks
**Towards Principled Methods for Training Generative Adversarial Networks**
**Improved Training of Wasserstein GANs**
Martin Arjovsky, Soumith Chintala, Léon Bottou
[ICML 2017] [TF-Code] [Blog]
[WGAN] [★★★] Proposes training GANs by minimizing the Wasserstein distance between the real and generated distributions, which greatly improves training stability; the two companion papers analyze why standard GAN training is unstable and replace weight clipping with a gradient penalty (WGAN-GP). A sketch of the critic objective is given below. -
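A minimal sketch of the WGAN critic objective with the WGAN-GP gradient penalty (the original WGAN enforces the Lipschitz constraint by weight clipping instead); `critic`, `real`, and `fake` are assumed placeholders for a critic network and 4D image batches.

```python
import torch

def critic_loss(critic, real, fake, gp_weight=10.0):
    loss = critic(fake).mean() - critic(real).mean()        # negative Wasserstein estimate
    # gradient penalty: push the critic's gradient norm towards 1 on real/fake interpolates
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grad = torch.autograd.grad(critic(x_hat).sum(), x_hat, create_graph=True)[0]
    gp = ((grad.flatten(1).norm(2, dim=1) - 1) ** 2).mean()
    return loss + gp_weight * gp

def generator_loss(critic, fake):
    return -critic(fake).mean()                             # generator maximizes the critic score
```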
Conditional Image Synthesis with Auxiliary Classifier GANs
Augustus Odena, Christopher Olah, Jonathon Shlens
[ICML 2017] [Unofficial-TF-Code] [Unofficial-Pytorch-Code]
[ACGAN] The generator's input consists of noise and a class label, while the discriminator outputs both a real/fake decision and a class prediction. Useful for generating class-conditional data (see the sketch below). -
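A minimal sketch of the two-headed discriminator; the layer shapes are illustrative only, and the class head is trained with a cross-entropy loss on both real and generated images.

```python
import torch.nn as nn

class ACDiscriminator(nn.Module):
    def __init__(self, n_classes, feat_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, feat_dim, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(feat_dim, feat_dim, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.adv_head = nn.Linear(feat_dim, 1)          # real / fake logit
        self.cls_head = nn.Linear(feat_dim, n_classes)  # auxiliary classifier logits

    def forward(self, x):
        h = self.backbone(x)
        return self.adv_head(h), self.cls_head(h)
```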
Improved Techniques for Training GANs
Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, Xi Chen
[arXiv 2016] [TF-Code]
[★★] Proposes several practical tricks for more stable GAN training (feature matching, minibatch discrimination, one-sided label smoothing, etc.) and introduces the Inception Score; a feature-matching sketch is given below. -
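A minimal sketch of one of those tricks, feature matching: the generator is trained to match the batch mean of an intermediate discriminator feature between real and fake images instead of directly maximizing the discriminator output. `disc_features` is an assumed helper returning those intermediate activations.

```python
import torch
import torch.nn.functional as F

def feature_matching_loss(disc_features, real, fake):
    f_real = disc_features(real).mean(dim=0)      # batch-mean feature of real images
    f_fake = disc_features(fake).mean(dim=0)      # batch-mean feature of generated images
    return F.mse_loss(f_fake, f_real.detach())    # generator loss: match the statistics
```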
InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets
Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, Pieter Abbeel
[arXiv 1606] [TF-Code] -
f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization
Sebastian Nowozin, Botond Cseke, Ryota Tomioka
[NIPS 2016] [TF-Code]
[f-GAN] [★] Classic generative-model paper; generalizes the GAN objective from the Jensen-Shannon divergence to arbitrary f-divergences via variational divergence minimization (objective reproduced below). -
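The paper's saddle-point objective, reproduced here for reference: T_ω is the variational (critic) function and f* is the Fenchel conjugate of the function f defining the divergence.

```latex
\min_{\theta}\max_{\omega}\; F(\theta,\omega)
  = \mathbb{E}_{x\sim P}\!\left[T_{\omega}(x)\right]
  - \mathbb{E}_{x\sim Q_{\theta}}\!\left[f^{*}\!\left(T_{\omega}(x)\right)\right]
```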
Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks
Alec Radford, Luke Metz, Soumith Chintala
[arXiv 1511] [TF-Code]
[DCGAN] [★★] Classic generative network; an all-convolutional GAN architecture with design guidelines that made image GANs stable to train. -
Conditional Generative Adversarial Nets
Mehdi Mirza, Simon Osindero
[arXiv 1411]
[CGAN] [★★] Image generation conditioned on a given class: the label is fed into the inputs of both G and D (see the sketch below).
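A minimal sketch of the conditioning, assuming simple MLP networks on flattened 28x28 images: the one-hot label is concatenated to the noise for G and to the image for D.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

n_classes, z_dim, img_dim = 10, 100, 28 * 28

G = nn.Sequential(nn.Linear(z_dim + n_classes, 256), nn.ReLU(),
                  nn.Linear(256, img_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(img_dim + n_classes, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))

z = torch.randn(16, z_dim)
y = F.one_hot(torch.randint(0, n_classes, (16,)), n_classes).float()
fake = G(torch.cat([z, y], dim=1))       # label goes into the generator input ...
logit = D(torch.cat([fake, y], dim=1))   # ... and into the discriminator input
```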