
Relation to Resolution dependent GAN interpolation? #1

Open
justinpinkney opened this issue Nov 15, 2020 · 1 comment

Comments

@justinpinkney

Hi, I really like the look of the paper. I've only had a chance to skim through it so far, but it looks really nice.

Reading your paragraph in the abstract:

We learn a model by leveraging hierarchical features: (a) structural information contained in the shallow layers and (b) semantic information extracted from the deep layers. To enable the training of deep I2I models on small datasets, we propose a novel transfer learning method that transfers knowledge from pre-trained GANs.

I feel like there is a strong connection to my own research in Resolution Dependent GAN Interpolation for Controllable Image Synthesis Between Domains, where we explore controlled interpolation of the "hierarchical features" (as you describe them) between two models produced by transfer learning. In particular, when you note that:

To the best of our knowledge, transferring knowledge from pre-trained GANs to I2I translation is not explored yet.

I'd suggest that maybe the above paper is an example of this, as the Toonification results in section 3 are an image-to-image translation application produced by leveraging pre-trained GANs.
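(For readers unfamiliar with that paper: the core idea, blending a base generator's weights with a fine-tuned copy's weights on a per-resolution basis, can be sketched roughly as below. The weight keys, resolutions, and mix schedule here are illustrative assumptions for the sketch, not the exact setup from either paper.)

```python
# Minimal sketch of resolution-dependent model interpolation:
# blend two sets of generator weights with a per-resolution mix factor,
# so coarse (structural) layers can come from one model and fine
# (texture/style) layers from the other.

def blend_models(base_weights, tuned_weights, mix_per_resolution):
    """Interpolate two weight dicts keyed by (resolution, layer_name).

    mix_per_resolution maps a resolution (e.g. 4, 8, ..., 1024) to a
    factor in [0, 1]: 0 keeps the base model, 1 keeps the fine-tuned one.
    Resolutions missing from the schedule default to the base model.
    """
    blended = {}
    for key, w_base in base_weights.items():
        resolution, _layer_name = key
        t = mix_per_resolution.get(resolution, 0.0)
        blended[key] = (1.0 - t) * w_base + t * tuned_weights[key]
    return blended

# Toy example with scalar "weights": keep the coarse 4x4 layer from the
# base model, take the fine 64x64 layer from the fine-tuned model.
base = {(4, "conv"): 1.0, (64, "conv"): 1.0}
tuned = {(4, "conv"): 3.0, (64, "conv"): 3.0}
mix = {4: 0.0, 64: 1.0}
print(blend_models(base, tuned, mix))  # {(4, 'conv'): 1.0, (64, 'conv'): 3.0}
```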

Interested to hear your thoughts!
Cheers,
Justin

@yaxingwang
Owner

Hi Justin,
Thank you.
Your paper looks interesting. Specifically, you provide a method to interpolate images between two models (e.g., the base model and the adapted model), instead of the approach shown in StyleGAN, which synthesizes new images using a single model (i.e., the base model). The generated images are convincing and amazing. I really like it.

My work is different from (but related to) that usage of the pre-trained model. We investigated knowledge transfer for traditional I2I translation, where the user can manipulate the image in the target domain (e.g., content and style), as in CycleGAN, StarGAN, MUNIT, DRIT, etc.

Finally, thank you for your interest. Your work is great.

Best
Yaxing
