Hi, I really like the look of the paper. I've only had a chance to skim through it so far, but it looks really nice.
Reading this paragraph in your abstract:
We learn a model by leveraging hierarchical features: (a) structural information contained in the shallow layers and (b) semantic information extracted from the deep layers. To enable the training of deep I2I models on small datasets, we propose a novel transfer learning method that transfers knowledge from pre-trained GANs.
I feel like there is a strong connection to my own research in Resolution Dependent GAN Interpolation for Controllable Image Synthesis Between Domains, where we explore controlled interpolation of the "hierarchical features" (as you describe them) between two models generated by transfer learning. In particular, when you note that:
To the best of our knowledge, transferring knowledge from pre-trained GANs to I2I translation is not explored yet.
I'd suggest that maybe the above paper is an example of this, as the Toonification results in Section 3 are an image-to-image translation application produced by leveraging pre-trained GANs.
Interested to hear your thoughts!
Cheers,
Justin
Hi Justin,
Thank you.
Your paper looks interesting. Specifically, you provide a method to interpolate images between two models (e.g., the base model and the adapted model), rather than the approach shown in StyleGAN, which synthesizes new images using a single model (i.e., the base model). The generated images are convincing and amazing. I really like it.
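As an aside for readers following along, a minimal sketch of that two-model interpolation idea might look like the following in PyTorch. This is not code from either paper: `interpolate_generators`, `alpha_for_layer`, `base_G`, and `adapted_G` are hypothetical names, and the sketch assumes the base and adapted generators share an identical architecture.

```python
# A minimal sketch (not from either paper's codebase) of layer-wise weight
# interpolation between a base generator and a fine-tuned (adapted) copy.
import copy

import torch


def interpolate_generators(base_G, adapted_G, alpha_for_layer):
    """Blend the parameters of two structurally identical generators.

    alpha_for_layer maps a parameter name to a blend factor in [0, 1]:
    0 keeps the base weights, 1 takes the adapted weights. Varying the
    factor with layer depth (low for shallow/structural layers, high for
    deep/semantic ones) gives resolution-dependent control over which
    domain each level of detail comes from.
    """
    blended = copy.deepcopy(base_G)
    base_params = dict(base_G.named_parameters())
    adapted_params = dict(adapted_G.named_parameters())
    with torch.no_grad():
        for name, param in blended.named_parameters():
            a = alpha_for_layer(name)
            param.copy_((1 - a) * base_params[name] + a * adapted_params[name])
    return blended


# Hypothetical usage: keep base weights everywhere except layers whose
# names mark them as deep, which take the adapted weights instead.
# blend = interpolate_generators(
#     base_G, adapted_G,
#     alpha_for_layer=lambda name: 1.0 if "deep" in name else 0.0,
# )
```

The schedule is left as a user-supplied function because, in Justin's paper, the blend factor depends on the resolution of the synthesis layer, and naming conventions differ across generator implementations.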
My work is different from (but related to) yours in its usage of the pre-trained model. We investigated knowledge transfer for traditional I2I translation, where the user can manipulate images in the target domain (e.g., content and style), as in CycleGAN, StarGAN, MUNIT, DRIT, etc.
Finally, thank you for your interest. Your work is great.