
Experiment of using Instance Normalization vs Layer Normalization on the Decoder (MUNIT) #171

tom99763 opened this issue Nov 12, 2022 · 0 comments

Here are the results of using different normalization methods on the decoder.

[Image 1]

Based on how each normalization method computes its statistics, the MUNIT architecture can be summarized as follows.

[Image 2]

This means that since the upsampling layers have no mechanism to re-tune channel statistics (such as Adaptive Instance Normalization, as used in StyleGAN), applying instance normalization during upsampling destroys the channel correlations that were tuned earlier by the ResNet blocks with Adaptive Instance Normalization. Layer normalization, which normalizes over all channels jointly, preserves the relative per-channel statistics and therefore keeps the injected style intact.
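This effect can be sketched numerically. The snippet below is a minimal NumPy illustration (not MUNIT code): `gamma`/`beta` stand in for the per-channel scale and shift that AdaIN would inject, and the feature shapes are made up for the example. It shows that instance normalization erases those per-channel statistics entirely, while layer normalization only rescales globally and keeps the relative differences between channels.

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    """Normalize each channel of x (C, H, W) with its own mean and std."""
    mean = x.mean(axis=(1, 2), keepdims=True)
    std = x.std(axis=(1, 2), keepdims=True)
    return (x - mean) / (std + eps)

def layer_norm(x, eps=1e-5):
    """Normalize over all channels and spatial positions jointly."""
    return (x - x.mean()) / (x.std() + eps)

rng = np.random.default_rng(0)
content = rng.standard_normal((4, 8, 8))      # hypothetical decoder features

# Stand-in for AdaIN: a style-specific scale and shift per channel.
gamma = np.array([2.0, 0.5, 3.0, 1.0]).reshape(-1, 1, 1)
beta = np.array([1.0, -1.0, 0.0, 2.0]).reshape(-1, 1, 1)
styled = gamma * instance_norm(content) + beta

# Instance norm in the upsampling path undoes gamma/beta completely:
# 'wiped' is numerically identical to the un-styled features.
wiped = instance_norm(styled)

# Layer norm only rescales globally, so the relative channel statistics
# injected by the style survive:
kept = layer_norm(styled)
channel_stds = kept.std(axis=(1, 2))          # still differ across channels
```

Here the per-channel standard deviations of `kept` still reflect `gamma`, whereas `wiped` carries no trace of the style parameters, which matches the behavior observed in the experiment above.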
