
hierarchical KL loss #17

Open
ycyang18 opened this issue Oct 5, 2023 · 1 comment


ycyang18 commented Oct 5, 2023

Hi! Very impressive work, thanks for sharing!
I have a question regarding the hierarchical KL loss. In the original paper, the hierarchical KL loss is stated as

$\sum_{l} \mathrm{KL}\big(q(z_l \mid x, z_{l-1}) \,\|\, p(z_l \mid z_{l-1})\big)$

i.e., a KL term between the encoder (inference model) and the decoder (generative model).

I am wondering why you model the KL loss between $p(z_l \mid x, z_{l-1})$ and $p(z_l \mid z_{l-1})$, which both come from the decoder:

# prior parameters of p(z_l | z_{l-1}), predicted from the decoder state only
mu, log_var = self.condition_z[i](decoder_out).chunk(2, dim=1)
# offset parameters, additionally conditioned on the encoder feature xs[i]
delta_mu, delta_log_var = self.condition_xz[i](torch.cat([xs[i], decoder_out], dim=1)).chunk(2, dim=1)
kl_losses.append(kl_2(delta_mu, delta_log_var, mu, log_var))

Please let me know if there are any misunderstandings.
Thanks a lot in advance! :)


alephpi commented Oct 22, 2024

Maybe it's too late, but you can check this issue. In fact, what the author denotes as $p(z_l \mid x, z_{l-1})$ is exactly the inference model $q$, since only the inference model is conditionally dependent on the input $x$.
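To make this concrete: the snippet above is consistent with an NVAE-style *residual* parameterization, in which the posterior $q(z_l \mid x, z_{l-1})$ reuses the prior's parameters plus learned offsets, $q = \mathcal{N}(\mu + \Delta\mu,\; e^{\log\sigma^2 + \Delta\log\sigma^2})$. Under that assumption the KL between posterior and prior has a closed form that depends only on the offsets and the prior variance. The function below is a hypothetical stand-in for the repo's `kl_2` (I have not checked its actual definition), written as a minimal sketch:

```python
import torch

def kl_residual(delta_mu, delta_log_var, mu, log_var):
    # KL( N(mu + delta_mu, exp(log_var + delta_log_var))
    #   || N(mu, exp(log_var)) ), computed element-wise.
    # Closed form: 0.5 * (delta_mu^2 / var_p + exp(delta_log_var)
    #                     - delta_log_var - 1)
    var_p = log_var.exp()  # prior variance
    return 0.5 * (delta_mu.pow(2) / var_p
                  + delta_log_var.exp() - delta_log_var - 1)

# Sanity check: zero offsets mean q == p, so the KL vanishes.
mu, log_var = torch.randn(4, 8), torch.randn(4, 8)
zeros = torch.zeros(4, 8)
print(torch.allclose(kl_residual(zeros, zeros, mu, log_var), zeros))  # True
```

Note that `kl_residual(delta_mu, delta_log_var, mu, log_var)` matches the argument order of the `kl_2` call in the question, which is what suggests the residual reading: the first two arguments are the posterior's deviation from the prior, not a full second set of Gaussian parameters.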
