
using self.loss_G to backpropagation both G_A and G_B #6

Open
kk2487 opened this issue Feb 14, 2022 · 3 comments

kk2487 commented Feb 14, 2022

Hello, in dcl_model.py

Why can you use self.loss_G to backpropagate through both G_A and G_B?
Is there any special way to handle this?

JunlinHan (Owner) commented

Hi kk2487,
Thanks for your questions.

The parameters of G_A and G_B are chained together in the optimizer, and loss_G includes the losses of both G_A and G_B. Thus they can be backpropagated together and the parameters updated in one go.

See line 103 (optimizer) and lines 202-233 (G_loss) for details.
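As a rough illustration, here is a minimal PyTorch sketch of this pattern; the networks and loss terms below are placeholder stand-ins, not the actual dcl_model.py code:

```python
import itertools
import torch

# Placeholder stand-ins for the two generators (the real model uses its own networks).
netG_A = torch.nn.Linear(8, 8)
netG_B = torch.nn.Linear(8, 8)

# Parameters of G_A and G_B are chained into a single optimizer (cf. line 103).
optimizer_G = torch.optim.Adam(
    itertools.chain(netG_A.parameters(), netG_B.parameters()), lr=2e-4
)

real_A = torch.randn(4, 8)
real_B = torch.randn(4, 8)

# loss_G sums the loss terms of both generators (cf. lines 202-233);
# these toy losses only stand in for the real ones.
loss_G_A = netG_A(real_A).pow(2).mean()
loss_G_B = netG_B(real_B).pow(2).mean()
loss_G = loss_G_A + loss_G_B

optimizer_G.zero_grad()
loss_G.backward()   # gradients flow to the parameters of both G_A and G_B
optimizer_G.step()  # both sets of parameters are updated in one go
```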


kk2487 commented Feb 15, 2022

Thanks for your response.

I have another question.

loss_G is the sum of loss_G_A and loss_G_B.
In the original design, the parameters of G_A and G_B are chained together and backpropagated using the same loss (loss_G).

Should G_A instead backpropagate with loss_G_A and G_B with loss_G_B, updating the parameters separately?

JunlinHan (Owner) commented

> Should G_A instead backpropagate with loss_G_A and G_B with loss_G_B, updating the parameters separately?

Yes, the parameters should be updated separately. Here the implementation is actually identical (PyTorch automatically matches each loss with its corresponding parameters).
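As a small sanity check of this claim, here is a toy sketch (placeholder modules, not the repository's actual losses): when each loss depends only on its own generator's parameters, summing the losses before backward() gives the same gradients as two separate backward passes.

```python
import torch

# Hypothetical toy modules standing in for G_A and G_B.
netG_A = torch.nn.Linear(4, 4)
netG_B = torch.nn.Linear(4, 4)
x_A, x_B = torch.randn(2, 4), torch.randn(2, 4)

def losses():
    # loss_A depends only on netG_A, loss_B only on netG_B.
    return netG_A(x_A).pow(2).mean(), netG_B(x_B).pow(2).mean()

# Combined backward on the summed loss (as in the repository).
loss_A, loss_B = losses()
(loss_A + loss_B).backward()
grad_combined = netG_A.weight.grad.clone()

# Separate backward passes, one loss per generator.
netG_A.zero_grad()
netG_B.zero_grad()
loss_A, loss_B = losses()
loss_A.backward()
loss_B.backward()
grad_separate = netG_A.weight.grad.clone()

print(torch.allclose(grad_combined, grad_separate))  # True: the gradients match
```

In other words, autograd only sends gradients to the parameters each loss actually depends on, so summing the losses does not mix the two updates.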
