The parameters of G_A and G_B are chained together in the optimizer, and loss_G combines the losses of both G_A and G_B. They can therefore be backpropagated together and the parameters updated in one step.
See line 103 (optimizer) and lines 202-233 (G_loss) for details.
loss_G is the sum of loss_G_A and loss_G_B.
In the original design, the parameters of G_A and G_B are chained together and backpropagated using the same loss (loss_G).
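For reference, a minimal PyTorch sketch of this pattern. The module names (netG_A, netG_B) and the dummy losses are placeholders, not the actual code or loss terms from dcl_model.py:

```python
import itertools
import torch

# Placeholder generators; in the real model these are the two image-to-image generators.
netG_A = torch.nn.Linear(8, 8)
netG_B = torch.nn.Linear(8, 8)

# Parameters of G_A and G_B are chained into a single optimizer.
optimizer_G = torch.optim.Adam(
    itertools.chain(netG_A.parameters(), netG_B.parameters()), lr=2e-4
)

x_A, x_B = torch.randn(4, 8), torch.randn(4, 8)

# Dummy per-generator losses standing in for the real GAN / consistency terms.
loss_G_A = netG_A(x_A).pow(2).mean()
loss_G_B = netG_B(x_B).pow(2).mean()

# The summed loss is backpropagated once; both generators are updated in one step.
loss_G = loss_G_A + loss_G_B
optimizer_G.zero_grad()
loss_G.backward()
optimizer_G.step()
```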
Should G_A backpropagate with loss_G_A and G_B backpropagate with loss_G_B, updating the parameters separately?
Yes, conceptually the parameters are updated separately, but the implementation here is effectively identical: PyTorch's autograd automatically routes the gradient from each loss term to the parameters it depends on.
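As a quick sanity check (toy linear modules and toy losses, not the actual DCL losses), one can verify that backpropagating the summed loss produces the same gradients on G_A as backpropagating loss_G_A alone, as long as loss_G_B does not involve G_A:

```python
import torch

netG_A = torch.nn.Linear(4, 4)
netG_B = torch.nn.Linear(4, 4)
x = torch.randn(2, 4)

# Toy losses: loss_G_A depends only on G_A, loss_G_B only on G_B.
loss_G_A = netG_A(x).pow(2).mean()
loss_G_B = netG_B(x).pow(2).mean()

# Backpropagate the summed loss once and record G_A's gradient.
(loss_G_A + loss_G_B).backward()
grad_from_sum = netG_A.weight.grad.clone()

# Backpropagate loss_G_A alone on a fresh graph and compare.
netG_A.weight.grad = None
netG_A(x).pow(2).mean().backward()
assert torch.allclose(grad_from_sum, netG_A.weight.grad)  # same gradient on G_A
```

Gradients are linear in the loss, so summing the losses before calling backward() gives each parameter exactly the gradient it would receive from the loss terms that actually depend on it.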
Hello, in dcl_model.py
why can you use self.loss_G to do backpropagation for both G_A and G_B?
Is there any special handling for this?