Loss raise to abnormal and batchsize #124
Comments
I have the same problem. The device I used is an RTX 3090 Ti. After 200 epochs, both the char loss and the edge loss grow gradually.
I'm in the same situation. How can I solve it?
```
torch.nn.utils.clip_grad_norm_(self.net.parameters(), 0.01)
```
Could you tell me where to put this code?
```
loss.backward()
torch.nn.utils.clip_grad_norm_(model_restoration.parameters(), 0.01)
optimizer.step()
```
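To make the placement concrete, here is a minimal, self-contained training-step sketch in PyTorch. The model, optimizer, and data below are stand-ins (the thread's `model_restoration` and `self.net` refer to the project's own network); the point is only the ordering: clip after `loss.backward()` and before `optimizer.step()`, with the max norm of 0.01 suggested above.

```python
import torch

# Hypothetical stand-ins for the restoration network and its optimizer
model = torch.nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

x = torch.randn(8, 4)
target = torch.randn(8, 1)

optimizer.zero_grad()
loss = torch.nn.functional.mse_loss(model(x), target)
loss.backward()
# Clip the total gradient norm to 0.01, as suggested in this thread.
# Must come after backward() (grads exist) and before step() (grads used).
torch.nn.utils.clip_grad_norm_(model.parameters(), 0.01)
optimizer.step()

# Total gradient norm across all parameters is now at most 0.01
total_norm = torch.norm(
    torch.stack([p.grad.norm() for p in model.parameters()])
)
```

Clipping caps the update magnitude, which can prevent the kind of sudden loss blow-up described in this issue, though a very small max norm like 0.01 may also slow convergence.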
The loss rises to several million after 50 epochs (before epoch 50 it is normal). Also, why can I only use batch size 2 on an RTX 3090 when training? Anything larger runs out of memory.