
Error caused by in-place operation in SAC #50

Open
VansWaston opened this issue Jan 15, 2025 · 0 comments

Comments

@VansWaston

I encountered the following error:

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [256, 1]], which is output 0 of AsStridedBackward0, is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).

at the line pi_loss.backward(retain_graph = True):

self.policy_optimizer.zero_grad()
pi_loss.backward(retain_graph = True)
nn.utils.clip_grad_norm_(self.policy_net.parameters(), 0.5)
self.policy_optimizer.step()

After enabling torch.autograd.set_detect_anomaly(True), the error report is as follows:

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [256, 1]], which is output 0 of AsStridedBackward0, is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!

Has anyone else run into this problem? Thank you!
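
In case it helps with reproducing this, here is a minimal, self-contained sketch (hypothetical toy networks, not this repository's actual code) that raises the same class of error. The policy loss is computed through the Q-network, then the critic's optimizer.step() modifies the Q-network's weights in place before pi_loss.backward() runs, so the tensors saved for the policy gradient are at a newer version than autograd expects:

import torch
import torch.nn as nn

# Hypothetical stand-ins for the SAC networks (not the repo's classes).
q_net = nn.Linear(4, 1)
policy_net = nn.Linear(4, 4)
q_optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
policy_optimizer = torch.optim.Adam(policy_net.parameters(), lr=1e-3)

obs = torch.randn(256, 4)

# The policy loss depends on q_net, so q_net's weights are saved
# in the graph that pi_loss.backward() will need.
action = policy_net(obs)
pi_loss = -q_net(action).mean()

# Critic update: optimizer.step() modifies q_net's weights in place,
# bumping their autograd version counters.
q_loss = (q_net(obs) - 1.0).pow(2).mean()
q_optimizer.zero_grad()
q_loss.backward()
q_optimizer.step()

# Backward through the stale policy graph now fails with
# "one of the variables needed for gradient computation has been
# modified by an inplace operation ...".
policy_optimizer.zero_grad()
pi_loss.backward(retain_graph = True)

If the cause here is the same, moving the policy update before the critic's optimizer.step(), or recomputing pi_loss with a fresh forward pass after the critic update, should avoid the version mismatch (and, as far as I can tell, removes the need for retain_graph = True).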
