
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation #12

Open
ghzmwhdk777 opened this issue Dec 2, 2020 · 2 comments

Comments

@ghzmwhdk777

start training...

Training epoch: 1
/usr/local/lib/python3.6/dist-packages/torchvision/transforms/functional.py:92: UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors. This means you can write to the underlying (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to protect its data or make it writeable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at /pytorch/torch/csrc/utils/tensor_numpy.cpp:141.)
img = torch.from_numpy(np.array(pic, np.float32, copy=False))
[the same warning and line are printed three more times]
Traceback (most recent call last):
  File "main.py", line 120, in <module>
    main(mode=1)
  File "main.py", line 50, in main
    model.train()
  File "/content/drive/MyDrive/Colab Notebooks/SR/edge-informed-sisr-master/src/edge_match.py", line 133, in train
    self.sr_model.backward(gen_loss, dis_loss)
  File "/content/drive/MyDrive/Colab Notebooks/SR/edge-informed-sisr-master/src/models.py", line 283, in backward
    gen_loss.backward()
  File "/usr/local/lib/python3.6/dist-packages/torch/tensor.py", line 221, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py", line 132, in backward
    allow_unreachable=True)  # allow_unreachable flag

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [1, 512, 4, 4]] is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).


Does anybody know this error and how to fix it?
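For reference, the error means that some tensor autograd saved during the forward pass was overwritten in place before `backward()` ran, so its internal version counter no longer matches. A minimal sketch of how this class of error arises and the out-of-place fix (toy tensors, not this repository's code):

```python
import torch

# The hint from the error message: anomaly detection points at the
# forward-pass op whose saved tensor was later modified.
torch.autograd.set_detect_anomaly(True)

x = torch.ones(3, requires_grad=True)
y = x.exp()      # exp() saves its output y for the backward pass
y.add_(1)        # in-place: bumps y's version counter past what backward expects
caught = False
try:
    y.sum().backward()
except RuntimeError:
    caught = True  # "modified by an inplace operation"

# Out-of-place fix: `y + 1` allocates a fresh tensor, so the saved
# output is untouched and backward succeeds.
x2 = torch.ones(3, requires_grad=True)
y2 = x2.exp() + 1
y2.sum().backward()
print(caught, x2.grad)  # True tensor([2.7183, 2.7183, 2.7183])
```

The same failure appears for `+=`, `.add_()`, slice assignment, or `inplace=True` activations applied to a tensor that some earlier op saved for its gradient.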

@jakubcaputa

Hey, did you solve this issue?

@zhangzheng2242

start training...

Training epoch: 1
[same non-writeable NumPy array warnings and the same traceback as in the original post]

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [1, 512, 4, 4]] is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).

Does anyone know this error?

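Since the traceback points at `gen_loss.backward()` inside a combined generator/discriminator backward, one common cause (especially after the PyTorch 1.5 autograd changes) is an `optimizer.step()` running between the generator's forward pass and `gen_loss.backward()`: the step updates the discriminator's weights in place, and the later backward finds those weights at a newer version than the one the forward pass saved. A hedged sketch with toy stand-in networks (hypothetical shapes, not this repository's models) showing the failing and the working ordering:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
gen = nn.Linear(4, 4)   # toy generator
dis = nn.Linear(4, 1)   # toy discriminator
opt_d = torch.optim.SGD(dis.parameters(), lr=0.1)
opt_g = torch.optim.SGD(gen.parameters(), lr=0.1)

def losses():
    fake = gen(torch.randn(2, 4))
    dis_loss = -dis(fake.detach()).mean()  # detached: no grad into gen
    gen_loss = dis(fake).mean()            # live graph through dis and gen
    return dis_loss, gen_loss

# Failing order: opt_d.step() rewrites dis.weight IN PLACE between the
# generator forward and gen_loss.backward(), so backward sees the saved
# weight at a newer version -- the reported RuntimeError.
dis_loss, gen_loss = losses()
dis_loss.backward()
opt_d.step()
stale = False
try:
    gen_loss.backward()
except RuntimeError:
    stale = True

# Working order: run every backward before any optimizer step.
opt_d.zero_grad(); opt_g.zero_grad()
dis_loss, gen_loss = losses()
dis_loss.backward()
gen_loss.backward()   # grads from this also reach dis; real GAN code
opt_d.step(); opt_g.step()  # often toggles requires_grad to avoid that

print(stale)  # True: the first ordering reproduced the error
```

Whether this matches the repository's `models.py` depends on how `backward(gen_loss, dis_loss)` interleaves `backward()` and `step()` there; checking that ordering (or running with anomaly detection enabled) would confirm it.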
