out of memory error #5
I've also come across this problem. Have you fixed it?
I tried replacing `volatile=True` with `with torch.no_grad()`, and it works: the out-of-memory error no longer appears. It's probably a PyTorch version issue, since `volatile` was removed in PyTorch 0.4. Here is an example of where the original code uses `volatile=True` and how to modify it.
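A minimal sketch of the change, using a small stand-in feature extractor instead of the repo's `self.network.features` (the real model is not reproduced here):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for self.network.features; any module works
# for illustrating the volatile -> no_grad migration.
features = nn.Sequential(nn.Conv2d(1, 8, 3), nn.ReLU())

input_tensor = torch.randn(1, 1, 32, 32)

# Old (PyTorch <= 0.3): volatile=True suppressed graph construction.
# res = features(Variable(input_tensor, volatile=True))

# New (PyTorch >= 0.4): run inference under torch.no_grad() so no
# autograd graph is retained, which is what blows up memory at test time.
with torch.no_grad():
    res = features(input_tensor)

print(res.requires_grad)  # no gradients are tracked
```

Without `no_grad()`, each forward pass keeps activations alive for backprop, so memory grows even at a test batch size of 1.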
------------ Options -------------
batch_size: 64
beta1: 0.9
beta2: 0.999
channels: 1
checkpoints_path: ./checkpoints
cuda: True
dataset_path: ./dataset
debug: False
ensemble: 1
epochs: 30
gpu_ids: 1
log_interval: 50
lr: 0.001
network: 0
no_cuda: False
patch_stride: 256
seed: 1
test_batch_size: 1
testset_path: ./dataset/test
-------------- End ----------------
Loading "patch-wise" model...
Loading "patch-wise" model...
/home/zaikun/zaikun/ICIAR2018/src/models.py:222: UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.
  res = self.network.features(Variable(input_tensor, volatile=True))
00) Normal (100.0%) test0.tif
Traceback (most recent call last):
File "test.py", line 23, in <module>
im_model.test(args.testset_path, ensemble=args.ensemble == 1)
File "/home/zaikun/zaikun/ICIAR2018/src/models.py", line 430, in test
patches = self.patch_wise_model.output(image)
File "/home/zaikun/zaikun/ICIAR2018/src/models.py", line 222, in output
res = self.network.features(Variable(input_tensor, volatile=True))
File "/home/zaikun/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "/home/zaikun/.local/lib/python3.6/site-packages/torch/nn/modules/container.py", line 91, in forward
input = module(input)
File "/home/zaikun/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "/home/zaikun/.local/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 301, in forward
self.padding, self.dilation, self.groups)
RuntimeError: CUDA error: out of memory
I am using a GPU with 24 GB of memory, and even with a small test batch size I still hit this error when running the test command, so I suspect something is wrong here.
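To confirm whether inference is leaking autograd state rather than being legitimately too large, it can help to watch allocated CUDA memory around a forward pass. A quick sketch (the helper name is illustrative, not from the repo):

```python
import torch

def report_cuda_memory(tag: str) -> None:
    # torch.cuda.memory_allocated() reports bytes currently held by tensors.
    if torch.cuda.is_available():
        mib = torch.cuda.memory_allocated() / 1024**2
        print(f"{tag}: {mib:.1f} MiB allocated")
    else:
        print(f"{tag}: CUDA not available")

report_cuda_memory("before forward")
# ... run the model forward pass here ...
report_cuda_memory("after forward")
```

If the "after" number keeps climbing across test images, the activations are being retained for backprop and wrapping the forward pass in `torch.no_grad()` should fix it; if a single pass already exceeds the 24 GB, the patch size or model itself is the issue.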