I ran train.py with a custom dataset successfully on Colab/Colab Pro on March 6, 2023.
I ran my script to install everything I need, and got this error yesterday:
```
Traceback (most recent call last):
  File "train.py", line 707, in <module>
    train(0, args=args)
  File "train.py", line 433, in train
    losses = backward_and_log(run_name, net_outs, targets, masks, num_crowds)
  File "train.py", line 325, in backward_and_log
    losses = criterion(out, targets, masks, num_crowds)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/gdrive/MyDrive/ColabNotebooks/yolact_edge-master/yolact_edge/layers/modules/multibox_loss.py", line 159, in forward
    losses.update(self.lincomb_mask_loss(pos, idx_t, loc_data, mask_data, priors, proto_data, masks, gt_box_t, inst_data))
  File "/content/gdrive/MyDrive/ColabNotebooks/yolact_edge-master/yolact_edge/layers/modules/multibox_loss.py", line 442, in lincomb_mask_loss
    downsampled_masks = F.interpolate(masks[idx].unsqueeze(0), (mask_h, mask_w),
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/functional.py", line 3866, in interpolate
    raise ValueError(
ValueError: Input and output must have the same number of spatial dimensions, but got input with spatial dimensions of [256] and output size of (138, 138). Please provide input tensor in (N, C, d1, d2, ...,dK) format and output size in (o1, o2, ...,oK) format.
```
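For reference, the error can be reproduced in isolation (this is a standalone sketch, not yolact_edge code): `F.interpolate` expects its input in `(N, C, d1, ..., dK)` format, so a 2-D output size like `(138, 138)` requires a 4-D input tensor. The traceback above suggests `masks[idx].unsqueeze(0)` is only 3-D here, leaving a single spatial dimension of size 256.

```python
import torch
import torch.nn.functional as F

# A 3-D tensor has only ONE spatial dimension ([256]) from
# F.interpolate's point of view, so asking for a 2-D output
# size (138, 138) raises the same ValueError as in the traceback.
bad = torch.rand(1, 3, 256)
try:
    F.interpolate(bad, (138, 138))
except ValueError as e:
    print("ValueError:", e)

# A 4-D (N, C, H, W) tensor matches the 2-D output size, so the call succeeds.
good = torch.rand(1, 3, 256, 256)
out = F.interpolate(good, (138, 138))
print(out.shape)  # torch.Size([1, 3, 138, 138])
```

This points at the mask tensors coming out of the data loader having an unexpected shape, rather than at CUDA or Python versions.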
I have tried downgrading:
- CUDA 11.8 -> 11.6 (with the matching cuDNN and TensorRT versions)
- Python 3.9 -> 3.8

The tensorflow-gpu (2.9.2) and PyTorch (1.13.1+cu116) versions are the same as before. I tried four version combinations in total, but still get the same error. I guess it is related to some package version.

Could anyone whose train.py still works share their `pip list` output?
yu123569 changed the title from "Problem with multiprocessing_context" to "Problem with input with spatial dimensions of [256] and output size of (138, 138)" on Mar 14, 2023.