[BUG] RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument weight in method wrapper_CUDA__native_layer_norm)
#197 · Open · dddlli opened this issue on Apr 13, 2024 · 5 comments
```
  File "/home/pete/PycharmProjects/Time-Series-Classification-master/model/mmm4tsc.py", line 224, in forward
    fused = self.visual_expert(concat)
  File "/home/pete/anaconda3/envs/mamba/lib/python3.8/site-packages/zeta/nn/modules/visual_expert.py", line 106, in __call__
    normalized = self.norm(x)
  File "/home/pete/anaconda3/envs/mamba/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/pete/anaconda3/envs/mamba/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/pete/anaconda3/envs/mamba/lib/python3.8/site-packages/torch/nn/modules/normalization.py", line 196, in forward
    return F.layer_norm(
  File "/home/pete/anaconda3/envs/mamba/lib/python3.8/site-packages/torch/nn/functional.py", line 2543, in layer_norm
    return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument weight in method wrapper_CUDA__native_layer_norm)
```
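The traceback suggests the input tensor reaches the `VisualExpert` block on `cuda:0` while the block's `LayerNorm` weights are still on the CPU, i.e. the module (or a submodule created inside it) was never moved to the GPU. A minimal sketch of the mismatch and the likely fix, using plain `nn.LayerNorm` rather than the actual `mmm4tsc.py` code (variable names here are illustrative):

```python
import torch
import torch.nn as nn

# Pick the GPU if one is available, otherwise fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

norm = nn.LayerNorm(64)                   # parameters are created on the CPU
x = torch.randn(2, 64, device=device)     # input lives on `device`

# Calling norm(x) now raises the "two devices" RuntimeError when `device`
# is cuda:0, because the LayerNorm weight is still a CPU tensor.

# Likely fix: move the module to the input's device before calling it.
# In the issue's case this would mean something like
# `model = model.to(device)` after the model (and its visual expert
# submodule) has been fully constructed.
norm = norm.to(device)
out = norm(x)
assert out.device == x.device
```

Note that `.to(device)` on the top-level model only moves parameters that are registered as submodules or `nn.Parameter`s; if the library constructs layers lazily inside `forward`/`__call__`, those layers may land on the CPU regardless, which would need a fix in the library itself.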
Upvote & Fund
We're using Polar.sh so you can upvote and help fund this issue.
We receive the funding once the issue is completed & confirmed by you.
Thank you in advance for helping prioritize & fund our backlog.