
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! #30

Open
trra1988 opened this issue Apr 23, 2021 · 2 comments

Comments

@trra1988

Thanks for releasing the code. I have a small question: how do I set the GPU to train the model? When I start training, the following error shows up.

"""
The device argument should be set by using torch.device or passing a string as an argument. This behavior will be deprecated soon and currently defaults to cpu.
training model...
Traceback (most recent call last): ] 0% loss = ...
File "train.py", line 183, in
main()
File "train.py", line 111, in main
train_model(model, opt)
File "train.py", line 34, in train_model
src_mask, trg_mask = create_masks(src, trg_input, opt)
File "/home/lin/program/Transformer-master/Batch.py", line 26, in create_masks
trg_mask = trg_mask & np_mask
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
"""

@A-Kerim

A-Kerim commented Apr 23, 2021

@trra1988 you may need to use:

src_mask, trg_mask = create_masks(src.cuda(), trg_input.cuda(), opt)

instead of

src_mask, trg_mask = create_masks(src, trg_input, opt)
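
Alternatively, you can fix this inside `create_masks` itself so it works no matter which device the batch is on: build the "no peeking" mask and then move it to the same device as `trg_mask` before combining the two. Here is a minimal sketch, assuming `opt` carries the pad-token indices as `opt.src_pad` and `opt.trg_pad` (check how your `Batch.py` actually names them):

```python
import torch

def create_masks(src, trg, opt):
    # Assumption: opt.src_pad / opt.trg_pad hold the <pad> token indices.
    # Padding mask for the source sequence.
    src_mask = (src != opt.src_pad).unsqueeze(-2)

    trg_mask = None
    if trg is not None:
        # Padding mask for the target sequence.
        trg_mask = (trg != opt.trg_pad).unsqueeze(-2)
        size = trg.size(1)  # target sequence length
        # Upper-triangular "no peeking" mask; torch.ones creates it on the CPU.
        np_mask = torch.triu(torch.ones(1, size, size), diagonal=1) == 0
        # Move it to whatever device trg_mask lives on, so the `&` below
        # never mixes cuda:0 and cpu tensors.
        np_mask = np_mask.to(trg_mask.device)
        trg_mask = trg_mask & np_mask

    return src_mask, trg_mask
```

This way you don't have to call `.cuda()` at every call site, and the same code still runs on a CPU-only machine.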

@trra1988
Author

@A-Kerim
The question is solved, thanks!
