When I tried to run DAGMM-LSTM on GPU I got the following message:
can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first
I faced the same issue with the plain DAGMM. Worked around this by adding .cpu() and .cuda() as follows in src/algorithms/dagmm.py:

Line 231: pinv = np.linalg.pinv(cov_k.data.cpu().numpy())
Line 243: cov_inverse = torch.cat(cov_inverse, dim=0).cuda()
I hope this helps you too.
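For anyone hitting this elsewhere: the pattern behind the workaround can be sketched device-agnostically. This is an illustrative snippet, not the actual code from dagmm.py; `cov_k` here is just a stand-in matrix, and `.to(device)` is used instead of a hard-coded `.cuda()` so it also runs on CPU-only machines.

```python
import numpy as np
import torch

# NumPy cannot read GPU memory, so a CUDA tensor must first be copied to
# host memory with .cpu() before .numpy() is called on it. Calling
# cov_k.numpy() directly on a CUDA tensor raises the reported error.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

cov_k = torch.eye(3, device=device) * 2.0  # stand-in positive-definite matrix

# Host-side pseudo-inverse: detach from the graph, copy to CPU, convert.
pinv = np.linalg.pinv(cov_k.detach().cpu().numpy())

# Move the result back to the original device for further torch ops.
cov_inverse = torch.from_numpy(pinv).to(device)
```

The same `tensor.detach().cpu().numpy()` / `torch.from_numpy(...).to(device)` round trip applies wherever a NumPy routine is called on model tensors.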
This works, thanks!