Add a check on import of the darts theorist: if the user is running the CPU build of torch (an autora dependency), warn them that they should upgrade to the GPU build in order to use GPU resources. I'm not sure whether there is a way to detect GPU availability on the machine. Ideally the warning would only be posted if they are using the CPU build of torch AND they have GPU resources.
We can use torch.cuda.is_available() to check whether CUDA is available. However, I'm not sure whether this is a check on the computer or on PyTorch. In other words, does it check whether the computer has a GPU and the CUDA module, or whether PyTorch is configured and able to communicate with CUDA?
torch.cuda.device_count() returns the number of GPUs available. I assume CUDA has to be properly set up for torch to return this information, though.
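For what it's worth, torch.cuda.is_available() reports whether PyTorch itself can use CUDA, which requires both a CUDA-enabled torch build and a working driver; on a CPU-only build it is always False, so it cannot distinguish "no GPU present" from "wrong build installed". One heuristic is to combine torch.version.cuda (which is None on CPU-only builds) with a probe for the NVIDIA driver utility. A minimal sketch, assuming nvidia-smi on the PATH is an acceptable proxy for GPU hardware (the function name and the heuristic are mine, not part of autora):

```python
import importlib.util
import shutil
import warnings


def warn_if_cpu_torch_with_gpu():
    """Warn if a CPU-only torch build is installed but a GPU appears present.

    Returns True if a warning was issued, False otherwise.
    """
    if importlib.util.find_spec("torch") is None:
        return False  # torch not installed; nothing to check

    import torch

    # torch.version.cuda is None on CPU-only builds of PyTorch.
    cpu_build = torch.version.cuda is None

    # Heuristic (an assumption): if nvidia-smi is on the PATH, the machine
    # likely has an NVIDIA GPU and driver, even though the CPU-only torch
    # build cannot see it via torch.cuda.is_available().
    gpu_likely_present = shutil.which("nvidia-smi") is not None

    if cpu_build and gpu_likely_present:
        warnings.warn(
            "A GPU appears to be present, but the installed torch build is "
            "CPU-only; install the CUDA-enabled build to use GPU resources."
        )
        return True
    return False
```

This could run once at import time of the darts theorist module; the nvidia-smi check is only a proxy, so it may miss GPUs on systems where the driver tools are not on the PATH.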
To test:
Build an environment with the CPU build of torch on an Oscar instance with a GPU and see how these commands behave.
Do the same with the GPU build.
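A small probe script could be run inside each test environment to record what torch reports in the CPU-build vs. GPU-build cases. This is a sketch (the cuda_probe name is mine); it is guarded so it also runs where torch is not installed at all:

```python
import importlib.util


def cuda_probe():
    """Report how torch sees CUDA in this environment, or None if torch is absent."""
    if importlib.util.find_spec("torch") is None:
        return None

    import torch

    return {
        "torch_version": torch.__version__,
        "built_with_cuda": torch.version.cuda,        # None on CPU-only builds
        "cuda_available": torch.cuda.is_available(),  # needs build + driver
        "device_count": torch.cuda.device_count(),
    }


if __name__ == "__main__":
    print(cuda_probe())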