Is it possible to use double-precision floating point on the GPU?
As I understand it, this is impossible in cutorch, since CudaTensor is a single-precision floating-point tensor.
And what about cltorch? Are there any plans to add it?
I wasn't planning on adding it... mostly because I'm targeting neural nets, and the latest trend is to go to lower precision, i.e. fp16. Do you mind if I ask about your use case? That doesn't mean I will suddenly jump up and do it, but it would be good to at least understand what you are trying to achieve.
I would be in favour of just making the precision selectable, including both 64-bit doubles and 16-bit halves. As someone relatively new to neural networks and Torch, I like to experiment to see what the differences are. I approach this scientifically, so I would like to run my network at 16-bit, 32-bit, and 64-bit precision and compare the results. I don't want to just take people at their word that smaller is better; I want to see it for myself. It helps me learn and understand. For the float-vs-double part, a CPU-side comparison is already possible today; see the sketch below.
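In the meantime, here is a minimal sketch of that kind of precision experiment on the CPU, using only stock torch/nn (the network shape, batch size, and seed are arbitrary choices for illustration): it evaluates the same tiny net at float and double precision and reports the largest output discrepancy.

```lua
require 'torch'
require 'nn'

-- Build an identically-initialized tiny net at the requested precision
-- and run a fixed random batch through it. Re-seeding before each build
-- guarantees both nets start from the same weights and input.
local function evalAt(precision)
  torch.manualSeed(0)
  local net = nn.Sequential()
    :add(nn.Linear(10, 20))
    :add(nn.Tanh())
    :add(nn.Linear(20, 1))
  local input = torch.randn(4, 10)  -- DoubleTensor by default
  if precision == 'double' then
    net:double(); input = input:double()
  else
    net:float();  input = input:float()
  end
  -- clone() because the module reuses its output buffer across calls
  return net:forward(input):clone()
end

local outDouble = evalAt('double')
local outFloat  = evalAt('float')
print('max abs float-vs-double difference:',
      (outDouble - outFloat:double()):abs():max())
```

As far as I know, stock CPU torch has no half-precision tensor type, so the 16-bit leg of the comparison would still need backend (GPU) support of the kind discussed in this issue.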