-
Hi~ sorry for my late reply. We were going to solve this issue with a PR (setting any environment variable from the configuration file). But that solution seems a bit dangerous — do you have any opinion on this?
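One way to limit the danger of letting a config file set arbitrary environment variables would be to restrict it to an allow-list of known-safe keys. A minimal sketch (the function name and the allow-list contents are hypothetical, not from any existing PR):

```python
import os

# Hypothetical allow-list: only variables we know are safe to set.
# NVIDIA_TF32_OVERRIDE and CUBLAS_WORKSPACE_CONFIG are real CUDA
# environment variables; the set here is just an illustration.
ALLOWED_ENV_VARS = {"NVIDIA_TF32_OVERRIDE", "CUBLAS_WORKSPACE_CONFIG"}

def apply_env_from_config(env_cfg: dict) -> None:
    """Set environment variables from a config mapping, refusing
    anything that is not explicitly allow-listed."""
    for key, value in env_cfg.items():
        if key not in ALLOWED_ENV_VARS:
            raise ValueError(f"Refusing to set non-allow-listed variable: {key}")
        os.environ[key] = value

# Example: disable TF32 globally via CUDA's own override variable.
apply_env_from_config({"NVIDIA_TF32_OVERRIDE": "0"})
```

This keeps the convenience of a config-driven setup while making the set of touchable variables an explicit, reviewable list.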
-
Hi, I'm not sure about the best way to do it. A (maybe naive) way might just be something like this:

```python
if cfg.cuda_matmul_tf32:
    torch.backends.cuda.matmul.allow_tf32 = True
else:
    torch.backends.cuda.matmul.allow_tf32 = False

if cfg.cudnn_tf32:
    torch.backends.cudnn.allow_tf32 = True
else:
    torch.backends.cudnn.allow_tf32 = False
```

The feature would be greatly appreciated in any form though, since the speedups are very significant.
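Since `allow_tf32` is a plain boolean flag, the if/else pairs above can be collapsed to direct assignment. A self-contained sketch, where the `Config` dataclass and its field names are hypothetical stand-ins for whatever config object the project actually uses:

```python
from dataclasses import dataclass

import torch

@dataclass
class Config:
    # Hypothetical field names, mirroring the snippet above.
    cuda_matmul_tf32: bool = True
    cudnn_tf32: bool = True

def apply_tf32(cfg: Config) -> None:
    # allow_tf32 is a boolean attribute, so no branching is needed.
    torch.backends.cuda.matmul.allow_tf32 = cfg.cuda_matmul_tf32
    torch.backends.cudnn.allow_tf32 = cfg.cudnn_tf32

# Example: explicitly disable TF32 for both matmul and cuDNN.
apply_tf32(Config(cuda_matmul_tf32=False, cudnn_tf32=False))
```

These flags can be set on any PyTorch build; they only take effect when kernels actually run on Ampere-or-newer GPUs.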
-
Thanks