Pytorch + cutlass : RuntimeError: CUDA Error cudaError_t.cudaSuccess #974
Replies: 5 comments 2 replies
-
Hi, @parshakova. Could you please provide the following information:
-
The output of the print statement is …
-
Thanks, @parshakova. Taking a closer look at the original screenshot you posted, it appears that the call to … I tested out the following in CUDA Python 12.1 and 11.5:

```python
print(cuda.CUresult.CUDA_SUCCESS == cudart.cudaError_t.cudaSuccess)
```

With 12.1, it prints `False`. In any case, I expect that this could be circumvented by comparing the enum values. The following prints `True`:

```python
print(cuda.CUresult.CUDA_SUCCESS.value == cudart.cudaError_t.cudaSuccess.value)
```

Would you be willing to try out changing the line here to the following?

```diff
  err, = cudart.cudaDeviceSynchronize()
- if err != cuda.CUresult.CUDA_SUCCESS:
+ if err.value != cuda.CUresult.CUDA_SUCCESS.value:
      raise RuntimeError("CUDA Error %s" % str(err))
```

You may also need to make a similar change for normal GEMMs here and here.
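The pitfall behind this fix can be reproduced with plain Python enums, independent of any GPU. This is a standalone sketch, not the actual cuda-python bindings; the two enum classes below are hypothetical stand-ins for `cuda.CUresult` and `cudart.cudaError_t`:

```python
from enum import Enum

# Stand-ins for the driver-API and runtime-API status enums.
# Both use 0 for success, but they are distinct Enum classes.
class CUresult(Enum):
    CUDA_SUCCESS = 0

class cudaError_t(Enum):
    cudaSuccess = 0

# Members of different Enum classes never compare equal under ==,
# even when their underlying integer values match...
print(CUresult.CUDA_SUCCESS == cudaError_t.cudaSuccess)              # False

# ...but comparing the raw .value sidesteps the class mismatch.
print(CUresult.CUDA_SUCCESS.value == cudaError_t.cudaSuccess.value)  # True
```

This is why checking `err.value != cuda.CUresult.CUDA_SUCCESS.value` works even when `err` is a `cudart.cudaError_t` member rather than a `cuda.CUresult` one.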
-
Thanks, it runs now. But I run into a new problem:
-
Following this notebook: https://github.com/NVIDIA/cutlass/blob/main/examples/python/02_pytorch_extension_grouped_gemm.ipynb

When running it, the last line gives the error `RuntimeError: CUDA Error cudaError_t.cudaSuccess`. Please see the screenshot. Does anyone know how I can fix this error?