CMakeLists.txt needs to be modified, as also suggested in #116 (comment). Otherwise test_gpu does not pass and CTCLoss returns 0.0.
```cmake
IF (CUDA_VERSION GREATER 7.6)
    set(CUDA_NVCC_FLAGS "${CUDA_NVCC_FLAGS} -gencode arch=compute_60,code=sm_60")
    set(CUDA_NVCC_FLAGS "${CUDA_NVCC_FLAGS} -gencode arch=compute_61,code=sm_61")
    set(CUDA_NVCC_FLAGS "${CUDA_NVCC_FLAGS} -gencode arch=compute_62,code=sm_62")
    set(CUDA_NVCC_FLAGS "${CUDA_NVCC_FLAGS} -gencode arch=compute_70,code=sm_70") ## new line
ENDIF()
```
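To check which `-gencode` entry your GPU actually needs, you can query its compute capability. A minimal sketch, assuming PyTorch with CUDA support is installed (the device index `0` is just an example):

```python
import torch

# Query the compute capability of the first CUDA device.
# A result of (7, 0) means compute_70 / sm_70 must be covered by CUDA_NVCC_FLAGS;
# if the kernels are built without code for this architecture, CTCLoss can
# silently return 0.0, as described above.
major, minor = torch.cuda.get_device_capability(0)
print(f"compute capability: {major}.{minor} -> needs sm_{major}{minor}")
```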
You can use CTC loss in PyTorch 1.0.1 directly, without warp_ctc.
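For reference, a minimal sketch of the built-in loss, assuming PyTorch >= 1.0 (the tensor shapes and blank index below are illustrative, not taken from this project):

```python
import torch
import torch.nn as nn

T, N, C, S = 50, 16, 20, 30      # input length, batch size, classes (incl. blank), max target length
ctc_loss = nn.CTCLoss(blank=0)   # built-in CTC loss, no warp_ctc binding required

# (T, N, C) log-probabilities over classes at each time step
log_probs = torch.randn(T, N, C, requires_grad=True).log_softmax(2)
# Padded targets, excluding the blank index 0
targets = torch.randint(1, C, (N, S), dtype=torch.long)
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.randint(10, S + 1, (N,), dtype=torch.long)

loss = ctc_loss(log_probs, targets, input_lengths, target_lengths)
loss.backward()
```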
warp-ctc already supports this compute architecture; the change was made here.