[Build] Docker build with enable_cuda_profiling failing #18763
Labels
build (build issues; typically submitted using template)
ep:CUDA (issues related to the CUDA execution provider)
model:transformer (issues related to a transformer model: BERT, GPT2, Hugging Face, Longformer, T5, etc.)
Describe the issue
I'm trying to build onnxruntime with CUDA profiling enabled. I use the Dockerfile.cuda file and add the --enable_cuda_profiling argument, but the build fails. I've also tried this several times locally instead of using the Dockerfile, and it fails there too. The build script is the Dockerfile.cuda from the rel-1.16.3 branch, but with the --enable_cuda_profiling flag included. It fails somewhere in the build call; if I run this locally, it fails on flash_fwd_split_hdim128_fp16_sm80.cu here.

Urgency
No response
Target platform
Linux 5.15.0-89-generic x86_64
Build script
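The build script itself was not captured in this report. As context, here is a hedged sketch of what the failing configuration looks like: the build step from Dockerfile.cuda with --enable_cuda_profiling appended. The paths and the other flags are illustrative assumptions, not copied from the rel-1.16.3 file, so the actual Dockerfile line may differ.

```dockerfile
# Hypothetical excerpt of the Dockerfile.cuda build step.
# Only --enable_cuda_profiling is the change under discussion;
# all other flags and paths are illustrative assumptions.
RUN cd /code && \
    ./build.sh \
        --config Release \
        --build_wheel \
        --use_cuda \
        --cuda_home /usr/local/cuda \
        --cudnn_home /usr/lib/x86_64-linux-gnu/ \
        --enable_cuda_profiling
```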
Error / output
Visual Studio Version
No response
GCC / Compiler Version
No response