
Segmentation fault with CUDA execution provider #19165

Open
prattcmp opened this issue Jan 16, 2024 · 4 comments
Labels
ep:CUDA (issues related to the CUDA execution provider), stale (issues that have not been addressed in a while; categorized by a bot)

Comments

@prattcmp

Describe the issue

The ONNX Runtime produces a segmentation fault when used with the CUDA execution provider. A GDB backtrace shows:

```
Successfully registered `CUDAExecutionProvider`
Successfully registered `CPUExecutionProvider`
new_cpu; allocator=Arena memory_type=Default
new_cpu; allocator=Arena memory_type=Default
new_cpu; allocator=Arena memory_type=Default
new_cpu; allocator=Arena memory_type=Default
new_cpu; allocator=Arena memory_type=Default
new_cpu; allocator=Arena memory_type=Default
Segmentation fault (core dumped)
#0  0x00007fc64625c520 in onnxruntime::common::Status onnxruntime::contrib::cuda::QkvToContext<__half>(cudaDeviceProp const&, cublasContext*&, onnxruntime::Stream*, onnxruntime::contrib::AttentionParameters&, onnxruntime::contrib::cuda::AttentionData<__half>&) () from libonnxruntime_providers_cuda.so
#1  0x00007fc6461acc7b in onnxruntime::contrib::cuda::Attention<onnxruntime::MLFloat16>::ComputeInternal(onnxruntime::OpKernelContext*) const () from libonnxruntime_providers_cuda.so
#2  0x00007fc645c35b86 in onnxruntime::cuda::CudaKernel::Compute(onnxruntime::OpKernelContext*) const () from libonnxruntime_providers_cuda.so
#3  0x00007fc671c4dd0d in ?? () from /libonnxruntime.so
#4  0x00007fc671c42ea9 in ?? () from /libonnxruntime.so
#5  0x00007fc671c51a4d in ?? () from /libonnxruntime.so
```

To reproduce

  1. Install ONNX Runtime onto an AWS EKS container.
  2. Load the BAAI/bge-base-en-v1.5 model.
  3. Run inference using an inference session (a minimal sketch of this setup follows).
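For completeness, here is a minimal sketch of that setup, assuming the standard ONNX Runtime C++ API. The model path, token values, and input/output names are illustrative (the usual HF export of bge-base-en-v1.5), not taken from the original report:

```cpp
#include <onnxruntime_cxx_api.h>

#include <array>
#include <vector>

int main() {
  Ort::Env env(ORT_LOGGING_LEVEL_VERBOSE, "bge");

  Ort::SessionOptions opts;
  OrtCUDAProviderOptions cuda_opts{};            // device_id 0 by default
  opts.AppendExecutionProvider_CUDA(cuda_opts);  // CPU EP remains as fallback

  Ort::Session session(env, "bge-base-en-v1.5.onnx", opts);  // illustrative path

  // BERT-style inputs: input_ids, attention_mask, token_type_ids.
  std::vector<int64_t> ids{101, 7592, 102};  // illustrative token ids
  std::vector<int64_t> mask(ids.size(), 1);
  std::vector<int64_t> types(ids.size(), 0);
  std::array<int64_t, 2> shape{1, static_cast<int64_t>(ids.size())};

  auto mem = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
  std::vector<Ort::Value> inputs;
  inputs.push_back(Ort::Value::CreateTensor<int64_t>(
      mem, ids.data(), ids.size(), shape.data(), shape.size()));
  inputs.push_back(Ort::Value::CreateTensor<int64_t>(
      mem, mask.data(), mask.size(), shape.data(), shape.size()));
  inputs.push_back(Ort::Value::CreateTensor<int64_t>(
      mem, types.data(), types.size(), shape.data(), shape.size()));

  const char* in_names[] = {"input_ids", "attention_mask", "token_type_ids"};
  const char* out_names[] = {"last_hidden_state"};  // illustrative output name

  // The reported segfault occurs inside this call, in the CUDA Attention
  // kernel (QkvToContext<__half> in the backtrace above).
  auto outputs = session.Run(Ort::RunOptions{nullptr}, in_names, inputs.data(),
                             inputs.size(), out_names, 1);
  (void)outputs;
  return 0;
}
```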

Urgency

This is blocking us from deploying to GPU in production.

Platform

Linux

OS Version

Ubuntu 22.04

ONNX Runtime Installation

Released Package

ONNX Runtime Version or Commit ID

1.16.3

ONNX Runtime API

C++

Architecture

X64

Execution Provider

CUDA

Execution Provider Library Version

CUDA 11.6

github-actions bot added the ep:CUDA label Jan 16, 2024
@xadupre (Member) commented Jan 17, 2024

Did you try on CPU to see whether it is related to the kernel implementation in CUDA or to an error in the model?

@prattcmp (Author) replied:

> Did you try on CPU to see whether it is related to the kernel implementation in CUDA or to an error in the model?

Yes, it works fine on CPU.
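(For anyone triaging a similar crash: the CPU control run is the same setup with the CUDA registration simply omitted. A sketch, reusing the illustrative names from the snippet above:)

```cpp
// CPU-only control: identical session, but AppendExecutionProvider_CUDA is
// never called, so the default CPU EP runs every kernel.
Ort::SessionOptions cpu_opts;
Ort::Session cpu_session(env, "bge-base-en-v1.5.onnx", cpu_opts);
// Running the same inputs through cpu_session completes without a segfault,
// which points at the CUDA Attention kernel or its environment rather than
// the model itself.
```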

@prattcmp (Author) commented:

This appears to be an issue with the K8s Device Plugin.
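(For context: the NVIDIA device plugin is what advertises GPUs to the kubelet as the `nvidia.com/gpu` resource, so a pod only gets a usable device if the plugin is healthy and the pod requests one. A sketch of a typical pod spec, with illustrative names:)

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: onnx-gpu-inference                 # illustrative name
spec:
  containers:
    - name: inference
      image: registry.example.com/onnx-inference:latest  # illustrative image
      resources:
        limits:
          nvidia.com/gpu: 1  # served by the NVIDIA k8s-device-plugin DaemonSet
```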

github-actions bot commented:
This issue has been automatically marked as stale due to inactivity and will be closed in 30 days if no further activity occurs. If further support is needed, please provide an update and/or more details.

github-actions bot added the stale label Feb 17, 2024