
Backwards convolution layers in CUDA provider should heed cudnn_conv_algo_search #19391

Open
BengtGustafsson opened this issue Feb 2, 2024 · 3 comments
Labels
ep:CUDA (issues related to the CUDA execution provider)

Comments

@BengtGustafsson
Contributor

Describe the issue

Today, conv_transpose.cc always calls cudnnFindConvolutionBackwardDataAlgorithmEx, regardless of the optimization level set in OrtCUDAProviderOptions::cudnn_conv_algo_search (returned by GetCudnnConvAlgo()). In contrast, conv.cc only calls cudnnFindConvolutionForwardAlgorithmEx when the level is OrtCudnnConvAlgoSearchExhaustive.

This causes uncontrollable output differences across GPU hardware, which makes it impossible to test on a set of different systems.

For us it would be acceptable to tie the settings for forward and backward convolutions together, but adding a separate field for the backward-convolution optimization level would also work and would be more backwards compatible. A sketch of the gating we have in mind follows.
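Roughly, we would expect conv_transpose.cc to gate the search the same way conv.cc does. The sketch below is illustrative only: the descriptor and buffer names, and the GetCudnnConvAlgoSearch() accessor, are assumptions rather than the actual identifiers in the file.

```cpp
// Sketch of the proposed gating (names are illustrative, not the
// actual variables in conv_transpose.cc).
cudnnConvolutionBwdDataAlgoPerf_t perf{};
int algo_count = 0;
if (GetCudnnConvAlgoSearch() == OrtCudnnConvAlgoSearchExhaustive) {
  // Exhaustive: benchmark candidate algorithms on real buffers.
  // This is what conv_transpose.cc does unconditionally today.
  CUDNN_RETURN_IF_ERROR(cudnnFindConvolutionBackwardDataAlgorithmEx(
      cudnn_handle, w_desc, w_data, dy_desc, dy_data, conv_desc,
      dx_desc, dx_data, /*requestedAlgoCount*/ 1, &algo_count, &perf,
      workspace, workspace_size));
} else {
  // Heuristic / default: choose an algorithm deterministically, without
  // benchmarking, so results are reproducible across runs and GPUs.
  CUDNN_RETURN_IF_ERROR(cudnnGetConvolutionBackwardDataAlgorithm_v7(
      cudnn_handle, w_desc, dy_desc, conv_desc, dx_desc,
      /*requestedAlgoCount*/ 1, &algo_count, &perf));
}
```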

To reproduce

Run a DNN containing a backwards convolution (ConvTranspose) layer with optimization level DEFAULT and notice that the numerical result varies from run to run on newer hardware, and even more so between different GPU generations. A minimal session setup is sketched below.

For networks without backwards convolution layers this does not happen at the DEFAULT level.
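For reference, a session configured this way looks roughly like the following; the model path is a placeholder, and the rest uses the standard C++ API.

```cpp
#include <onnxruntime_cxx_api.h>

int main() {
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "repro");
  Ort::SessionOptions session_options;

  OrtCUDAProviderOptions cuda_options{};
  cuda_options.device_id = 0;
  // Ask for the non-exhaustive search. conv.cc honors this setting,
  // but conv_transpose.cc still runs the exhaustive Find path.
  cuda_options.cudnn_conv_algo_search = OrtCudnnConvAlgoSearchDefault;

  session_options.AppendExecutionProvider_CUDA(cuda_options);
  Ort::Session session(env, ORT_TSTR("model.onnx"), session_options);

  // Run the same input repeatedly (and on different GPUs) and compare
  // outputs; networks with ConvTranspose nodes show run-to-run drift.
  return 0;
}
```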

Urgency

Our project deadline is the end of Q1, and we don't know how we will be able to test our complete system on different hardware while this problem undermines the consistency of our output data.

Platform

Other / Unknown

OS Version

All

ONNX Runtime Installation

Released Package

ONNX Runtime Version or Commit ID

1.16.3

ONNX Runtime API

C++

Architecture

X64

Execution Provider

CUDA

Execution Provider Library Version

No response

@snnn added the ep:CUDA label Feb 2, 2024

github-actions bot commented Mar 4, 2024

This issue has been automatically marked as stale due to inactivity and will be closed in 30 days if no further activity occurs. If further support is needed, please provide an update and/or more details.

@github-actions bot added the stale label Mar 4, 2024
@BengtGustafsson
Contributor Author

It would be good to get a reaction on this. Do you think this is a good suggestion? Should it be a separate setting?

@github-actions bot removed the stale label Mar 11, 2024
@BengtGustafsson
Contributor Author

Without this feature we could not ship the CUDA provider for our product and had to make do with DirectML even on NVIDIA devices.

I don't know what to do when no one even looks at the issue for six months. Do I have to implement it myself and try to get a PR through just to get some reaction?
