Describe the issue
Today conv_transpose.cc always calls cudnnFindConvolutionBackwardEx, regardless of the optimization level reported by OrtCUDAProviderOptions::cudnn_conv_algo_search (returned by GetCudnnConvAlgo()). This is unlike conv.cc, where cudnnFindConvolutionForwardAlgorithmEx is only called at the OrtCudnnConvAlgoSearchExhaustive level.
This causes uncontrollable output differences across different GPU hardware, which makes it impossible to test on a set of different systems.
For us it would be acceptable to gang the settings for forward and backward convolutions, but adding a separate field for the backward-convolution optimization level would also work and would be more backward compatible. A sketch of how we currently set the option is shown below.
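
For completeness, this is roughly how we configure the search level from the C++ API; with this in place the Conv nodes become repeatable, but ConvTranspose nodes still run the exhaustive search. This is a minimal sketch: the model path is a placeholder.

```cpp
#include <onnxruntime_cxx_api.h>

int main() {
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "convtranspose-repro");
  Ort::SessionOptions session_options;

  // Ask for the non-exhaustive algorithm search. conv.cc honors this,
  // but conv_transpose.cc runs the exhaustive search anyway.
  OrtCUDAProviderOptions cuda_options{};
  cuda_options.device_id = 0;
  cuda_options.cudnn_conv_algo_search = OrtCudnnConvAlgoSearchDefault;
  session_options.AppendExecutionProvider_CUDA(cuda_options);

  // "model.onnx" is a placeholder; on Windows the path must be a wide string.
  Ort::Session session(env, "model.onnx", session_options);
  return 0;
}
```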
To reproduce
Run a DNN containing a backward (transposed) convolution layer with optimization level DEFAULT and notice that the numerical result varies from run to run on newer hardware, and even more so between different GPU generations. A sketch of the check we run follows this section.
For networks without backward convolution layers this does not happen at the DEFAULT level.
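
The check we use to see the problem is essentially the following. This is a minimal sketch under assumptions: the model file name, the input/output names "X"/"Y", and the input shape are placeholders for any network containing a ConvTranspose node.

```cpp
#include <onnxruntime_cxx_api.h>
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// Run the model once with a fixed input and return the first output as a flat vector.
static std::vector<float> RunOnce(Ort::Session& session) {
  std::vector<int64_t> shape{1, 1, 8, 8};         // placeholder input shape
  std::vector<float> input(1 * 1 * 8 * 8, 0.5f);  // constant input data

  Ort::MemoryInfo mem_info =
      Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
  Ort::Value input_tensor = Ort::Value::CreateTensor<float>(
      mem_info, input.data(), input.size(), shape.data(), shape.size());

  const char* input_names[] = {"X"};   // placeholder input name
  const char* output_names[] = {"Y"};  // placeholder output name
  auto outputs = session.Run(Ort::RunOptions{nullptr}, input_names, &input_tensor, 1,
                             output_names, 1);

  const float* data = outputs[0].GetTensorData<float>();
  size_t count = outputs[0].GetTensorTypeAndShapeInfo().GetElementCount();
  return std::vector<float>(data, data + count);
}

int main() {
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "convtranspose-repro");
  Ort::SessionOptions session_options;

  OrtCUDAProviderOptions cuda_options{};
  cuda_options.cudnn_conv_algo_search = OrtCudnnConvAlgoSearchDefault;
  session_options.AppendExecutionProvider_CUDA(cuda_options);

  // Two identical sessions, identical input: with a ConvTranspose node in the
  // model the outputs can still differ because the algorithm search is exhaustive.
  Ort::Session session_a(env, "conv_transpose.onnx", session_options);
  Ort::Session session_b(env, "conv_transpose.onnx", session_options);

  std::vector<float> a = RunOnce(session_a);
  std::vector<float> b = RunOnce(session_b);

  float max_diff = 0.0f;
  for (size_t i = 0; i < a.size(); ++i)
    max_diff = std::max(max_diff, std::fabs(a[i] - b[i]));
  std::printf("max abs difference between runs: %g\n", max_diff);
  return 0;
}
```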
Urgency
Our project deadline is the end of Q1, and we do not know how we will be able to test our complete system on different hardware while this problem undermines the consistency of our output data.
Platform
Other / Unknown
OS Version
All
ONNX Runtime Installation
Released Package
ONNX Runtime Version or Commit ID
1.16.3
ONNX Runtime API
C++
Architecture
X64
Execution Provider
CUDA
Execution Provider Library Version
No response