Describe the issue
When running my ONNX model from C++ on the CPU, everything works perfectly. However, when running it with the CUDA execution provider, it throws this error:
To reproduce
The model takes two inputs, an `l_x_` tensor of shape (1, 3, 128, 256) and an `l_k_` tensor of shape (1, 1, 32, 32), and produces one output, `p_8`, of shape (1, 3, 128, 256). The model is uploaded to Drive here. It was exported from the usrnet repo using the dynamo exporter with opset 18.
The following python code helps reproduce the issue:
Urgency
Yes
Platform
Windows
OS Version
11
ONNX Runtime Installation
Released Package
ONNX Runtime Version or Commit ID
1.18.0
ONNX Runtime API
C++
Architecture
X64
Execution Provider
CUDA
Execution Provider Library Version
CUDA 11.6