Below is the error I'm getting when trying to use the exported model in ONNX format.
1 - Training and inferring on GPU using PyTorch works fine.
2 - After getting this error, I verified that the path leads to the correct CUDA installation folder, and I have reinstalled CUDA.
3 - I downloaded cuDNN and added it to the environment variables and PATH under the name CUDNN_HOME, following a suggestion from an older issue about the same problem.
4 - I've restarted the computer to ensure the new PATH is actually loaded.
5 - I ran Dependency Walker on the bin folders of both CUDA and cuDNN; CUDA had a missing dependency for nvprof.exe, which I solved by adding a directory containing that library to PATH.
6 - I've tried adding the cuDNN libraries, binaries, and headers directly into the CUDA folder.
The conda environment was created just for this project, so it's a clean one; that's why I haven't tried starting a new one.
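Since ONNX Runtime on Windows resolves the CUDA/cuDNN DLLs through PATH at session-creation time, one quick sanity check for the steps above is to scan every PATH entry for the expected DLLs. This is only a sketch: the exact DLL names below (e.g. `cudnn64_8.dll` vs `cudnn64_9.dll`) depend on which CUDA/cuDNN versions are installed, so substitute the names actually present in your bin folders.

```python
import os

def find_dll_on_path(dll_names, path_env=None):
    """Return {dll_name: directory_where_found_or_None} by scanning PATH entries."""
    if path_env is None:
        path_env = os.environ.get("PATH", "")
    found = {name: None for name in dll_names}
    for directory in path_env.split(os.pathsep):
        for name in dll_names:
            if found[name] is None and os.path.isfile(os.path.join(directory, name)):
                found[name] = directory
    return found

if __name__ == "__main__":
    # Example DLL names for a CUDA 12 setup -- adjust to your installed versions.
    report = find_dll_on_path(["cudart64_12.dll", "cublas64_12.dll", "cudnn64_8.dll"])
    for name, where in report.items():
        print(f"{name}: {where or 'NOT FOUND on PATH'}")
```

If any of these come back as not found, ONNX Runtime will fail to initialize the CUDA execution provider no matter what CUDA_PATH or CUDNN_HOME are set to.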
To reproduce
from ultralytics import YOLOv10
model = YOLOv10("../models/best_v10n_2.pt", verbose=False)
result = model.val(data="preprocessed_annotations/data.yaml", amp=True)
model.export(format='onnx')
model = YOLOv10("models/best_v10n_2.onnx", verbose=False, task='detect')
model.val(data="data/preprocessed_annotations/data.yaml")
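To see the real Windows loader error that ONNX Runtime's generic message hides, you can also try loading the suspect DLLs directly with `ctypes`; on failure the raised `OSError` usually names the exact missing dependency. A rough sketch (the DLL names in the example list are assumptions for a CUDA 12 install, not taken from this issue):

```python
import ctypes

def try_load(dll_name):
    """Attempt to load a shared library the same way the OS loader would,
    printing the loader's own error message on failure."""
    try:
        ctypes.CDLL(dll_name)
        print(f"{dll_name}: loaded OK")
        return True
    except OSError as e:
        print(f"{dll_name}: FAILED -> {e}")
        return False

if __name__ == "__main__":
    # Example names only -- substitute the DLLs from your CUDA/cuDNN bin folders.
    for dll in ["cudart64_12.dll", "cublas64_12.dll", "cudnn64_8.dll"]:
        try_load(dll)
```

Running this in the same conda environment (and the same terminal session) used for the repro above keeps the PATH identical to what ONNX Runtime sees.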
Outputs :
*************** EP Error ***************
EP Error D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:891 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasn't able to be loaded. Please install the correct version of CUDA and cuDNN as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.
when using ['CUDAExecutionProvider', 'CPUExecutionProvider']
Falling back to ['CUDAExecutionProvider', 'CPUExecutionProvider'] and retrying.
****************************************
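One more cause worth ruling out for this exact fallback message is having both the CPU wheel (`onnxruntime`) and the GPU wheel (`onnxruntime-gpu`) installed in the same environment, where the CPU build can shadow the GPU one. A quick stdlib-only check, just as a sketch:

```python
from importlib import metadata

def installed_onnxruntime_dists():
    """List every installed distribution whose name starts with 'onnxruntime'.
    Seeing both 'onnxruntime' and 'onnxruntime-gpu' here is a red flag:
    uninstall both, then reinstall only onnxruntime-gpu."""
    names = []
    for dist in metadata.distributions():
        name = (dist.metadata["Name"] or "").lower()
        if name.startswith("onnxruntime"):
            names.append(f"{name}=={dist.version}")
    return sorted(names)

if __name__ == "__main__":
    print(installed_onnxruntime_dists())
```

It is also worth double-checking the installed `onnxruntime-gpu` build against the CUDA/cuDNN versions listed on the requirements page linked in the error, since the supported cuDNN major version differs between ONNX Runtime releases.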
Urgency
Yes, I have a project submission that is due next Thursday.
Platform
Windows
OS Version
10.0.19045 Build 19045
ONNX Runtime Installation
Released Package
ONNX Runtime Version or Commit ID
1.18.1
ONNX Runtime API
Python
Architecture
X64
Execution Provider
CUDA
Execution Provider Library Version
CUDA 12.5