Mac can't find libonnxruntime_providers_shared.dylib #17532

Describe the issue
I'm trying to run a model using CoreML. Some of the ops aren't supported, so I guess it's trying to fall back to the CPU for parts of the model. It tries to call
dlopen(libonnxruntime_providers_shared.dylib)
and fails. The full trace:

Interestingly, if I run on CPU only, I don't get this error...

To reproduce
I'm running yolov5n.onnx

Urgency
ASAP

Platform
Mac

OS Version
Ventura 13.2.1

ONNX Runtime Installation
Released Package

ONNX Runtime Version or Commit ID
1.15.1

ONNX Runtime API
C++

Architecture
ARM64

Execution Provider
CoreML

Execution Provider Library Version
CoreML 1.0.0

Comments
I've built onnxruntime from source on a Mac and it doesn't produce the library.
onnxruntime_providers_coreml is a static library. It should not need to use libonnxruntime_providers_shared.dylib.
@skottmckay, could you please help?
It looks like there is some code buried somewhere that calls dlopen on that library.
In onnxruntime_providers.cmake we have:

```cmake
if (NOT onnxruntime_MINIMAL_BUILD AND NOT onnxruntime_EXTENDED_MINIMAL_BUILD
    AND NOT ${CMAKE_SYSTEM_NAME} MATCHES "Darwin|iOS"
    AND NOT CMAKE_SYSTEM_NAME STREQUAL "Android"
    AND NOT CMAKE_SYSTEM_NAME STREQUAL "Emscripten")
  onnxruntime_add_shared_library(onnxruntime_providers_shared ...)
endif()
```

So the library is never built on macOS. However, in setup.py we have entries like:

```python
libs.extend(["libonnxruntime_providers_shared.dylib"])
libs.extend(["libonnxruntime_providers_dnnl.dylib"])
libs.extend(["libonnxruntime_providers_tensorrt.dylib"])
libs.extend(["libonnxruntime_providers_cuda.dylib"])
```

These probably don't work. I know macOS doesn't have CUDA, and certainly not TensorRT, but what about DNNL? @jywu-msft, should we remove this code, or should we make it work on macOS? @RyanUnderhill, is there a reason why building the onnxruntime_providers_shared library is disabled on macOS?
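If the entries are kept, one option for making them consistent with the CMake guard is to package only the provider libraries the build actually produced. A rough sketch, in which the helper name and the build_dir parameter are assumptions rather than the actual setup.py structure:

```python
import os
import platform


def built_provider_libs(build_dir):
    """List only the provider bridge libraries the build actually
    produced, so packaging stays consistent with the CMake guards.
    Hypothetical helper; the real setup.py is structured differently."""
    ext = ".dylib" if platform.system() == "Darwin" else ".so"
    candidates = [
        "libonnxruntime_providers_shared",
        "libonnxruntime_providers_dnnl",
        "libonnxruntime_providers_tensorrt",
        "libonnxruntime_providers_cuda",
    ]
    return [name + ext for name in candidates
            if os.path.exists(os.path.join(build_dir, name + ext))]
```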
Is there a reason why it only looks for the library when using CoreML, but not when using CPU only?
Is CoreML officially supported, or is it a beta feature?
Since you built it from source, can you set a breakpoint at onnxruntime/core/session/provider_bridge_ort.cc:1080 and give us the call stack?
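For reference, a minimal lldb session for this could look like the following; the binary name and arguments are placeholders. Once the breakpoint is hit, `bt` prints the call stack:

```
$ lldb -- ./your_app yolov5n.onnx
(lldb) breakpoint set --file provider_bridge_ort.cc --line 1080
(lldb) run
(lldb) bt
```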
I'll have to rebuild in Debug, but sure, will do.
Thank you!
Is this helpful?
Ok, this is all my fault. My code was accidentally trying to use CUDA: it chooses an execution provider at runtime depending on some args, and I had the arguments the wrong way round, which sent it down the CUDA path. It all works now. Sorry for the confusion.
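For anyone who hits the same error, here is a rough sketch of that kind of runtime provider selection; the function name and use_coreml flag are illustrative, not the code from this report:

```cpp
// Rough sketch of runtime execution-provider selection; the function
// name and use_coreml flag are illustrative, not the reporter's code.
#include <onnxruntime_cxx_api.h>
#include <coreml_provider_factory.h>

Ort::SessionOptions MakeSessionOptions(bool use_coreml) {
  Ort::SessionOptions so;
  if (use_coreml) {
    // The CoreML EP is statically linked, so no dlopen is involved.
    Ort::ThrowOnError(
        OrtSessionOptionsAppendExecutionProvider_CoreML(so, /*coreml_flags=*/0));
  } else {
    // Requesting CUDA makes the provider bridge try to load
    // libonnxruntime_providers_shared.dylib, which is never built on
    // macOS, so this path produces the error from this issue.
    so.AppendExecutionProvider_CUDA(OrtCUDAProviderOptions{});
  }
  return so;
}
```

Taking the CUDA branch on macOS is what triggers the dlopen of libonnxruntime_providers_shared, hence the error above.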