Loading .onnx model on AWS Lambda image returns error. #18860
Labels
ep:OpenVINO
issues related to OpenVINO execution provider
stale
issues that have not been addressed in a while; categorized by a bot
Describe the issue
I'm trying to load a .onnx model and then run inference with it. When loading the model from the .onnx file, I get the error 'Failed to find location of the openvino_telemetry file.'
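Minimally, the load-and-inference flow looks like the sketch below (the model path, function name, and input dict are hypothetical placeholders, not my actual app.py; loading the session is the step that triggers the error):

```python
def run_model(model_path, inputs):
    """Load a .onnx model and run inference on it.

    A minimal sketch; `model_path` and `inputs` are hypothetical
    placeholders (e.g. "model.onnx" and {"input": array}), not the
    real app.py code.
    """
    # Imported inside the function so the sketch stays self-contained.
    import onnxruntime as ort

    # Creating the session is where the telemetry error is raised.
    session = ort.InferenceSession(
        model_path, providers=["CPUExecutionProvider"]
    )
    # None = return all model outputs; inputs maps input names to arrays.
    return session.run(None, inputs)
```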
To reproduce
Docker file I use to build the image:
The code in app.py:
The error I get:
How can I fix this issue? Thanks!
Urgency
No response
Platform
Linux
OS Version
Amazon Linux
ONNX Runtime Installation
Other / Unknown
ONNX Runtime Version or Commit ID
I don't know
ONNX Runtime API
Python
Architecture
Other / Unknown
Execution Provider
Default CPU
Execution Provider Library Version
No response