
Getting error when trying to use OpenVINOExecutionProvider #22405

Open
NickM-27 opened this issue Oct 11, 2024 · 2 comments
Labels
ep:OpenVINO — issues related to OpenVINO execution provider
stale — issues that have not been addressed in a while; categorized by a bot

Comments

@NickM-27

Describe the issue

We run a Python service in a Docker container; this service runs ONNX models, including support for the OpenVINO execution provider. The service is started and run under S6. We have found that running the model with

import onnxruntime as ort
ort.InferenceSession('/config/model_cache/jinaai/jina-clip-v1/vision_model_fp16.onnx', providers=['OpenVINOExecutionProvider', 'CPUExecutionProvider'], provider_options=[{'cache_dir': '/config/model_cache/openvino/ort', 'device_type': 'GPU'}, {}])

results in the error:

EP Error /onnxruntime/onnxruntime/core/session/provider_bridge_ort.cc:1637 onnxruntime::Provider& onnxruntime::ProviderLibrary::Get() [ONNXRuntimeError] : 1 : FAIL : Failed to load library libonnxruntime_providers_openvino.so with error: /usr/local/lib/python3.9/dist-packages/onnxruntime/capi/libopenvino_onnx_frontend.so.2430: undefined symbol: _ZN2ov3Any4Base9to_stringB5cxx11Ev

However, when running exactly the same code in an interactive python3 shell (as opposed to in the main Python process), it works correctly. I am hoping to understand whether this error is significant and whether it indicates what is going wrong.
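As an aside, the mangled symbol in the loader error can be demangled with c++filt (from binutils) to see which OpenVINO C++ API is missing. An undefined symbol in libopenvino_onnx_frontend.so is usually a sign of a version/ABI mismatch between the onnxruntime-openvino build and the OpenVINO libraries found at load time; a minimal diagnostic sketch, assuming c++filt is installed:

```shell
# Demangle the symbol reported by the dynamic loader to see which
# OpenVINO C++ API the installed libopenvino fails to export.
echo '_ZN2ov3Any4Base9to_stringB5cxx11Ev' | c++filt
# The demangled name (ov::Any::Base::to_string, with the cxx11 ABI tag)
# is an OpenVINO runtime API, suggesting onnxruntime-openvino was built
# against a different OpenVINO release than the one being loaded.
```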

To reproduce

import onnxruntime as ort
ort.InferenceSession('/config/model_cache/jinaai/jina-clip-v1/vision_model_fp16.onnx', providers=['OpenVINOExecutionProvider', 'CPUExecutionProvider'], provider_options=[{'cache_dir': '/config/model_cache/openvino/ort', 'device_type': 'GPU'}, {}])

using https://huggingface.co/jinaai/jina-clip-v1/resolve/main/onnx/vision_model_fp16.onnx

Urgency

No response

Platform

Linux

OS Version

Debian

ONNX Runtime Installation

Released Package

ONNX Runtime Version or Commit ID

onnxruntime-openvino 1.19.*

ONNX Runtime API

Python

Architecture

X64

Execution Provider

OpenVINO

Execution Provider Library Version

openvino 2024.3.*

@github-actions github-actions bot added the ep:OpenVINO issues related to OpenVINO execution provider label Oct 11, 2024

This issue has been automatically marked as stale due to inactivity and will be closed in 30 days if no further activity occurs. If further support is needed, please provide an update and/or more details.

@github-actions github-actions bot added the stale issues that have not been addressed in a while; categorized by a bot label Nov 10, 2024
@tete1030

tete1030 commented Dec 10, 2024

I encountered the issue as well in the frigate 0.15-beta2 Docker image, trying to use ONNX with OpenVINO on an Intel iGPU; it always fell back to CPU. After upgrading openvino inside the frigate container to 2024.4.0, the issue seems to be gone and it works well.
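That fix being an OpenVINO upgrade is consistent with a version mismatch between the onnxruntime-openvino wheel and the OpenVINO runtime shipped in the image. As a rough sanity check before installing, one could compare major.minor components of the two; a sketch with a hypothetical helper, where the "2024.3" pairing is simply the combination reported in this issue, not an official compatibility matrix:

```python
def openvino_minor_matches(installed: str, expected: str = "2024.3") -> bool:
    """Return True when two OpenVINO version strings share major.minor.

    Hypothetical helper: "2024.3" is the OpenVINO release this issue
    reports using alongside onnxruntime-openvino 1.19; check the
    onnxruntime-openvino release notes for the officially supported pairing.
    """
    return installed.split(".")[:2] == expected.split(".")[:2]

print(openvino_minor_matches("2024.3.0"))  # True: same minor release
print(openvino_minor_matches("2024.4.0"))  # False: different minor release
```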
