
Encounter unknown exception in initialize using Openvino EP #19004

Open
yanzhechen opened this issue Jan 4, 2024 · 2 comments
Labels
ep:OpenVINO issues related to OpenVINO execution provider

Comments

@yanzhechen

Describe the issue

I was trying to run inference with the OpenVINO EP on an Arc 770, using the unimatch model I converted, but I got the error below.

2024-01-03 23:30:33.951900503 [E:onnxruntime:, inference_session.cc:1790 Initialize] Encountered unknown exception in Initialize()
Traceback (most recent call last):
File "/home/yanzhech/code/unimatch/onnx_test.py", line 21, in <module>
ort_session = ort.InferenceSession("unimatch.onnx", providers=["OpenVINOExecutionProvider"],
File "/home/yanzhech/.conda/envs/OFVI/lib/python3.9/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 419, in __init__
self._create_inference_session(providers, provider_options, disabled_optimizers)
File "/home/yanzhech/.conda/envs/OFVI/lib/python3.9/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 471, in _create_inference_session
sess.initialize_session(providers, provider_options, disabled_optimizers)
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Encountered unknown exception in Initialize()

I verified that the OpenVINO EP works with a simple model.
I also verified that the model runs with the CUDA EP on an Nvidia card.

To reproduce

Packages:

onnx 1.15.0
onnxruntime-openvino 1.16.0
onnxscript 0.1.0.dev20231108
opencv-python 4.8.1.78
openvino 2023.1.0
openvino-dev 2023.1.0

Test script:

import onnx
import torch
import onnxruntime as ort 

def to_numpy(tensor):
    # Add a batch dimension if missing, then convert to a NumPy array.
    if tensor.dim() == 3:
        tensor = torch.unsqueeze(tensor, dim=0)
    return tensor.detach().cpu().numpy()

print(ort.get_available_providers())
print(ort.get_device())
model = onnx.load("unimatch.onnx")
print(onnx.checker.check_model(model, full_check=True))


ort_session = ort.InferenceSession("unimatch.onnx", providers=["OpenVINOExecutionProvider"],
                                          provider_options=[{"device_type":"GPU_FP16"}])


for i in range(10):
    print(f"validation for: {i}")
    x1 = torch.randn(10, 3, 384, 512, requires_grad=False)
    x2 = torch.randn(10, 3, 384, 512, requires_grad=False)
    output = ort_session.run(None, {"input1":to_numpy(x1),"input2": to_numpy(x2)})

Model files:

unimatch.zip

Urgency

No response

Platform

Linux

OS Version

Ubuntu 22.04

ONNX Runtime Installation

Released Package

ONNX Runtime Version or Commit ID

1.16.0

ONNX Runtime API

Python

Architecture

X64

Execution Provider

OpenVINO

Execution Provider Library Version

onnxruntime-openvino 1.16.0

@github-actions github-actions bot added ep:CUDA issues related to the CUDA execution provider ep:OpenVINO issues related to OpenVINO execution provider labels Jan 4, 2024
@wschin wschin removed the ep:CUDA issues related to the CUDA execution provider label Jan 5, 2024
@jywu-msft
Member

@sfatimar, can you advise? It's using the 1.16 onnxruntime-openvino Python package.

@yanzhechen
Author

I found a way to work around this issue for now, but I think it should be fixed.

I found another ONNX model of RAFT from PINTO_model_zoo, and it runs with the OpenVINO provider on Arc 770.

The PyTorch model I converted earlier was exported with a dynamic batch size, as below:

torch.onnx.export(model, 
                  (x1, x2), 
                  "unimatch.onnx",   
                  export_params=True,        
                  opset_version=14,          
                  do_constant_folding=True,  
                  input_names = ['input1', 'input2'],  
                  output_names = ['output'], 
                  dynamic_axes={'input1' : {0 : 'batch_size'},
                                 'input2' : {0 : 'batch_size'},
                                 'output' : {0 : 'batch_size'}}
                  )

Compared with that, I simply commented out the dynamic_axes argument, and the resulting ONNX model runs without the unknown exception.
Will dynamic_axes support come back?
