
[Bug] [ONNXRuntimeError] : 7 : INVALID_PROTOBUF : Load model from yolov8n.enginefailed:Protobuf parsing failed. #21841

Closed
Frank-fjh opened this issue Aug 23, 2024 · 2 comments
Labels
performance (issues related to performance regressions)

Comments

@Frank-fjh

Frank-fjh commented Aug 23, 2024

Describe the issue

I want to use onnxruntime with the TensorRT EP, but it fails. This is my code:

from ultralytics import YOLO
import onnxruntime as ort

model = YOLO("yolov8n.pt")
model.export(format="engine")  # exports a TensorRT engine file (yolov8n.engine)
sess = ort.InferenceSession('./yolov8n.engine', providers=['TensorrtExecutionProvider'])

I can't load the model, but I can use ultralytics.YOLO.predict to run the model with TensorRT.

---------------------------------------------------------------------------
InvalidProtobuf                           Traceback (most recent call last)
Cell In[3], line 1
----> 1 sess = ort.InferenceSession('./yolov8n.engine', 
      2                             providers=['TensorrtExecutionProvider'])

File c:\Users\TL\anaconda3\envs\3_11\Lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py:419, in InferenceSession.__init__(self, path_or_bytes, sess_options, providers, provider_options, **kwargs)
    416 disabled_optimizers = kwargs.get("disabled_optimizers")
    418 try:
--> 419     self._create_inference_session(providers, provider_options, disabled_optimizers)
    420 except (ValueError, RuntimeError) as e:
    421     if self._enable_fallback:

File c:\Users\TL\anaconda3\envs\3_11\Lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py:480, in InferenceSession._create_inference_session(self, providers, provider_options, disabled_optimizers)
    477 self._register_ep_custom_ops(session_options, providers, provider_options, available_providers)
    479 if self._model_path:
--> 480     sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
    481 else:
    482     sess = C.InferenceSession(session_options, self._model_bytes, False, self._read_config_from_model)

InvalidProtobuf: [ONNXRuntimeError] : 7 : INVALID_PROTOBUF : Load model from ./model.trt failed:Protobuf parsing failed.

Can anyone tell me what's wrong with it? Thanks.

To reproduce

Python-3.11.9
torch-2.4.0
CUDA:0 (NVIDIA GeForce RTX 4070 Laptop GPU, 8188MiB)
onnxruntime-gpu-1.19.0

Urgency

No response

Platform

Windows

OS Version

11

ONNX Runtime Installation

Released Package

ONNX Runtime Version or Commit ID

1.19.0

ONNX Runtime API

Python

Architecture

X64

Execution Provider

TensorRT

Execution Provider Library Version

CUDA 11.4, TensorRT 10.0.1.6, cuDNN 8.9.7

Model File

No response

Is this a quantized model?

No

@Frank-fjh Frank-fjh added the performance (issues related to performance regressions) label on Aug 23, 2024
@tianleiwu
Contributor

tianleiwu commented Aug 27, 2024

sess = ort.InferenceSession('./yolov8n.engine', providers=['TensorrtExecutionProvider'])

The input file for the inference session should be an ONNX model, not a TRT engine.
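
A minimal sketch of the intended flow, assuming the Ultralytics exporter is used to produce the ONNX file (file names and the provider fallback list below are illustrative):

from ultralytics import YOLO
import onnxruntime as ort

# Export to ONNX instead of a TensorRT engine; ONNX Runtime's TensorRT EP
# builds and runs the engine from the ONNX graph when the session is created.
model = YOLO("yolov8n.pt")
model.export(format="onnx")  # writes yolov8n.onnx

sess = ort.InferenceSession(
    "yolov8n.onnx",
    providers=["TensorrtExecutionProvider", "CUDAExecutionProvider", "CPUExecutionProvider"],
)

If rebuilding the engine on every run is a concern, the TensorRT EP also supports engine caching through provider options (for example trt_engine_cache_enable), rather than loading a standalone .engine file.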

@Frank-fjh
Author

sess = ort.InferenceSession('./yolov8n.engine', providers=['TensorrtExecutionProvider'])

The input file for the inference session should be an ONNX model, not a TRT engine.

Thanks a lot!
