Describe the issue

I want to use onnxruntime with the TensorRT EP, but it fails. This is my code:
from ultralytics import YOLO
import onnxruntime as ort

model = YOLO("yolov8n.pt")
model.export(format="engine")
sess = ort.InferenceSession('./yolov8n.engine', providers=['TensorrtExecutionProvider'])
I can't load the model, but I can use ultralytics.YOLO.predict to run the model with TensorRT.
---------------------------------------------------------------------------
InvalidProtobuf                           Traceback (most recent call last)
Cell In[3], line 1
----> 1 sess = ort.InferenceSession('./yolov8n.engine',
      2                             providers=['TensorrtExecutionProvider'])

File c:\Users\TL\anaconda3\envs\3_11\Lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py:419, in InferenceSession.__init__(self, path_or_bytes, sess_options, providers, provider_options, **kwargs)
    416 disabled_optimizers = kwargs.get("disabled_optimizers")
    418 try:
--> 419     self._create_inference_session(providers, provider_options, disabled_optimizers)
    420 except (ValueError, RuntimeError) as e:
    421     if self._enable_fallback:

File c:\Users\TL\anaconda3\envs\3_11\Lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py:480, in InferenceSession._create_inference_session(self, providers, provider_options, disabled_optimizers)
    477 self._register_ep_custom_ops(session_options, providers, provider_options, available_providers)
    479 if self._model_path:
--> 480     sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
    481 else:
    482     sess = C.InferenceSession(session_options, self._model_bytes, False, self._read_config_from_model)

InvalidProtobuf: [ONNXRuntimeError] : 7 : INVALID_PROTOBUF : Load model from ./model.trt failed:Protobuf parsing failed.
Can anyone tell me what's wrong with it? Thanks.
To reproduce

Python 3.11.9
torch 2.4.0
CUDA:0 (NVIDIA GeForce RTX 4070 Laptop GPU, 8188MiB)
onnxruntime-gpu 1.19.0
Urgency
No response

Platform
Windows

OS Version
11

ONNX Runtime Installation
Released Package

ONNX Runtime Version or Commit ID
1.19.0

ONNX Runtime API
Python

Architecture
X64

Execution Provider
TensorRT

Execution Provider Library Version
CUDA 11.4, TensorRT 10.0.1.6, cuDNN 8.9.7

Model File
No response

Is this a quantized model?
No
sess = ort.InferenceSession('./yolov8n.engine', providers=['TensorrtExecutionProvider'])
The input file for InferenceSession must be an ONNX model, not a TensorRT engine.
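For reference, here is a minimal sketch of the corrected flow (assuming onnxruntime-gpu with the TensorRT execution provider is installed and the default 640x640 YOLOv8 export): export to ONNX instead of a TensorRT engine, then pass the .onnx file to InferenceSession and let ONNX Runtime build the TensorRT engine internally.

from ultralytics import YOLO
import numpy as np
import onnxruntime as ort

# Export the model to ONNX (writes yolov8n.onnx) rather than a serialized TRT engine.
model = YOLO("yolov8n.pt")
model.export(format="onnx")

# The TensorRT EP compiles the ONNX graph into an engine itself;
# CUDA/CPU are listed as fallbacks for any nodes TensorRT cannot handle.
sess = ort.InferenceSession(
    "yolov8n.onnx",
    providers=["TensorrtExecutionProvider", "CUDAExecutionProvider", "CPUExecutionProvider"],
)

# Quick sanity check with a dummy input (the default YOLOv8n export expects 1x3x640x640 float32).
dummy = np.zeros((1, 3, 640, 640), dtype=np.float32)
outputs = sess.run(None, {sess.get_inputs()[0].name: dummy})
print([o.shape for o in outputs])

If rebuilding the engine on every run is too slow, the TensorRT EP's provider options (for example trt_engine_cache_enable and trt_engine_cache_path) can cache the compiled engine between sessions.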
Thanks a lot!