
Can I use onnxruntime-gpu with pypi nvidia-tensorrt package? #13501

Closed
feldman-cortica opened this issue Oct 29, 2022 · 3 comments
Labels
ep:TensorRT issues related to TensorRT execution provider

Comments

@feldman-cortica

feldman-cortica commented Oct 29, 2022

Describe the issue

I want to run an .onnx model in Python with the TensorrtExecutionProvider. Can I use the nvidia-tensorrt Python package for this instead of a full TensorRT installation, perhaps with some additional setting of the LD_LIBRARY_PATH and CUDA_HOME environment variables?

To reproduce

I installed onnxruntime-gpu==1.13.1 and nvidia-tensorrt==8.4.1.5 inside a Python 3.8 virtual environment. No CUDA or TensorRT is installed system-wide on the PC. The GPU is an RTX 3080 with nvidia-driver==515.

(venv) pc@pc:~/_onnx$ python
Python 3.8.10 (default, Jun 22 2022, 20:18:18) 
[GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import onnxruntime as ort
>>> ort.get_available_providers()
['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']
>>> model_path = 'model.onnx'
>>> ort_sess = ort.InferenceSession(model_path, providers=ort.get_available_providers())
2022-10-29 17:12:24.794133746 [W:onnxruntime:Default, onnxruntime_pybind_state.cc:552 CreateExecutionProviderInstance] Failed to create TensorrtExecutionProvider. Please reference https://onnxruntime.ai/docs/execution-providers/TensorRT-ExecutionProvider.html#requirements to ensure all dependencies are met.
2022-10-29 17:12:24.794153082 [W:onnxruntime:Default, onnxruntime_pybind_state.cc:578 CreateExecutionProviderInstance] Failed to create CUDAExecutionProvider. Please reference https://onnxruntime.ai/docs/reference/execution-providers/CUDA-ExecutionProvider.html#requirements to ensure all dependencies are met.
>>> ort_sess.get_providers()
['CPUExecutionProvider']
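
The transcript above shows the session silently falling back to CPUExecutionProvider. A small guard can turn that fallback into an error instead (a sketch; `require_provider` is a hypothetical helper, not an onnxruntime API):

```python
def require_provider(session, name):
    """Raise if the requested execution provider did not actually load."""
    active = session.get_providers()
    if name not in active:
        raise RuntimeError(f"{name} is not active; session fell back to {active}")
    return session

# Usage with a real onnxruntime session:
# import onnxruntime as ort
# sess = require_provider(
#     ort.InferenceSession("model.onnx", providers=ort.get_available_providers()),
#     "TensorrtExecutionProvider",
# )
```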

Urgency

No response

Platform

Linux

OS Version

Ubuntu 20.04

ONNX Runtime Installation

Released Package

ONNX Runtime Version or Commit ID

1.13.1

ONNX Runtime API

Python

Architecture

X64

Execution Provider

TensorRT

Execution Provider Library Version

nvidia-tensorrt 8.4.1.5

@github-actions github-actions bot added the ep:CUDA issues related to the CUDA execution provider label Oct 29, 2022
@feldman-cortica feldman-cortica changed the title Can I use onnxruntime-gpu with tensorrt with pypi nvidia-tensorrt package? Can I use onnxruntime-gpu with pypi nvidia-tensorrt package? Oct 29, 2022
@snnn snnn added ep:TensorRT issues related to TensorRT execution provider and removed ep:CUDA issues related to the CUDA execution provider labels Oct 29, 2022
@jywu-msft
Member

Yes, it should work. LD_LIBRARY_PATH should point to the TensorRT library location, i.e. where libnvinfer* and the other TensorRT .so files reside.
The same holds for the CUDA libraries. (Do you have CUDA installed separately? The nvidia-tensorrt package wouldn't include CUDA.)
You can run ldd on libonnxruntime_providers_cuda.so and libonnxruntime_providers_tensorrt.so to confirm that all their dependencies can be found.
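
The suggestion above can be sketched as a shell snippet. Assumptions: the pip wheels place their shared libraries under `site-packages/nvidia/*/lib` (as in the directory listing later in this thread) and a `site-packages/tensorrt` directory; the exact layout varies by wheel version.

```shell
# Locate this interpreter's site-packages (the wheel layout below is an assumption).
SITE=$(python3 -c "import sysconfig; print(sysconfig.get_paths()['purelib'])")

# Collect every nvidia/*/lib directory shipped by the pip wheels, if any.
NVLIBS=$(find "$SITE/nvidia" -maxdepth 2 -type d -name lib 2>/dev/null | paste -sd: -)

# Prepend them, plus the (hypothetical) tensorrt package dir, to the loader path.
export LD_LIBRARY_PATH="$NVLIBS:$SITE/tensorrt:$LD_LIBRARY_PATH"
echo "$LD_LIBRARY_PATH"
```

Set this before starting Python, so the dynamic loader can resolve libnvinfer and the CUDA libraries when the TensorRT provider is created.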

@feldman-cortica
Author

@jywu-msft thanks for the advice!

ldd failed:

pc@pc:~/_onnx$ ldd venv/lib/python3.8/site-packages/onnxruntime/capi/libonnxruntime_providers_tensorrt.so
Inconsistency detected by ld.so: dl-version.c: 205: _dl_check_map_versions: Assertion `needed != NULL' failed!
pc@pc:~/_onnx$ ldd venv/lib/python3.8/site-packages/onnxruntime/capi/libonnxruntime_providers_cuda.so 
Inconsistency detected by ld.so: dl-version.c: 205: _dl_check_map_versions: Assertion `needed != NULL' failed!
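
When ldd itself crashes like this, one alternative diagnostic (a sketch, not part of onnxruntime) is to ask the dynamic loader directly from Python via ctypes; the OSError message names the first dependency the loader cannot resolve:

```python
import ctypes

def try_load(path):
    """Attempt to dlopen a shared library; return None on success, else the loader's error text."""
    try:
        ctypes.CDLL(path)
        return None
    except OSError as exc:
        return str(exc)

# Sanity check with a library that is always present on Linux:
print(try_load("libm.so.6"))
# Then point it at the provider libraries, e.g.:
# print(try_load("venv/lib/python3.8/site-packages/onnxruntime/capi/libonnxruntime_providers_tensorrt.so"))
```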

By the way, I see lots of CUDA .so libs, including libcudart.so and libcudnn.so:

pc@pc:~/_onnx$ ls -alh venv/lib/python3.8/site-packages/nvidia/*/lib
venv/lib/python3.8/site-packages/nvidia/cublas/lib:
total 640M
drwxrwxr-x 3 pc pc 4.0K Oct 29 00:15 .
drwxrwxr-x 5 pc pc 4.0K Oct 29 00:15 ..
-rw-rw-r-- 1 pc pc    0 Oct 29 00:15 __init__.py
-rw-rw-r-- 1 pc pc 548M Oct 29 00:15 libcublasLt.so.11
-rw-rw-r-- 1 pc pc  91M Oct 29 00:15 libcublas.so.11
-rw-rw-r-- 1 pc pc 728K Oct 29 00:15 libnvblas.so.11
drwxrwxr-x 2 pc pc 4.0K Oct 29 00:15 __pycache__

venv/lib/python3.8/site-packages/nvidia/cuda_runtime/lib:
total 708K
drwxrwxr-x 3 pc pc 4.0K Oct 29 00:15 .
drwxrwxr-x 5 pc pc 4.0K Oct 29 00:15 ..
-rw-rw-r-- 1 pc pc    0 Oct 29 00:15 __init__.py
-rw-rw-r-- 1 pc pc 664K Oct 29 00:15 libcudart.so.11.0
-rw-rw-r-- 1 pc pc  31K Oct 29 00:15 libOpenCL.so.1
drwxrwxr-x 2 pc pc 4.0K Oct 29 00:15 __pycache__

venv/lib/python3.8/site-packages/nvidia/cudnn/lib:
total 1.1G
drwxrwxr-x 3 pc pc 4.0K Oct 29 00:15 .
drwxrwxr-x 5 pc pc 4.0K Oct 29 00:15 ..
-rw-rw-r-- 1 pc pc    0 Oct 29 00:15 __init__.py
-rw-rw-r-- 1 pc pc 125M Oct 29 00:15 libcudnn_adv_infer.so.8
-rw-rw-r-- 1 pc pc 108M Oct 29 00:15 libcudnn_adv_train.so.8
-rw-rw-r-- 1 pc pc 602M Oct 29 00:15 libcudnn_cnn_infer.so.8
-rw-rw-r-- 1 pc pc  99M Oct 29 00:15 libcudnn_cnn_train.so.8
-rw-rw-r-- 1 pc pc  94M Oct 29 00:15 libcudnn_ops_infer.so.8
-rw-rw-r-- 1 pc pc  72M Oct 29 00:15 libcudnn_ops_train.so.8
-rw-rw-r-- 1 pc pc 147K Oct 29 00:15 libcudnn.so.8
drwxrwxr-x 2 pc pc 4.0K Oct 29 00:15 __pycache__

All these libs were installed as nvidia-tensorrt dependencies. Am I correct that the runtime CUDA libraries listed above are sufficient for running onnxruntime-gpu with TensorRT?

@snnn
Member

snnn commented Sep 21, 2023

Duplicate of #17537.

Please try it again with the latest ONNX Runtime release. The error message should be different now.

@snnn snnn closed this as completed Sep 21, 2023