Can I use onnxruntime-gpu with pypi nvidia-tensorrt package? #13501
Yes, it should work. `LD_LIBRARY_PATH` should point to the TensorRT library location, i.e. where `libnvinfer*` and the other TensorRT `.so` files reside.
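A minimal sketch of how one might locate that directory from Python, assuming the pip wheel places its `.so` files next to the `tensorrt` package's `__init__.py` (an assumption about the wheel layout; `package_lib_dir` is a hypothetical helper, not part of any library):

```python
import importlib.util
import os


def package_lib_dir(package: str) -> str:
    """Return the directory a pip-installed package was imported from.

    Assumption: the nvidia-tensorrt wheel ships libnvinfer* alongside
    the package's __init__.py, so this directory is what LD_LIBRARY_PATH
    should include.
    """
    spec = importlib.util.find_spec(package)
    if spec is None or spec.origin is None:
        raise ModuleNotFoundError(package)
    return os.path.dirname(spec.origin)


# Example with a stdlib package just to show the mechanics;
# in practice you would pass "tensorrt".
print(package_lib_dir("json"))
```

Note that on glibc the dynamic loader reads `LD_LIBRARY_PATH` at process start, so the variable should be exported in the shell before launching Python rather than set via `os.environ` afterwards.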
@jywu-msft Thanks for the advice! `ldd` failed:
By the way, I see lots of CUDA `.so` libs, including `libcudart.so` and `libcudnn.so`:
All these libs were installed as
Duplicate of #17537. Please try again with the latest ONNX Runtime release; the error message should be different now.
Describe the issue
I want to run an `.onnx` model in Python with the `TensorrtExecutionProvider`. Can I use the `nvidia-tensorrt` Python package for this instead of a full TensorRT installation, perhaps with some additional setting of the `LD_LIBRARY_PATH` and `CUDA_HOME` env vars?
To reproduce
I installed `onnxruntime-gpu==1.13.1` and `nvidia-tensorrt==8.4.1.5` inside a Python 3.8 virtual environment. No CUDA or TensorRT is installed on the PC. The GPU is an RTX 3080, with `nvidia-driver==515`.
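For reference, requesting the TensorRT execution provider in onnxruntime-gpu is usually done by passing a provider priority list to the session; a sketch under that assumption (the session-creation line is commented out because it needs onnxruntime-gpu installed and a real model file):

```python
# Provider priority list: ONNX Runtime tries each in order and falls
# back to the next if one fails to load (e.g. libnvinfer not found).
providers = [
    "TensorrtExecutionProvider",  # needs the TensorRT .so files on the loader path
    "CUDAExecutionProvider",      # fallback to plain CUDA
    "CPUExecutionProvider",       # always available
]

# import onnxruntime as ort
# session = ort.InferenceSession("model.onnx", providers=providers)
# print(session.get_providers())  # shows which providers actually loaded
```

If TensorRT fails to initialize, the session silently falls back to the next provider, so checking `session.get_providers()` is a quick way to confirm the TensorRT EP was actually used.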
Urgency
No response
Platform
Linux
OS Version
Ubuntu 20.04
ONNX Runtime Installation
Released Package
ONNX Runtime Version or Commit ID
1.13.1
ONNX Runtime API
Python
Architecture
X64
Execution Provider
TensorRT
Execution Provider Library Version
nvidia-tensorrt 8.4.1.5