
[Documentation] Clearer TRT dependencies #18073

Open
gedoensmax opened this issue Oct 24, 2023 · 5 comments
Labels
documentation improvements or additions to documentation; typically submitted using template ep:CUDA issues related to the CUDA execution provider ep:TensorRT issues related to TensorRT execution provider stale issues that have not been addressed in a while; categorized by a bot

Comments

@gedoensmax (Contributor)

Describe the documentation issue

Is it true that the TRT EP needs the same dependencies as the CUDA EP? The documentation seems to imply that this is the case.
But if the TRT EP is built without the CUDA EP, shouldn't the dependencies be limited to TensorRT and cudart? In that case it would not link against any other library; at least the ORT source should not require any linking beyond that.

Page / URL

https://onnxruntime.ai/docs/execution-providers/TensorRT-ExecutionProvider.html#requirements

@gedoensmax gedoensmax added the documentation improvements or additions to documentation; typically submitted using template label Oct 24, 2023
@github-actions github-actions bot added ep:CUDA issues related to the CUDA execution provider ep:TensorRT issues related to TensorRT execution provider labels Oct 24, 2023
@RyanUnderhill (Member) commented Oct 27, 2023

The documentation says this: "Note that it is recommended you also register CUDAExecutionProvider to allow Onnx Runtime to assign nodes to CUDA execution provider that TensorRT does not support."

So it's not required, but recommended, to avoid having the CPU provider as the fallback for anything that TensorRT doesn't support.
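As a sketch of what that recommendation looks like in practice: the Python API takes an ordered provider-priority list, where ONNX Runtime tries each provider in turn for nodes the earlier ones can't handle. The provider identifiers below are the real onnxruntime names; the `InferenceSession` call is shown only as a comment (with a placeholder model path) so the snippet stays self-contained without the package installed.

```python
# Provider priority list: ONNX Runtime assigns each node to the first
# provider in this list that supports it.
providers = [
    "TensorrtExecutionProvider",  # tried first
    "CUDAExecutionProvider",      # recommended fallback for nodes TRT can't run
    "CPUExecutionProvider",       # final fallback
]

# With onnxruntime-gpu installed, the list would be passed like this
# ("model.onnx" is a placeholder path):
#   import onnxruntime as ort
#   sess = ort.InferenceSession("model.onnx", providers=providers)
```

Dropping `"CUDAExecutionProvider"` from the list is what leads to the CPU-fallback behavior described above.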

In terms of building, it appears to depend on cuda/cudnn/cublas:

https://github.com/microsoft/onnxruntime/blob/9c323106735535b6dab6b476648faac0ad185e21/cmake/onnxruntime_providers_tensorrt.cmake#L85C8-L85C8

@gedoensmax (Contributor, Author)

I see, thanks. I think we should double-check with every TRT release whether cuDNN and cuBLAS are really strict dependencies. TRT has been dropping its dependency on them in the latest releases. I will check internally where we stand on that.

github-actions bot

This issue has been automatically marked as stale due to inactivity and will be closed in 7 days if no further activity occurs. If further support is needed, please provide an update and/or more details.

@github-actions github-actions bot added the stale issues that have not been addressed in a while; categorized by a bot label Nov 26, 2023
@gedoensmax (Contributor, Author)

This is related to #18542.

@github-actions github-actions bot removed the stale issues that have not been addressed in a while; categorized by a bot label Nov 27, 2023

github-actions bot commented Jan 4, 2024

This issue has been automatically marked as stale due to inactivity and will be closed in 30 days if no further activity occurs. If further support is needed, please provide an update and/or more details.

@github-actions github-actions bot added the stale issues that have not been addressed in a while; categorized by a bot label Jan 4, 2024
2 participants