
[Performance] TensorRT builder instantiation is very slow #18071

Open
gedoensmax opened this issue Oct 23, 2023 · 5 comments
Labels: ep:CUDA (issues related to the CUDA execution provider), ep:TensorRT (issues related to the TensorRT execution provider), platform:windows (issues related to the Windows platform)

Comments

@gedoensmax (Contributor)

Describe the issue

Acquiring a TRT builder object is very slow, and it is actually unnecessary work if an engine is already present on disk.
Even if the engine has to be built, the builder instance could be reused when the full graph can run on TRT. I am not sure about using the same builder instance for multiple compilations, though.
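As context for why the builder is unnecessary work in the cached case: with plain TensorRT, a serialized engine can be deserialized through nvinfer1::IRuntime alone, so createInferBuilder never has to be called. Below is a minimal standalone sketch of that idea (this is not TRT EP code; the engine file name and the logger are illustrative):

```cpp
#include <NvInfer.h>

#include <cstdio>
#include <fstream>
#include <memory>
#include <vector>

// Minimal logger the TensorRT API requires.
class Logger : public nvinfer1::ILogger {
  void log(Severity severity, const char* msg) noexcept override {
    if (severity <= Severity::kWARNING) std::fprintf(stderr, "[TRT] %s\n", msg);
  }
};

int main() {
  Logger logger;

  // Hypothetical engine file serialized by a previous run.
  std::ifstream file("model.engine", std::ios::binary | std::ios::ate);
  if (file) {
    // Cache hit: deserialize directly, no createInferBuilder() call at all.
    std::vector<char> blob(static_cast<size_t>(file.tellg()));
    file.seekg(0);
    file.read(blob.data(), static_cast<std::streamsize>(blob.size()));

    std::unique_ptr<nvinfer1::IRuntime> runtime{nvinfer1::createInferRuntime(logger)};
    std::unique_ptr<nvinfer1::ICudaEngine> engine{
        runtime->deserializeCudaEngine(blob.data(), blob.size())};
    // ... create an execution context and run inference ...
  } else {
    // Cache miss: only now pay for the slow builder instantiation.
    std::unique_ptr<nvinfer1::IBuilder> builder{nvinfer1::createInferBuilder(logger)};
    // ... parse the network, build and serialize the engine ...
  }
  return 0;
}
```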

To reproduce

I ran a LLaMA-like model using onnxruntime_perf_test.
The command line used is:
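(An illustrative placeholder follows, not the author's actual command; onnxruntime_perf_test selects the TensorRT EP with -e tensorrt, -I generates synthetic inputs, and -r sets the iteration count. The model path is hypothetical.)

```
onnxruntime_perf_test -e tensorrt -I -r 100 model.onnx
```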

I believe this issue reproduces with any model; I also tried a simple ResNet, where the impact relative to the actual load time is even worse.
I have not tested whether this issue is Windows specific, but I would assume it is less impactful on Linux while still carrying a significant performance cost.
Below is a screenshot of Superluminal (profiler) with the highlighted sections showing the instantiation call for trt_builder.


Urgency

No response

Platform

Windows

OS Version

11

ONNX Runtime Installation

Built from Source

ONNX Runtime Version or Commit ID

61ddb89

ONNX Runtime API

C++

Architecture

X64

Execution Provider

TensorRT

Execution Provider Library Version

TRT 8.6, CUDA 12

Model File

No response

Is this a quantized model?

Unknown

github-actions bot added the ep:CUDA (issues related to the CUDA execution provider), ep:TensorRT (issues related to the TensorRT execution provider), and platform:windows (issues related to the Windows platform) labels Oct 23, 2023
@gedoensmax (Contributor, Author)

@chilo-ms

@chilo-ms (Contributor)

chilo-ms commented Oct 25, 2023

I had a draft PR to address this issue. Feel free to provide feedback there.
What I did is make the TRT builder instantiated only once. We currently can't skip the very first builder instantiation, but we can significantly reduce the time for every subsequent access to the builder instance.
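In other words, the change amounts to creating the builder lazily once and handing out a reference afterwards. A rough illustrative sketch of that pattern follows (class and member names are invented here, not the actual PR code):

```cpp
#include <NvInfer.h>

#include <memory>
#include <mutex>

// Illustrative only: cache the expensive nvinfer1::IBuilder so that both
// GetCapability() and Compile() reuse one instance instead of recreating it.
class TrtBuilderCache {
 public:
  explicit TrtBuilderCache(nvinfer1::ILogger& logger) : logger_(logger) {}

  // The first call pays the full instantiation cost; subsequent calls return
  // the cached instance immediately.
  nvinfer1::IBuilder& Get() {
    std::call_once(once_, [this] {
      builder_.reset(nvinfer1::createInferBuilder(logger_));
    });
    return *builder_;
  }

 private:
  nvinfer1::ILogger& logger_;
  std::once_flag once_;
  std::unique_ptr<nvinfer1::IBuilder> builder_;
};
```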

Profiling 1:
The current TRT EP spends around 80% of the time in the TRT EP's Compile() instantiating the TRT builder, even though a builder instance was already instantiated earlier in GetCapability().
[Screenshot 2023-10-25 114755]

Profiling 2:
As you can see, the time spent in the TRT EP's Compile() to get the builder instance is now negligible, since we get a reference to the existing builder instance directly.
[Screenshot 2023-10-25 184520]

I will try to profile/verify the case where a model (FasterRCNN, MaskRCNN, ...) is partitioned into multiple TRT EP subgraphs and CUDA EP kernels.
I assume there should be an even larger time reduction there, since more builder instances are needed in that case.

@gedoensmax (Contributor, Author)

Ha, another superluminal user :D
The improvements look great.

chilo-ms added a commit that referenced this issue Nov 16, 2023
The TRT builder instantiation is slow (see [here](#18071)). In the current TRT EP, we instantiate a builder object every time we need it. There are multiple places that need the TRT builder, so this causes a huge performance overhead.

This issue has been automatically marked as stale due to inactivity and will be closed in 7 days if no further activity occurs. If further support is needed, please provide an update and/or more details.

github-actions bot added the stale (issues that have not been addressed in a while; categorized by a bot) label Nov 27, 2023
@gedoensmax (Contributor, Author)

This improved a lot due to #18100, and #18217 will enable completely bypassing the builder. I am fine with closing this. @jywu-msft, what do you think?
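As a usage note, bypassing the builder relies on the TRT EP's on-disk engine cache, which is switched on through the provider options. A hedged sketch using the public ONNX Runtime C++ API (the cache path and model path are placeholders):

```cpp
#include <onnxruntime_cxx_api.h>

int main() {
  Ort::Env env;
  Ort::SessionOptions session_options;

  // Create and fill the TensorRT provider options; with the engine cache
  // enabled, a previously serialized engine is loaded from trt_engine_cache_path.
  OrtTensorRTProviderOptionsV2* trt_options = nullptr;
  Ort::ThrowOnError(Ort::GetApi().CreateTensorRTProviderOptions(&trt_options));

  const char* keys[] = {"trt_engine_cache_enable", "trt_engine_cache_path"};
  const char* values[] = {"1", "./trt_cache"};  // "./trt_cache" is a placeholder
  Ort::ThrowOnError(
      Ort::GetApi().UpdateTensorRTProviderOptions(trt_options, keys, values, 2));

  session_options.AppendExecutionProvider_TensorRT_V2(*trt_options);
  Ort::GetApi().ReleaseTensorRTProviderOptions(trt_options);

  Ort::Session session(env, ORT_TSTR("model.onnx"), session_options);  // placeholder path
  return 0;
}
```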

github-actions bot removed the stale (issues that have not been addressed in a while; categorized by a bot) label Jan 5, 2024
kleiti pushed a commit to kleiti/onnxruntime that referenced this issue Mar 22, 2024
The TRT builder instantiation is slow (see [here](microsoft#18071)). In the current TRT EP, we instantiate a builder object every time we need it. There are multiple places that need the TRT builder, so this causes a huge performance overhead.