[Performance] TensorRT builder instantiation is very slow #18071
Comments
I had a draft PR to address this issue. Feel free to provide feedback there. (Profiling screenshots 1 and 2 were attached in the original comment.) Will try to profile/verify the case where models (FasterRCNN, MaskRCNN, ...) are partitioned into multiple TRT EP subgraphs and CUDA EP kernels.
Ha, another Superluminal user :D
The TRT builder instantiation is slow (see [here](#18071)). In the current TRT EP, we instantiate a builder object every time we need it. There are multiple places that need the TRT builder, so this causes significant performance overhead.
This issue has been automatically marked as stale due to inactivity and will be closed in 7 days if no further activity occurs. If further support is needed, please provide an update and/or more details.
This improved a lot due to #18100, and #18217 will enable completely bypassing the builder. I am fine with closing this. @jywu-msft, what do you think?
Describe the issue
Acquiring a TRT builder object is very slow, and it is actually unnecessary work if an engine is already present on disk.
But even if the engine has to be built, the builder instance could be reused when the full graph is eligible for TRT. I am not sure about using the same builder instance for multiple compilations, though.
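One way to avoid re-instantiating the builder is to create it lazily once and hand out the cached instance. The sketch below illustrates the pattern only; `Builder` is a stand-in for `nvinfer1::IBuilder`, which in the real EP would be created via `nvinfer1::createInferBuilder(logger)`, and whether one builder can safely serve multiple compilations is exactly the open question above.

```cpp
// Sketch: lazily create the TRT builder once and reuse it.
// "Builder" is a stub standing in for nvinfer1::IBuilder; the creation
// counter exists only to make the caching behavior observable.
#include <memory>
#include <mutex>

struct Builder {
  Builder() { ++creation_count; }  // in reality: expensive TRT builder setup
  static int creation_count;
};
int Builder::creation_count = 0;

// Returns the process-wide builder, constructing it exactly once.
// std::call_once keeps the lazy initialization thread-safe.
Builder* GetCachedBuilder() {
  static std::unique_ptr<Builder> cached;
  static std::once_flag once;
  std::call_once(once, [] { cached = std::make_unique<Builder>(); });
  return cached.get();
}
```

Repeated calls to `GetCachedBuilder()` return the same pointer, so the expensive construction happens at most once per process rather than at every use site.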
To reproduce
I ran a Llama-like model using onnxruntime_perf_test.
The command line used is:
I believe this issue reproduces with any model. I also tried with a simple ResNet, where the impact relative to the actual load time is even worse.
I have not tested whether this issue is Windows-specific; I would assume it is less impactful on Linux but still has a significant performance impact.
Below is a screenshot of the Superluminal profiler with the highlighted sections showing the instantiation call for `trt_builder`.
Urgency
No response
Platform
Windows
OS Version
11
ONNX Runtime Installation
Built from Source
ONNX Runtime Version or Commit ID
61ddb89
ONNX Runtime API
C++
Architecture
X64
Execution Provider
TensorRT
Execution Provider Library Version
TRT 8.6, CUDA 12
Model File
No response
Is this a quantized model?
Unknown