[Feature Request] Detect TensorRT cache creation #22244
Labels
- ep:TensorRT (issues related to the TensorRT execution provider)
- feature request (request for unsupported feature or enhancement)
Describe the feature request
We have the problem that creating the `.engine` and `.profile` cache files for TensorRT takes too much time, and there is no feedback telling the user what is actually happening. I therefore suggest adding a general `get_state` method to `onnxruntime.InferenceSession` so one can find out why the session is currently blocked, e.g. `is_processing`, `is_caching`, etc. For now I would also be happy with any other way to detect the "cache creation" phase.
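As a stopgap until such an API exists, one could at least tell the user *before* session creation whether a slow engine build is likely, by checking whether the TensorRT cache directory already contains serialized files. This is a minimal sketch, assuming the cache directory is the one passed to the provider as `trt_engine_cache_path`; the helper name `cache_is_populated` is hypothetical, not part of onnxruntime:

```python
import os

def cache_is_populated(cache_dir: str) -> bool:
    """Return True if the directory already holds TensorRT cache files
    (.engine/.profile), i.e. session creation should be fast.
    Hypothetical helper, not an onnxruntime API."""
    if not os.path.isdir(cache_dir):
        return False
    return any(
        name.endswith((".engine", ".profile"))
        for name in os.listdir(cache_dir)
    )

# Usage sketch: warn the user before the potentially long build.
# cache_dir = "./trt_cache"
# if not cache_is_populated(cache_dir):
#     print("No TensorRT cache found; first session creation may take minutes...")
# session = onnxruntime.InferenceSession(model_path, providers=[...])
```

This only detects "cache exists vs. not"; it cannot report progress during the build itself, which is why a real `get_state`-style API on the session would still be valuable.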
Describe scenario use case
Give the user better feedback while creating caches for TensorRT.