Some problems about the onnx-tensorrt source code. #20029
Comments
Could anyone help me?
This is a standard ONNX model, and you should be able to open it with Netron.
That's a good question.
This issue has been automatically marked as stale due to inactivity and will be closed in 30 days if no further activity occurs. If further support is needed, please provide an update and/or more details.
Describe the issue
Hello.
I am learning the onnx-tensorrt code, and I have some questions:
I got the generated files, but I could not open them in netron.app.
So my question is:
Is this file a standard ONNX model, and what is the difference between this model and my source model?
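One quick way to probe this without Netron: a standard ONNX file is a serialized `ModelProto` protobuf whose first field (`ir_version`, field number 1, varint) normally serializes to a leading `0x08` byte. The sketch below is only a rough heuristic under that assumption, not a full validity check (for that, `onnx.checker.check_model` would be the proper tool); the byte strings are hand-built illustrations, not real files.

```python
def looks_like_onnx(data: bytes) -> bool:
    """Rough heuristic: does this byte stream plausibly start a
    serialized ONNX ModelProto? (protobuf tag for ir_version = 0x08)"""
    return len(data) >= 2 and data[0] == 0x08

# A minimal hand-built ModelProto prefix: ir_version = 7.
onnx_like = b"\x08\x07"
# Hypothetical non-ONNX bytes (e.g., the start of some other serialized
# binary) that do not begin with the expected protobuf tag.
not_onnx = b"\x00\x00\x00\x01"

print(looks_like_onnx(onnx_like))  # True
print(looks_like_onnx(not_onnx))   # False
```

If a dumped file fails even this check, it is likely not an ONNX model at all but some other serialized artifact (for example, an engine cache), which would explain why Netron cannot open it.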
I can see some variables in the compile method:
input_map
output_map
input_indexes
output_indexes
I see that after the code builds the engine, it looks up input_map to build input_indexes.
So my question is:
Is there a need to do this lookup all over again?
Since the TensorRT engine is built from my original ONNX model, shouldn't the two sets of inputs and outputs be the same?
To reproduce
None
Urgency
No response
Platform
Windows
OS Version
WIN10
ONNX Runtime Installation
Built from Source
ONNX Runtime Version or Commit ID
v1.16.3
ONNX Runtime API
C++
Architecture
X64
Execution Provider
TensorRT
Execution Provider Library Version
No response