How can I implement custom operators in Python? #19820
Labels
- ep:TensorRT (issues related to TensorRT execution provider)
- stale (issues that have not been addressed in a while; categorized by a bot)
Describe the issue
I trained a model using TAO from NVIDIA and converted this model to ONNX. It appears that the model contains some custom operators that do not exist in onnxruntime, such as ProposalDynamic (named in the error below).
Because these operators do not exist, I am getting this error:
```python
import onnxruntime as ort
sess = ort.InferenceSession('model.onnx')
```
=>
```
InvalidGraph: [ONNXRuntimeError] : 10 : INVALID_GRAPH : Load model from ./model.onnx failed:This is an invalid model. In Node, ("proposal", ProposalDynamic, "", -1) : ("sigmoid_output": tensor(float),"convolution_output1": tensor(float),) -> ("proposal_out": tensor(float),) , Error No Op registered for ProposalDynamic with domain_version of 12
```
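The loader only reports the first unregistered op it hits. As a first step it can help to enumerate every op type the graph uses, so all missing operators are known up front. A minimal sketch; the `onnx.load` usage in the comment assumes the `onnx` Python package is installed and uses `model.onnx` as a placeholder path:

```python
# Sketch: list the distinct op types used by a model's graph, to spot every
# custom operator at once instead of hitting them one error at a time.
def list_op_types(nodes):
    # `nodes` is any iterable of objects with an `op_type` attribute,
    # e.g. model.graph.node from the `onnx` package.
    return sorted({node.op_type for node in nodes})

# Hypothetical usage, assuming the `onnx` package is installed:
#   import onnx
#   model = onnx.load("model.onnx")
#   print(list_op_types(model.graph.node))
```

Any op types in that list that are not part of the standard ONNX operator set (such as ProposalDynamic here) are the ones needing a custom implementation.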
NVIDIA support suggests reimplementing these operators from scratch in Python so that the model runs with onnxruntime on my Raspberry Pi. Since these plugins are already written in C++, I am wondering whether there is a workaround to feed that existing implementation to onnxruntime from Python. It would be a real pity to reinvent the wheel!
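For the pure-Python route, one possible path (not verified against this model) is onnxruntime-extensions, which lets a Python function back a custom op. A minimal sketch; the op body is a placeholder, since ProposalDynamic's exact semantics are not given here, and the registration code in the comments assumes onnxruntime-extensions is installed:

```python
# Sketch: backing the missing op with a Python function. This body is a
# placeholder, NOT the real TensorRT proposal layer, which would have to be
# ported from the NVIDIA plugin source.
def proposal_dynamic(scores, deltas):
    # The real ProposalDynamic turns objectness scores and box deltas into
    # region proposals; as a stand-in, this stub just passes scores through.
    return scores

# Hypothetical registration via onnxruntime-extensions, if installed:
#
#   from onnxruntime_extensions import onnx_op, PyCustomOpDef, get_library_path
#
#   @onnx_op(op_type="ProposalDynamic",
#            inputs=[PyCustomOpDef.dt_float, PyCustomOpDef.dt_float],
#            outputs=[PyCustomOpDef.dt_float])
#   def proposal_dynamic_op(scores, deltas):
#       return proposal_dynamic(scores, deltas)
#
#   import onnxruntime as ort
#   so = ort.SessionOptions()
#   so.register_custom_ops_library(get_library_path())
#   sess = ort.InferenceSession("model.onnx", so)
#
# Caveat: onnxruntime-extensions registers Python ops under the
# "ai.onnx.contrib" domain, while the TAO export places ProposalDynamic in the
# default domain, so the node's `domain` attribute would need rewriting
# (e.g. with the onnx Python API) before the session can resolve it.
```

Alternatively, since the plugins already exist in C++, wrapping them as an ONNX Runtime custom-op shared library and loading it with `SessionOptions.register_custom_ops_library` might avoid the Python port entirely, though the TensorRT plugin and ORT custom-op interfaces differ, so some glue code would still be needed.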
Thanks
To reproduce
Urgency
Very urgent as I want to start running an existing experiment next week
Platform
Linux
OS Version
20.04.6
ONNX Runtime Installation
Built from Source
ONNX Runtime Version or Commit ID
bff4f8b
ONNX Runtime API
Python
Architecture
X86
Execution Provider
CUDA
Execution Provider Library Version
No response