
How can I implement custom operators in python? #19820

Open
nasserdr opened this issue Mar 7, 2024 · 2 comments
Labels
ep:TensorRT (issues related to TensorRT execution provider) · stale (issues that have not been addressed in a while; categorized by a bot)

Comments


nasserdr commented Mar 7, 2024

Describe the issue

I trained a model using TAO from NVIDIA and converted this model to ONNX. It appears that the model contains some custom operators that do not exist in onnxruntime, such as ProposalDynamic.

Because these operators do not exist, I am getting this error:
import onnxruntime as ort
sess = ort.InferenceSession('model.onnx')

=>

InvalidGraph: [ONNXRuntimeError] : 10 : INVALID_GRAPH : Load model from ./model.onnx failed:This is an invalid model. In Node, ("proposal", ProposalDynamic, "", -1) : ("sigmoid_output": tensor(float),"convolution_output1": tensor(float),) -> ("proposal_out": tensor(float),) , Error No Op registered for ProposalDynamic with domain_version of 12

NVIDIA support suggests that these should be implemented from scratch in Python so that I can get them running with onnxruntime on my Raspberry Pi. I am wondering whether there is a workaround to feed such an implementation to onnxruntime in Python, as these plugins are already programmed in C++. It would be a real pity to reinvent the wheel!

Thanks
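
For reference, onnxruntime-extensions can register a plain Python function as a custom-operator kernel. A minimal sketch, assuming the ProposalDynamic node is rewritten into the ai.onnx.contrib domain (the domain onnxruntime-extensions registers Python ops under) and using a placeholder body rather than the real proposal logic:

import numpy as np
import onnxruntime as ort
from onnxruntime_extensions import onnx_op, PyOp, get_library_path

# Assumption: the ProposalDynamic node has been rewritten into the
# ai.onnx.contrib domain; onnxruntime-extensions only dispatches
# Python ops registered under that domain.
@onnx_op(op_type="ProposalDynamic",
         inputs=[PyOp.dt_float, PyOp.dt_float],
         outputs=[PyOp.dt_float])
def proposal_dynamic(sigmoid_output, convolution_output):
    # Placeholder body; the real proposal logic from the TensorRT
    # plugin would have to be ported here (e.g. with numpy).
    return np.zeros((1, 100, 4), dtype=np.float32)

so = ort.SessionOptions()
so.register_custom_ops_library(get_library_path())
sess = ort.InferenceSession("model.onnx", so, providers=["CPUExecutionProvider"])

A pure-Python kernel will be slow, but it avoids rebuilding the C++ plugins for the Pi.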

To reproduce

  • Download the model from here.
import onnx
import onnxruntime

model_path = "model_name.onnx"
# Fails with: No Op registered for ProposalDynamic with domain_version of 12
ort_session = onnxruntime.InferenceSession(model_path)
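
As a quick check before creating a session, the graph can be inspected with onnx to list which operator types the model uses, so every missing custom op is identified up front:

import onnx

model = onnx.load("model_name.onnx")
# Distinct op types in the graph; anything outside the standard ONNX
# opset (e.g. ProposalDynamic) needs a custom implementation.
print(sorted({node.op_type for node in model.graph.node}))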

Urgency

Very urgent as I want to start running an existing experiment next week

Platform

Linux

OS Version

20.04.6

ONNX Runtime Installation

Built from Source

ONNX Runtime Version or Commit ID

bff4f8b

ONNX Runtime API

Python

Architecture

X86

Execution Provider

CUDA

Execution Provider Library Version

No response

jywu-msft (Member) commented:

Hi, your issue mentions TensorRT plugins, the CUDA execution provider, x86, and also Raspberry Pi, so it's not clear to me which platform you are actually targeting.
Those TensorRT plugins can be loaded via the onnxruntime TensorRT execution provider, but I'm not sure TensorRT is what you want to experiment with.
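
For completeness, a minimal sketch of that route: the TensorRT execution provider accepts a trt_extra_plugin_lib_paths option pointing at extra plugin libraries (the library path below is hypothetical, and this requires a machine where TensorRT itself runs, which a Raspberry Pi is not):

import onnxruntime as ort

# Hypothetical path to a library containing the TAO operators
# (e.g. ProposalDynamic) compiled as TensorRT plugins.
plugin_lib = "/usr/local/lib/libnvinfer_plugin_tao.so"

providers = [
    ("TensorrtExecutionProvider", {"trt_extra_plugin_lib_paths": plugin_lib}),
    "CUDAExecutionProvider",
    "CPUExecutionProvider",
]
sess = ort.InferenceSession("model.onnx", providers=providers)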

github-actions bot (Contributor) commented:

This issue has been automatically marked as stale due to inactivity and will be closed in 30 days if no further activity occurs. If further support is needed, please provide an update and/or more details.
