
[Training] Could not find an implementation for Cos(7) node #19414

Closed
SunHaoOne opened this issue Feb 5, 2024 · 2 comments
Labels
training issues related to ONNX Runtime training; typically submitted using template

Comments

@SunHaoOne

Describe the issue

Hello, I'm working with two separate models: an encoder and a decoder. Individually, exporting either model works without any issues. However, I encountered a problem when trying to export a unified model that integrates both the encoder and the decoder. Below is the structure of my combined model:

class Model(torch.nn.Module):
    def __init__(self, encoder, decoder):
        super().__init__()
        self.encoder = encoder
        self.decoder = decoder

    def forward(self, x):
        tmp = self.encoder(x)
        return self.decoder(tmp)
Traceback (most recent call last):
  File "onnx/test_save_all.py", line 66, in <module>
    onnx_model = ort.InferenceSession("qcnet.onnx",
  File "/home/shy/mydiskBig/miniforge3/envs/onnx/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 419, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "/home/shy/mydiskBig/miniforge3/envs/onnx/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 463, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
onnxruntime.capi.onnxruntime_pybind11_state.NotImplemented: [ONNXRuntimeError] : 9 : NOT_IMPLEMENTED : Could not find an implementation for Cos(7) node with name '/encoder/Cos'

I have inspected the ONNX file in Netron and it looks fine. The relevant PyTorch code is:

import torch

polyline_inputs = torch.zeros([100, 4])
orient_pl = polyline_inputs[:, 2].contiguous()
orient_vector_pl = torch.stack([orient_pl.cos(), orient_pl.sin()], dim=-1)
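For reference, here is the same orientation-vector computation sketched in NumPy, which makes the expected shape and dtype explicit. A silent upcast to float64 somewhere in the graph is one known way to hit a "could not find an implementation" error for trigonometric ops on CPU, so the dtype is worth checking:

```python
import numpy as np

# Stand-in for the PyTorch snippet above: same computation in NumPy,
# so the expected shape and element type can be verified directly.
polyline_inputs = np.zeros((100, 4), dtype=np.float32)
orient_pl = polyline_inputs[:, 2]
orient_vector_pl = np.stack([np.cos(orient_pl), np.sin(orient_pl)], axis=-1)

print(orient_vector_pl.shape)  # (100, 2)
print(orient_vector_pl.dtype)  # float32
```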

To reproduce

(screenshot attached)

Urgency

No response

ONNX Runtime Installation

Built from Source

ONNX Runtime Version or Commit ID

1.16.3

PyTorch Version

2.1.1+cu118

Execution Provider

Default CPU

Execution Provider Library Version

cu118

@SunHaoOne SunHaoOne added the training issues related to ONNX Runtime training; typically submitted using template label Feb 5, 2024
@SunHaoOne SunHaoOne changed the title from "[Training]" to "[Training] Could not find an implementation for Cos(7) node" Feb 5, 2024
@xadupre (Member) commented Feb 5, 2024

The error means onnxruntime cannot find an implementation for operator Cos at opset 7, which is the most recent opset version for that operator. Perhaps the input type for Cos falls outside the supported list. Are you running it on CPU, CUDA, ...? Can you share the instructions you used to convert your model to ONNX?

@SunHaoOne (Author) commented

Hi @xadupre, sorry for the late reply. I have tried several different opset versions, but none of them work. I am running it on the CPU, and the instructions are below. By the way, my model has two parts, an encoder and a decoder, and the problem occurs with the cos operator in the encoder. When I change every cos to sin, the problem does not occur. The input types are all fp32.

import numpy as np
import onnx
import onnxruntime as ort
import torch
import torch.onnx

model = Model().cpu()
locs, heads, pi, x_pl_resued = model(point_inputs,
                                     polyline_inputs,
                                     edge_pl2pl,
                                     agent_inputs)

input_names = ["point_inputs", "polyline_inputs", "edge_pl2pl", "agent_inputs"]
output_names = ["locs", "heads", "pi", "x_pl"]
onnx_filename = "model.onnx"
dummy_input = (point_inputs, polyline_inputs, edge_pl2pl, agent_inputs)

torch.onnx.export(model, dummy_input, onnx_filename, verbose=False,
                  input_names=input_names, output_names=output_names,
                  export_params=True, opset_version=16,
                  keep_initializers_as_inputs=True, do_constant_folding=True)

# Reload the exported file and run it on CPU.
onnx_model = onnx.load(onnx_filename)
session = ort.InferenceSession(onnx_filename,
                               providers=["CPUExecutionProvider"])

inputs = {
    "point_inputs": point_inputs.numpy().astype(np.float32),
    "polyline_inputs": polyline_inputs.numpy().astype(np.float32),
    "edge_pl2pl": edge_pl2pl.numpy().astype(np.float32),
    "agent_inputs": agent_inputs.numpy().astype(np.float32),
}

outputs = session.run(None, inputs)
print(outputs)


