Pow inference Error #19299

Open

oneflyingfish opened this issue Jan 29, 2024 · 2 comments
Labels: ep:CUDA (issues related to the CUDA execution provider), stale (issues that have not been addressed in a while; categorized by a bot)

Comments

@oneflyingfish

Describe the issue

We found an inference error in onnxruntime with the ONNX Pow operator: Pow(3, 1) returns 2, while the expected result is 3.

To reproduce

import numpy as np
import onnx
import onnxruntime as ort
from onnx import helper, TensorProto


def GetOnnx(shape=[1, 1, 64, 64], save: bool = False) -> onnx.ModelProto:
    # build a test ONNX model with a single Pow node of the given shape
    input1_tensor = helper.make_tensor_value_info(
        'input1_', TensorProto.INT32, shape)
    input2_tensor = helper.make_tensor_value_info(
        'input2_', TensorProto.INT32, shape)
    output_tensor = helper.make_tensor_value_info(
        'output', TensorProto.INT32, shape)

    # create the Pow node: output = input1_ ** input2_ (element-wise)
    pow_node = helper.make_node(
        'Pow', inputs=['input1_', 'input2_'], outputs=['output'],
        name='pow_node')

    # create Graph
    graph_def = helper.make_graph(
        [pow_node],
        'pow_graph',
        [input1_tensor, input2_tensor],
        [output_tensor]
    )

    # create the model and pin the IR/opset versions
    model_def = helper.make_model(
        graph_def, producer_name='pow_model', ir_version=8,
        opset_imports=[helper.make_opsetid("", 14)])

    model_def = onnx.shape_inference.infer_shapes(model_def)
    if save:
        onnx.save(model_def, "power.onnx")

    return model_def


def test():
    shape = [6]
    model_def = GetOnnx(shape, True)
    onnx_session = ort.InferenceSession(
        "power.onnx", providers=['CUDAExecutionProvider'])

    # create the input data
    input1 = np.ones(shape, np.int32)*3
    input2 = np.ones(shape, np.int32)

    # run inference
    output = onnx_session.run(
        None, {'input1_': input1, 'input2_': input2})

    print('Input1:', input1)
    print('Input2:', input2)
    print('output', output[0])


test()
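
For reference, a quick cross-check against the CPU execution provider (a minimal sketch, assuming the script above has already saved power.onnx, and treating the CPU kernel and numpy as the correct baseline):

import numpy as np
import onnxruntime as ort

shape = [6]
input1 = np.ones(shape, np.int32) * 3
input2 = np.ones(shape, np.int32)

# run the same saved model on the CPU EP and compare with numpy's result
cpu_session = ort.InferenceSession(
    "power.onnx", providers=['CPUExecutionProvider'])
cpu_output = cpu_session.run(
    None, {'input1_': input1, 'input2_': input2})[0]

print('numpy reference:', np.power(input1, input2))
print('CPU EP output:', cpu_output)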

Urgency

No response

Platform

Linux

OS Version

Ubuntu 22.04

ONNX Runtime Installation

Released Package

ONNX Runtime Version or Commit ID

1.12.0

ONNX Runtime API

Python

Architecture

X64

Execution Provider

CUDA

Execution Provider Library Version

11.8

github-actions bot added the ep:CUDA (issues related to the CUDA execution provider) label on Jan 29, 2024
@yufenglee (Member)

It is easy to repro @pranavsharma.
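
A variant that may help narrow this down (a sketch, not a confirmed result: it rebuilds the same single-node Pow graph with FLOAT tensors to check whether the wrong answer is specific to the integer Pow kernel on the CUDA EP):

import numpy as np
import onnxruntime as ort
from onnx import helper, TensorProto

# same Pow graph as in the repro, but with float32 inputs/output
shape = [6]
inputs = [helper.make_tensor_value_info(name, TensorProto.FLOAT, shape)
          for name in ('input1_', 'input2_')]
output = helper.make_tensor_value_info('output', TensorProto.FLOAT, shape)
pow_node = helper.make_node('Pow', ['input1_', 'input2_'], ['output'])
graph = helper.make_graph([pow_node], 'pow_graph_f32', inputs, [output])
model = helper.make_model(
    graph, ir_version=8, opset_imports=[helper.make_opsetid("", 14)])

sess = ort.InferenceSession(
    model.SerializeToString(), providers=['CUDAExecutionProvider'])
x = np.full(shape, 3.0, dtype=np.float32)
y = np.ones(shape, dtype=np.float32)
print(sess.run(None, {'input1_': x, 'input2_': y})[0])  # expected: all 3.0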

github-actions bot (Contributor)

This issue has been automatically marked as stale due to inactivity and will be closed in 30 days if no further activity occurs. If further support is needed, please provide an update and/or more details.

github-actions bot added the stale (issues that have not been addressed in a while; categorized by a bot) label on Feb 29, 2024