
TArray used for broadcast was limited to be within range [0, 8] on onnxruntime 1.16.3 #21254

Open
CapJunkrat opened this issue Jul 4, 2024 · 3 comments

Comments

@CapJunkrat

Describe the issue

The TArray used for broadcasting is limited to the range [0, 8] in onnxruntime 1.16.3. Can a tensor being broadcast have more than 8 dimensions, or has this limit been removed in a newer onnxruntime version?

To reproduce

Code to generate the ONNX model:

import torch
import torch.onnx

y = torch.randn(1,1,3,1,1,5,5)

class MinModel(torch.nn.Module):
    def forward(self, x):
        return torch.min(x, y)  # Elementwise minimum of x and y; y broadcasts against x

# Initialize the model
model = MinModel()

# Create a dummy input with 12 dimensions (all extra dimensions are size 1)
dummy_input = torch.randn(1,1,3,1,1,1,1,1,1,1,5,5)

# Export the model to ONNX format
onnx_path = "min_model.onnx"
torch.onnx.export(
    model,
    dummy_input,
    onnx_path,
    input_names=["inputs"],
    output_names=["outputs"],
    opset_version=12
)

print(f"Model has been exported to {onnx_path}")

Running inference on this ONNX model produces the following error:

[E:onnxruntime:, sequential_executor.cc:514 ExecuteKernel] Non-zero status code returned while running Min node. Name:'Min_2' Status Message: /onnxruntime_src/onnxruntime/core/providers/cuda/shared_inc/cuda_utils.h:84 void onnxruntime::cuda::TArray<T, capacity>::SetSize(int32_t) [with T = long int; int capacity = 8; int32_t = int] 0 <= size && size <= capacity was false. TArray size must be within range [0, 8]. Actual: 10
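
For reference, a minimal inference snippet that triggers this error (my assumption of how the model was run, using onnxruntime-gpu with the CUDA execution provider):

import numpy as np
import onnxruntime as ort

# Load the exported model on the CUDA execution provider
sess = ort.InferenceSession("min_model.onnx", providers=["CUDAExecutionProvider"])

# Input matching the 12-dim shape used at export time
x = np.random.randn(1, 1, 3, 1, 1, 1, 1, 1, 1, 1, 5, 5).astype(np.float32)
outputs = sess.run(["outputs"], {"inputs": x})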

Urgency

No response

Platform

Linux

OS Version

CentOS 7

ONNX Runtime Installation

Released Package

ONNX Runtime Version or Commit ID

1.16.3

ONNX Runtime API

Python

Architecture

X64

Execution Provider

CUDA

Execution Provider Library Version

CUDA 11.8

@github-actions github-actions bot added the ep:CUDA (issues related to the CUDA execution provider) label on Jul 4, 2024
@xadupre
Member

xadupre commented Jul 5, 2024

It is a converter issue. The exporter does not seem to support more than 8 dimensions for a tensor. Do you need 10?

@xadupre added the converter (related to ONNX converters) label on Jul 5, 2024
@CapJunkrat
Author

> The exporter does not seem to support more than 8 dimensions for a tensor

Actually, I did not receive any error during export. I got that error when I tried to run inference on the exported ONNX model with onnxruntime-gpu. It seems the TArray size is limited to at most 8.
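
For what it's worth, a possible workaround (a sketch under my assumption that all the extra dimensions are size 1, not something confirmed in this thread) is to reshape the input inside the model so the exported Min node broadcasts over at most 8 dimensions:

import torch

y = torch.randn(1, 1, 3, 1, 1, 5, 5)

class MinModelLowRank(torch.nn.Module):
    def forward(self, x):
        # Every extra dimension of x is size 1, so reshaping x to y's 7-dim
        # shape is lossless and keeps the broadcast rank within the CUDA
        # execution provider's TArray capacity of 8.
        return torch.min(x.reshape(y.shape), y)

dummy_input = torch.randn(1, 1, 3, 1, 1, 1, 1, 1, 1, 1, 5, 5)
torch.onnx.export(
    MinModelLowRank(),
    dummy_input,
    "min_model_lowrank.onnx",
    input_names=["inputs"],
    output_names=["outputs"],
    opset_version=12,
)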

@sophies927 removed the ep:CUDA (issues related to the CUDA execution provider) label on Jul 11, 2024
@gramalingam
Contributor

> It is a converter issue. The exporter does not seem to support more than 8 dimensions for a tensor. Do you need 10?

The error message is from onnxruntime ... why do you say it is a converter issue?

@titaiwangms removed the converter (related to ONNX converters) label on Aug 2, 2024