Describe the issue
The TArray used for broadcasting is limited to the range [0, 8] in onnxruntime 1.16.3. Can a tensor being broadcast have more than 8 dimensions, or is this limit removed in a later onnxruntime version?
To reproduce
Code to generate the ONNX model:
import torch
import torch.onnx

# Constant operand with 7 dimensions; it is broadcast against the 12-dim input
y = torch.randn(1, 1, 3, 1, 1, 5, 5)

class MinModel(torch.nn.Module):
    def forward(self, x):
        return torch.min(x, y)  # elementwise min with broadcasting

# Initialize the model
model = MinModel()

# Create dummy input data with shape (1, 1, 3, 1, 1, 1, 1, 1, 1, 1, 5, 5)
dummy_input = torch.randn(1, 1, 3, 1, 1, 1, 1, 1, 1, 1, 5, 5)

# Export the model to ONNX format
onnx_path = "min_model.onnx"
torch.onnx.export(
    model,
    dummy_input,
    onnx_path,
    input_names=["inputs"],
    output_names=["outputs"],
    opset_version=12,
)
print(f"Model has been exported to {onnx_path}")
Running inference on this ONNX model produces the following error:
[E:onnxruntime:, sequential_executor.cc:514 ExecuteKernel] Non-zero status code returned while running Min node. Name:'Min_2' Status Message: /onnxruntime_src/onnxruntime/core/providers/cuda/shared_inc/cuda_utils.h:84 void onnxruntime::cuda::TArray<T, capacity>::SetSize(int32_t) [with T = long int; int capacity = 8; int32_t = int] 0 <= size && size <= capacity was false. TArray size must be within range [0, 8]. Actual: 10
Urgency
No response
Platform
Linux
OS Version
centos 7
ONNX Runtime Installation
Released Package
ONNX Runtime Version or Commit ID
1.16.3
ONNX Runtime API
Python
Architecture
X64
Execution Provider
CUDA
Execution Provider Library Version
cuda 11.8
The exporter does not seem to support more than 8 dimensions for a tensor.
Actually, I did not receive any error from the export process. I got that error when I tried to run inference on the exported ONNX model with onnxruntime-gpu. It seems there is a TArray size limit of 8 dimensions.
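If the CUDA execution provider still enforces the 8-dimension TArray limit, one possible workaround is to reduce the broadcast rank before export: squeeze every axis that is size 1 in both operands (after right-aligning the shapes, NumPy/ONNX style), run the elementwise op on the lower-rank tensors, and reshape the result back. A minimal sketch, assuming the error is triggered purely by rank; the helper name broadcast_min_low_rank is hypothetical, and the limit of 8 is taken from the error message above:

```python
import torch

def broadcast_min_low_rank(x, y, max_rank=8):
    """Elementwise min whose broadcast rank stays <= max_rank by
    squeezing axes that are size 1 in BOTH operands (hypothetical helper)."""
    rank = max(x.dim(), y.dim())
    # Left-pad both shapes to the common rank, NumPy/ONNX broadcast style.
    xs = [1] * (rank - x.dim()) + list(x.shape)
    ys = [1] * (rank - y.dim()) + list(y.shape)
    out_shape = [max(a, b) for a, b in zip(xs, ys)]
    # Keep only axes where at least one operand is larger than 1;
    # axes that are 1 in both contribute nothing to the broadcast.
    keep = [i for i in range(rank) if xs[i] > 1 or ys[i] > 1]
    if len(keep) > max_rank:
        raise ValueError("cannot reduce broadcast rank below the limit")
    x_small = x.reshape([xs[i] for i in keep])
    y_small = y.reshape([ys[i] for i in keep])
    # Compute at reduced rank, then restore the full broadcast shape.
    return torch.min(x_small, y_small).reshape(out_shape)

# Shapes from the repro above: rank-12 input vs rank-7 constant.
x = torch.randn(1, 1, 3, 1, 1, 1, 1, 1, 1, 1, 5, 5)
y = torch.randn(1, 1, 3, 1, 1, 5, 5)
out = broadcast_min_low_rank(x, y)  # computed at rank 4 internally
```

For these shapes only axes with sizes (3, 3, 5, 5) survive, so the op runs at rank 4 instead of 12, and the final reshape reinserts the singleton axes so the result matches torch.min(x, y) exactly.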