
Op (Reshape) [ShapeInferenceError] Target shape may not have multiple -1 dimensions. #18969

Closed
Arthur-Lee opened this issue Jan 1, 2024 · 2 comments
Labels
ep:CUDA (issues related to the CUDA execution provider), platform:windows (issues related to the Windows platform)

Comments

@Arthur-Lee

Describe the issue

The original ONNX model doesn't support a dynamic batch size, so I found a way to add dynamic batch support. However, the converted model produces this error:

Traceback (most recent call last):
File "c:\Users\Arthu\Codes\AzureDevOps\ModelServing\sources\kserve\face-recognition\face-recognition\test_model.py", line 52, in
test_model()
File "c:\Users\Arthu\Codes\AzureDevOps\ModelServing\sources\kserve\face-recognition\face-recognition\test_model.py", line 32, in test_model
session = ort.InferenceSession(model_files[0], providers=providers)
File "C:\Users\Arthu\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\Local\pypoetry\Cache\virtualenvs\kserve-46fEio5B-py3.11\Lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 419, in init
self._create_inference_session(providers, provider_options, disabled_optimizers)
File "C:\Users\Arthu\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\Local\pypoetry\Cache\virtualenvs\kserve-46fEio5B-py3.11\Lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 472, in _create_inference_session
sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from C:\Users\Arthu\Codes\AzureDevOps\ModelServing\sources\models\face-recognition\facenet-casia-webface\facenet_casia-webface_resnet.onnx failed:Node (Reshape_358) Op (Reshape) [ShapeInferenceError] Target shape may not have multiple -1 dimensions.

To reproduce

The original onnx model: https://drive.google.com/file/d/1IcR-_4d21s3gjgd5jmOhwBlK_VimYpdx/view?usp=sharing

Tried way 1:

  1. Followed the link Changing Batch Size onnx/onnx#2182 (comment) to add dynamic batch support (roughly as sketched below).
  2. Loaded the converted ONNX model and ran inference; the error above was observed.
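For reference, the edit from that comment amounts to making the first dimension of the graph inputs/outputs symbolic. A minimal sketch of that step, where the file names and the dim name "batch" are placeholders (note that this does not touch hard-coded Reshape target shapes inside the graph):

```python
import onnx

# Load the exported model (file name is an assumption).
model = onnx.load("facenet_casia-webface_resnet.onnx")

# Replace the first dimension of every graph input and output with a symbolic
# name so the model accepts an arbitrary batch size.
for tensor in list(model.graph.input) + list(model.graph.output):
    tensor.type.tensor_type.shape.dim[0].dim_param = "batch"

onnx.checker.check_model(model)
onnx.save(model, "facenet_casia-webface_resnet_dynamic.onnx")
```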

Tried way 2:

  1. Followed https://github.com/onnx/onnx/blob/main/docs/PythonAPIOverview.md#converting-version-of-an-onnx-model-within-default-domain-aionnx to convert the model to opset version 20 (roughly as sketched below).
  2. The converted model supports dynamic batch, but loading it and running inference produces the same error.
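For reference, the opset conversion step looks roughly like this (file names are placeholders):

```python
import onnx
from onnx import version_converter

# Load the model and convert it to opset 20.
model = onnx.load("facenet_casia-webface_resnet.onnx")
converted = version_converter.convert_version(model, 20)
onnx.save(converted, "facenet_casia-webface_resnet_opset20.onnx")
```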

Urgency

No response

Platform

Windows

OS Version

Version 23H2

ONNX Runtime Installation

Released Package

ONNX Runtime Version or Commit ID

1.17.0

ONNX Runtime API

Python

Architecture

X64

Execution Provider

CUDA

Execution Provider Library Version

No response

@github-actions github-actions bot added the ep:CUDA and platform:windows labels on Jan 1, 2024
@Arthur-Lee
Author

I just found the root cause: the target shape in OnnxReshape is (1, -1). I changed it to (-1, ftr_dim) and the issue has been resolved.
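In case it helps others: one way to apply this kind of fix directly to the exported ONNX file is to rewrite the Reshape node's target-shape initializer. A rough sketch, assuming the node name from the error message (Reshape_358), a hypothetical ftr_dim of 512, and placeholder file names:

```python
import numpy as np
import onnx
from onnx import numpy_helper

model = onnx.load("facenet_casia-webface_resnet_dynamic.onnx")  # placeholder file name
ftr_dim = 512  # hypothetical value; use the model's actual feature dimension

# The Reshape target shape is the node's second input, stored as an initializer.
reshape_node = next(n for n in model.graph.node if n.name == "Reshape_358")
shape_name = reshape_node.input[1]

for init in model.graph.initializer:
    if init.name == shape_name:
        # Replace the existing target shape with (-1, ftr_dim) so that only one
        # dimension has to be inferred.
        new_shape = numpy_helper.from_array(
            np.array([-1, ftr_dim], dtype=np.int64), name=init.name
        )
        init.CopyFrom(new_shape)
        break

onnx.checker.check_model(model)
onnx.save(model, "facenet_casia-webface_resnet_fixed.onnx")
```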

@skottmckay
Contributor

FWIW this is documented in the ONNX spec. https://github.com/onnx/onnx/blob/main/docs/Operators.md#Reshape

You can't arbitrarily change another dimension of the target shape to be -1 if it already has a -1 in it, as the -1 is essentially a wildcard where the value is inferred to make the number of elements in the output shape match the input shape. That's not possible if there are multiple -1's - e.g. if there are 2 x -1 entries and the inferred value is 3 there's no defined way to split the 3.
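For a concrete illustration, NumPy's reshape enforces the same single-wildcard rule:

```python
import numpy as np

x = np.zeros((4, 512))

x.reshape(-1, 512)  # OK: the single -1 is inferred as 4
x.reshape(-1, -1)   # ValueError: can only specify one unknown dimension
```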
