
shape_inference.quant_pre_process causes AttributeError: module 'onnx.helper' has no attribute 'make_sequence_value_info' #19323

Open
grazder opened this issue Jan 30, 2024 · 3 comments
Labels
quantization issues related to quantization

Comments


grazder commented Jan 30, 2024

Describe the issue

I found an incompatibility between onnx and onnxruntime when calling shape_inference.quant_pre_process.

It reproduces when the model uses torch.split.

I get the following error:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-6-cf84b0ccbcba> in <cell line: 25>()
     23 
     24 
---> 25 shape_inference.quant_pre_process(
     26     'model.onnx', 'model.onnx', skip_symbolic_shape=False
     27 )

/usr/local/lib/python3.10/dist-packages/onnxruntime/quantization/shape_inference.py in quant_pre_process(input_model_path, output_model_path, skip_optimization, skip_onnx_shape, skip_symbolic_shape, auto_merge, int_max, guess_output_rank, verbose, save_as_external_data, all_tensors_to_one_file, external_data_location, external_data_size_threshold)
     69         if not skip_symbolic_shape:
     70             logger.info("Performing symbolic shape inference...")
---> 71             model = SymbolicShapeInference.infer_shapes(
     72                 onnx.load(input_model_path),
     73                 int_max,

/usr/local/lib/python3.10/dist-packages/onnxruntime/tools/symbolic_shape_infer.py in infer_shapes(in_mp, int_max, auto_merge, guess_output_rank, verbose)
   2816         symbolic_shape_inference._preprocess(in_mp)
   2817         while symbolic_shape_inference.run_:
-> 2818             all_shapes_inferred = symbolic_shape_inference._infer_impl()
   2819         symbolic_shape_inference._update_output_from_vi()
   2820         if not all_shapes_inferred:

/usr/local/lib/python3.10/dist-packages/onnxruntime/tools/symbolic_shape_infer.py in _infer_impl(self, start_sympy_data)
   2566             known_aten_op = False
   2567             if node.op_type in self.dispatcher_:
-> 2568                 self.dispatcher_[node.op_type](node)
   2569             elif node.op_type in ["ConvTranspose"]:
   2570                 # onnx shape inference ops like ConvTranspose may have empty shape for symbolic input

/usr/local/lib/python3.10/dist-packages/onnxruntime/tools/symbolic_shape_infer.py in _infer_SplitToSequence(self, node)
   1923 
   1924     def _infer_SplitToSequence(self, node):  # noqa: N802
-> 1925         self._infer_Split_Common(node, helper.make_sequence_value_info)
   1926 
   1927     def _infer_Squeeze(self, node):  # noqa: N802

AttributeError: module 'onnx.helper' has no attribute 'make_sequence_value_info'

A related report is onnx/tensorflow-onnx#1623.
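
As a quick check (this snippet is my addition, not part of the traceback), you can confirm which helper names the installed onnx exposes:

import onnx.helper

# On onnx 1.15.0 the old alias is gone, but the renamed helper is present:
print(hasattr(onnx.helper, "make_sequence_value_info"))         # False
print(hasattr(onnx.helper, "make_tensor_sequence_value_info"))  # True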

To reproduce

I reproduced it both in Colab and on my local machine:

import torch
import torch.nn as nn

from onnxruntime.quantization import quantize_dynamic, QuantType, shape_inference


class RandomModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.w = nn.Parameter(torch.ones(1, dtype=torch.float32))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # torch.split is what triggers the failing code path
        x_list = torch.split(x, 2)
        x_cat = torch.cat(x_list)
        a = self.w * x_cat
        return a


model = RandomModel()

# Scripting (rather than tracing) keeps the dynamic split in the graph
model = torch.jit.script(model)

torch.onnx.export(
    model,
    (torch.ones(100, dtype=torch.float32)),
    'model.onnx',
    verbose=True,
    input_names=['x'],
    output_names=['out'],
    dynamic_axes={"x": {0: "length"}},
)

shape_inference.quant_pre_process(
    'model.onnx', 'model.onnx', skip_symbolic_shape=False
)
quantize_dynamic('model.onnx', 'model.onnx', weight_type=QuantType.QUInt8)

Running this causes the error above.
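
As a sanity check (this inspection snippet is my addition), you can confirm the exported graph contains the SplitToSequence node whose shape-inference handler calls the missing helper:

import onnx

# List the op types in the exported graph; the scripted torch.split is
# expected to show up as SplitToSequence (see the _infer_SplitToSequence
# frame in the traceback above).
m = onnx.load('model.onnx')
print([n.op_type for n in m.graph.node])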

onnx                             1.15.0
onnxruntime                      1.16.3

Without the split, the error does not occur.

Urgency

No response

Platform

Linux

OS Version

Ubuntu 22.04.3 LTS

ONNX Runtime Installation

Released Package

ONNX Runtime Version or Commit ID

1.16.3

ONNX Runtime API

Python

Architecture

X64

Execution Provider

Default CPU

Execution Provider Library Version

No response

@github-actions github-actions bot added the quantization issues related to quantization label Jan 30, 2024

grazder commented Jan 30, 2024

There is a simple workaround:

onnx.helper.make_sequence_value_info = onnx.helper.make_tensor_sequence_value_info
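
Expanded into a runnable sketch (the hasattr guard is my addition, so the patch is a no-op on onnx versions that still have the old name):

import onnx.helper
from onnxruntime.quantization import shape_inference

# Alias the helper removed from onnx.helper to its current name before
# symbolic shape inference runs; both build a sequence ValueInfoProto.
if not hasattr(onnx.helper, "make_sequence_value_info"):
    onnx.helper.make_sequence_value_info = onnx.helper.make_tensor_sequence_value_info

shape_inference.quant_pre_process(
    'model.onnx', 'model.onnx', skip_symbolic_shape=False
)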

github-actions bot commented Feb 29, 2024

This issue has been automatically marked as stale due to inactivity and will be closed in 30 days if no further activity occurs. If further support is needed, please provide an update and/or more details.

@github-actions github-actions bot added the stale issues that have not been addressed in a while; categorized by a bot label Feb 29, 2024

grazder commented Mar 1, 2024

bump

@github-actions github-actions bot removed the stale issues that have not been addressed in a while; categorized by a bot label Mar 1, 2024