Turn shape type inference strict mode to false in optimizer (#1472)
Fix #1443 

In converter/dort, tensors retain their shapes and types from the PyTorch
model, which saves us the effort of inferring them all as we did in
torchscript. However, when it comes to symbolic shapes, we still need
ONNX shape type inference. An error is raised when the inferred shape and
type differ from the carried ones. This is rare, but it happens when a
corner case is revealed. For example, in #1443, PyTorch generates
2 outputs with size=0 when native_batch_norm is run with CUDA.

This PR turns off strict mode in ONNX shape type inference to avoid
crashing the optimizer.
titaiwangms authored and justinchuby committed May 1, 2024
1 parent 13689e2 commit e73949a
Showing 1 changed file with 5 additions and 1 deletion.
6 changes: 5 additions & 1 deletion onnxscript/optimizer/__init__.py
@@ -58,8 +58,12 @@ def optimize(
     for _ in range(num_iterations):
         if onnx_shape_inference:
             if model.ByteSize() < 1024 * 1024 * 1024 * 2:
+                # NOTE: strict mode is disabled because it crashes on the models
+                # that have different shapes inferred from the model carried shapes.
+                # The case can be found in:
+                # https://github.com/microsoft/onnxscript/issues/1443
                 model = onnx.shape_inference.infer_shapes(
-                    model, check_type=True, strict_mode=True, data_prop=True
+                    model, check_type=True, strict_mode=False, data_prop=True
                 )
             else:
                 logger.warning(
