Skip full model shape inference if model > 2GB | feat(optimizer) (#1340)
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at
bottom):
* #1334
* __->__ #1340
BowenBao authored Apr 5, 2024
1 parent 2c74be7 commit ce8f459
Showing 1 changed file with 9 additions and 3 deletions: onnxscript/optimizer/__init__.py
```diff
@@ -56,9 +56,15 @@ def optimize(
     )
     for _ in range(num_iterations):
         if onnx_shape_inference:
-            model = onnx.shape_inference.infer_shapes(
-                model, check_type=True, strict_mode=True, data_prop=True
-            )
+            if model.ByteSize() < 1024 * 1024 * 1024 * 2:
+                model = onnx.shape_inference.infer_shapes(
+                    model, check_type=True, strict_mode=True, data_prop=True
+                )
+            else:
+                logger.warning(
+                    "The model size is too large for full model shape inference. "
+                    "Skipping this step."
+                )

         inline_simple_functions(model)
         modified = fold_constants(
```
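The guard in this change can be sketched as a standalone helper. The motivation is that protobuf cannot serialize messages of 2 GiB or more, and `onnx.shape_inference.infer_shapes` works on the serialized in-memory model, so it fails on models past that limit. The helper name below (`can_run_full_shape_inference`) is hypothetical, not part of the onnxscript API; it only mirrors the size check and warning added in the diff:

```python
import logging

logger = logging.getLogger(__name__)

# Protobuf refuses to serialize messages of 2 GiB or more, so
# full-model shape inference is only attempted below that threshold.
_TWO_GIB = 1024 * 1024 * 1024 * 2

def can_run_full_shape_inference(model_byte_size: int) -> bool:
    """Hypothetical helper mirroring the guard in this commit.

    `model_byte_size` would come from `model.ByteSize()` on an
    `onnx.ModelProto`.
    """
    if model_byte_size < _TWO_GIB:
        return True
    logger.warning(
        "The model size is too large for full model shape inference. "
        "Skipping this step."
    )
    return False
```

A caller would wrap the inference call itself, e.g. `if can_run_full_shape_inference(model.ByteSize()): model = onnx.shape_inference.infer_shapes(...)`, leaving the model untouched otherwise.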
