[ShapeInferenceError] Inferred shape and existing shape differ in dimension 0 #1467
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

@@ Coverage Diff @@
##              main    #1467      +/-  ##
==========================================
+ Coverage    76.84%   76.88%   +0.04%
==========================================
  Files          209      209
  Lines        22461    22461
  Branches      3811     3811
==========================================
+ Hits         17260    17270     +10
+ Misses        4482     4473      -9
+ Partials       719      718      -1

View full report in Codecov by Sentry.
nit: I wonder if it would help to make the tests conditional on the presence of torch (for managing the CI matrix).
Right. So far, I have not fixed onnx-script; I only created the unit test, which fails.
We always have torch so I think we are ok.
import torch
import torch.nn as nn
import torch.onnx
from onnx.inliner import inline_local_functions
Suggested change:
- from onnx.inliner import inline_local_functions
+ import onnx.inliner
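For what it's worth, the style the reviewer suggests (importing the module rather than the name) keeps call sites explicit about where a function comes from. A quick stdlib analogue, since both forms behave identically apart from the spelling at the call site:

```python
# Module-qualified call: the style suggested for onnx.inliner.
import json

data = json.loads('{"a": 1}')

# Importing the name directly: shorter, but the origin of `loads`
# is no longer visible at the call site.
from json import loads

same_data = loads('{"a": 1}')

assert data == same_data == {"a": 1}
```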
def inline_function(*args, **kwargs):  # pylint: disable=unused-argument
    first_model_proto = args[0]

    next_model = inline_local_functions(first_model_proto)

    del first_model_proto.graph.node[:]
    del first_model_proto.functions[:]
    del first_model_proto.graph.initializer[:]
    del first_model_proto.opset_import[:]
    first_model_proto.graph.node.extend(next_model.graph.node)
    first_model_proto.functions.extend(next_model.functions)
    first_model_proto.graph.initializer.extend(next_model.graph.initializer)
    first_model_proto.opset_import.extend(next_model.opset_import)

    return first_model_proto
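One detail worth noting about the code above: the `del field[:]` / `extend` idiom mutates the caller's proto in place, which matters because `args[0]` is the object the caller still holds; simply rebinding the local name would not propagate. A minimal, onnx-free sketch of that pattern, using a plain list to stand in for a repeated proto field:

```python
# `holder` stands in for a repeated field of the caller's ModelProto:
# its contents must be replaced in place so the caller sees the update.
def replace_contents(holder: list, new_contents: list) -> list:
    del holder[:]               # clear in place; `holder = []` would only rebind the local name
    holder.extend(new_contents)
    return holder

caller_view = [1, 2, 3]
replace_contents(caller_view, [4, 5])
assert caller_view == [4, 5]    # the caller observes the new contents
```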
Avoid nested declarations of functions if it doesn't close over local variables?
# torch.library.impl_abstract was renamed to torch.library.register_fake.
warnings.simplefilter("ignore", (UserWarning, DeprecationWarning))
def assertONNX( |
I would consider a more descriptive name. Does it assert that the output of the ONNX model matches the torch output, for example? Also, use snake_case for all function names.
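To make the suggestion concrete, here is a hypothetical rename along those lines (the name `assert_onnx_outputs_match` and the tolerance-based comparison are illustrative, not taken from the PR):

```python
import math

# Hypothetical snake_case rename that says what is asserted: the ONNX
# outputs agree element-wise with the torch outputs within tolerance.
def assert_onnx_outputs_match(expected, actual, rtol=1e-5, atol=1e-8):
    assert len(expected) == len(actual), "output count mismatch"
    for e, a in zip(expected, actual):
        assert math.isclose(e, a, rel_tol=rtol, abs_tol=atol), f"{e} != {a}"

assert_onnx_outputs_match([1.0, 2.0], [1.0, 2.0000001])  # passes within tolerance
```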
def make_aot_ort(
    dynamic: bool = False,
    ort_optimization_level: Optional[str] = None,
) -> tuple:
Is the return type accurate?
)

baseline_result = model(*args)
result = compiled_model(*args)
@xadupre I tried to put everything on CUDA and run the code, but even though compiled_model is on CUDA, this result will not be on CUDA. Do you know why?
Creates a unit test that runs DORT and reproduces the [ShapeInferenceError] above.
Can be used to replicate issue #1443.