Fix codegen failure #669
base: main
Conversation
Force-pushed from 83fded8 to 66d0502
Force-pushed from 66d0502 to ddf448c
forge/forge/tvm_to_python.py
Outdated
@@ -1239,6 +1239,8 @@ def populate_cast_args(graph, nid, compiler_cfg):
    node = graph["nodes"][nid]
    args = []
    dtype = node["attrs"]["dtype"][0][0]
    if dtype == "uint1":
Why can't we use uint1, and why do we have to use bool precisely? Those are the same, but why true/false vs 1/0?
Can we do this in TVM? AFAIK there is logic that converts dtypes from PyTorch to Forge dtypes (and vice versa). It seems like that would be the proper place to do so.
It has been changed that way because uint1 is not supported and it was throwing the error AttributeError: module 'torch' has no attribute 'uint1'.
I will check it out and let you know about the TVM part.
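For context, a minimal sketch of the kind of dtype-name mapping being discussed (the helper name is hypothetical and not part of this PR; only the uint1-to-bool case comes from the change itself):

```python
def map_tvm_dtype_name(dtype: str) -> str:
    # TVM reports the cast target dtype as a string. torch has no
    # torch.uint1 attribute, so "uint1" is mapped to "bool"; any other
    # name passes through unchanged and can later be resolved with
    # getattr(torch, name), e.g. "float32" -> torch.float32.
    return "bool" if dtype == "uint1" else dtype
```

With this mapping, a cast to "uint1" resolves to torch.bool instead of raising AttributeError on the nonexistent torch.uint1.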
@meenakshiramanathan1 can we please have a proper overview of the changes in the PR description and how they solve the issue? Also, the issue contains only a repro and the error message but fails to explain the cause of the issue.
It has been updated now. Please let me know if any other details are required.
Thanks for providing a detailed explanation. It really speeds up the review process.
if axis == -1:
    output_shape.append(len(ops))
elif axis < 0:
    output_shape.insert(axis + 1, len(ops))
Can we put elif axis < -1 just to be precise? Also, can you add one line of comment explaining why we use + 1 to handle inserting the new axis? This is just to help someone who lands in this part of the code and doesn't have the context...
Sure @dgolubovicTT , it has been changed now.
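For illustration, a minimal sketch of the negative-axis handling under discussion (the function name is hypothetical; only the insert-at-axis + 1 pattern comes from this PR):

```python
def stack_output_shape(input_shape, num_inputs, axis):
    # Stacking num_inputs tensors adds a new dimension of that size at `axis`.
    out = list(input_shape)
    if axis == -1:
        out.append(num_inputs)  # new axis goes at the very end
    elif axis < -1:
        # list.insert with a negative index inserts *before* that position,
        # counting from the end; +1 shifts the insertion point so the new
        # axis lands at the intended negative index.
        out.insert(axis + 1, num_inputs)
    else:
        out.insert(axis, num_inputs)
    return out
```

For example, stacking 3 tensors of shape (2, 4) along axis -1 yields (2, 4, 3) and along axis -2 yields (2, 3, 4), matching torch.stack's behavior for negative dims.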
if axis == -1:
    inp_shape.append(1)
elif axis < 0:
    inp_shape.insert(axis + 1, 1)
same as above...
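The same +1 adjustment applies when unsqueezing each input ahead of the concatenation that stack decomposes into; a hedged sketch (the function name is hypothetical):

```python
def unsqueeze_shape(shape, axis):
    # Insert a singleton dimension at `axis`, mirroring torch.unsqueeze.
    out = list(shape)
    if axis == -1:
        out.append(1)
    elif axis < -1:
        out.insert(axis + 1, 1)  # +1 so the new dim lands at the negative index
    else:
        out.insert(axis, 1)
    return out
```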
Force-pushed from ddf448c to 62e0def
Force-pushed from 62e0def to d84014c
Fix #506
This PR fixes the issue
AssertionError: Setting a tensor value of incorrect shape: (1, 256, 16, 2, 16) vs torch.Size([1, 256, 16, 16, 2])
which was caused by a mismatch between the output shapes of the shape and eval functions of the stack op. More precisely, the shape function's output was wrong because the insertion of the new axis was not being done correctly.
To ensure insertion at the right axis, the axis handling has been changed in the shape and decompose functions of the stack op in the eltwise_nary file.
After these changes, it was failing with
AttributeError: module 'torch' has no attribute 'uint1'
during evaluation of the cast op, for which uint1 has been mapped to the bool dtype.
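To illustrate the mismatch: stacking two tensors of shape (1, 256, 16, 16) along axis -1 should yield (1, 256, 16, 16, 2), but a plain list.insert at a negative axis inserts before the last element. A small reproduction (the shapes come from the error above; the rest is illustrative):

```python
# Buggy pattern: insert at the raw negative axis.
buggy = [1, 256, 16, 16]
buggy.insert(-1, 2)  # list.insert(-1, ...) inserts *before* the last dim
assert buggy == [1, 256, 16, 2, 16]  # wrong: the shape from the AssertionError

# Fixed pattern: append for -1, otherwise insert at axis + 1.
fixed = [1, 256, 16, 16]
axis = -1
if axis == -1:
    fixed.append(2)  # new axis at the end, as torch.stack(dim=-1) produces
elif axis < -1:
    fixed.insert(axis + 1, 2)
assert fixed == [1, 256, 16, 16, 2]  # matches torch.stack's output shape
```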