GroupNorm Sharded support #4945
Conversation
yugaoTT commented on Jan 25, 2024
- height and block sharding
- row major and tile layouts
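For context, the math the sharded op implements is standard group normalization: channels are split into groups, and each group is normalized with its own mean and variance. A minimal NumPy reference sketch (function name, shapes, and the `eps` default are illustrative, not the op's actual API):

```python
import numpy as np

def group_norm_ref(x, num_groups, eps=1e-5):
    """Reference GroupNorm over an [N, C, H, W] tensor.

    Channels are split into `num_groups` groups; each group is
    normalized with its own mean and variance.
    """
    n, c, h, w = x.shape
    assert c % num_groups == 0
    # Reshape so each group's channels share one reduction axis set.
    g = x.reshape(n, num_groups, c // num_groups, h, w)
    mean = g.mean(axis=(2, 3, 4), keepdims=True)
    var = g.var(axis=(2, 3, 4), keepdims=True)
    out = (g - mean) / np.sqrt(var + eps)
    return out.reshape(n, c, h, w)

np.random.seed(0)
x = np.random.randn(2, 8, 4, 4).astype(np.float32)
y = group_norm_ref(x, num_groups=4)
```

The sharded variants in this PR distribute this computation across cores (height or block sharding) rather than changing the math.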
tt_eager/tt_dnn/op_library/groupnorm/kernels/dataflow/writer_unary_sharded_gn.cpp
Force-pushed from cae0ead to a6c3dbc
struct GroupNormShardedMultiCoreProgramConfig {
    CoreCoord compute_with_storage_grid_size;
    MathFidelity math_fidelity;
    DataType im_data_format;
Does `im_data_format` mean `image_data_format`, or is this a typo for `in_data_format`?
It is the intermediate data format, used for the intermediate CBs (circular buffers).
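To illustrate why an intermediate format matters: a reduction (such as the mean/variance accumulation in GroupNorm) accumulated in a narrow floating-point format can lose, or even stop, accumulating precision. A NumPy sketch of the general effect, not the actual kernel code:

```python
import numpy as np

# Accumulating 4096 ones in float16: once the running sum reaches
# 2048, the gap between adjacent float16 values is 2.0, so adding
# 1.0 lands exactly halfway and rounds back down (ties-to-even).
# The sum stalls at 2048.
acc16 = np.float16(0.0)
for _ in range(4096):
    acc16 = np.float16(acc16 + np.float16(1.0))

# The same reduction with a float32 intermediate is exact.
acc32 = np.float32(0.0)
for _ in range(4096):
    acc32 = np.float32(acc32 + np.float32(1.0))

print(acc16, acc32)  # 2048.0 4096.0
```

Keeping the intermediate CBs in a wider format than the inputs avoids this class of error in the normalization statistics.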
LGTM! Minor comments.
Could you also add testing for batch = 2?
    function_level_defaults,
):
    in0_shape = [1, 1, M, K]
    in1_shape = [1, 1, K, N]
    bias_shape = [1, 1, N]
    grid_size = (8, 7)
    grid_size = (1, 1)
Not using multicore?
Reverted it back to the original code; it was a mistake.
        ttl.tensor.TensorMemoryLayout.HEIGHT_SHARDED,
        ),
    ],
)
Could you also test for batch = 2?
Done.
Force-pushed from 7738d7c to 7815e6e
Force-pushed from 951bb88 to c58898b
Force-pushed from c58898b to ff27997
Force-pushed from ff27997 to da5214e