
Removed unnecessary conditions in op_add and op_mul #7135

Merged
mcremon-meta merged 2 commits into pytorch:main on Dec 3, 2024

Conversation

@ckmadhira (Contributor) commented Dec 2, 2024

Summary:
Added a scalar function call in op_add and op_mul
Updated the undefining of macros in op_quantize
Used else if() instead of if() in op_cat

Signed-off-by: [email protected] <[email protected]>

pytorch-bot bot commented Dec 2, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/7135

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 0ef057f with merge base 2d499b3:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Dec 2, 2024
@mcremon-meta (Contributor) left a comment:

Looks good with a couple linter-related comments!

@@ -37,6 +34,7 @@ Tensor& cat_out(
exec_aten::ArrayRef<Tensor> tensors,
int64_t dim,
Tensor& out) {


please remove new line to make linter happy

@@ -565,6 +565,7 @@ Tensor& quantize_per_tensor_out(
int64_t quant_max,
ScalarType dtype,
Tensor& out) {


please remove new line to make linter happy

@@ -802,6 +802,6 @@ Tensor& quantize_per_token_out(
}

} // namespace native
} // namespace FusionG3
} // namespace G3
} // namespace impl
} // namespace cadence

Nit: missing new line at end of file

@ckmadhira (Contributor, Author) replied:

> Nit: missing new line at end of file

None of the operator files have a newline at the end, and the missing final newline is not reported as a linter error.

@zonglinpeng (Contributor) commented:
lgtm, same comment as Matthias :)

@facebook-github-bot

@mcremon-meta has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

@mcremon-meta mcremon-meta merged commit 13a1a30 into pytorch:main Dec 3, 2024
5 checks passed
Labels
CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. topic: not user facing
4 participants