[torchlib] Fix implementation for clamp_max / clamp_min #1765
Conversation
Codecov Report
All modified and coverable lines are covered by tests ✅

@@            Coverage Diff             @@
##             main    #1765      +/-   ##
==========================================
+ Coverage   75.01%   75.06%   +0.05%
==========================================
  Files         245      245
  Lines       26451    26443       -8
  Branches     4826     4824       -2
==========================================
+ Hits        19841    19850       +9
+ Misses       5677     5662      -15
+ Partials      933      931       -2

☔ View full report in Codecov by Sentry.
Do we not need to support size 0 input? I feel like it's in the test for a reason?
There are tests because the PyTorch operator supports that, but in practice I don't see it being meaningful, because it's just doing an expand. We can come back to it when it becomes an issue.
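For context, a quick PyTorch check (an illustrative snippet, not from the PR itself) of the point above: with a size-0 input there are no elements to clamp, so the result is determined entirely by shape broadcasting.

```python
import torch

x = torch.empty(0, 3)          # size-0 input: zero rows, no elements
out = torch.clamp(x, min=0.0)  # nothing to clamp; only the shape matters
print(out.shape)               # torch.Size([0, 3])
```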
Test Results
24 files ±0   24 suites ±0   3h 21m 28s ⏱️ +2m 9s

For more details on these failures, see this check.
Results for commit dd574ae. ± Comparison against base commit 19f1126.

This pull request removes 590 tests and adds 588 tests. Note that renamed tests count towards both.
This pull request removes 134 skipped tests and adds 133 skipped tests. Note that renamed tests count towards both.
This pull request skips 8 tests.
Update clamp_max and clamp_min: remove support for size-0 inputs to simplify the implementations, and fix the registration so the operators are discoverable.
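Based on the description, a minimal sketch of what the simplified functions might look like, assuming torchlib's @torch_op decorator, the TReal type alias, and ONNX opset 18 (import paths and names are approximations of the torchlib conventions, not the exact PR diff). Since ONNX Min and Max broadcast their inputs, clamp_max reduces to a single Min node and clamp_min to a single Max node once the size-0 special case is dropped.

```python
# Sketch only: approximate torchlib-style definitions, not the PR's exact code.
from onnxscript.function_libs.torch_lib.registration import torch_op
from onnxscript.function_libs.torch_lib.tensor_typing import TReal
from onnxscript.onnx_opset import opset18 as op


@torch_op("aten::clamp_max")
def aten_clamp_max(self: TReal, max_: TReal) -> TReal:
    """clamp_max(Tensor self, Tensor max) -> Tensor"""
    # No size-0 branch: ONNX Min broadcasts `max_` against `self` directly.
    return op.Min(self, max_)


@torch_op("aten::clamp_min")
def aten_clamp_min(self: TReal, min_: TReal) -> TReal:
    """clamp_min(Tensor self, Tensor min) -> Tensor"""
    # Likewise, clamp_min is an elementwise Max with numpy-style broadcasting.
    return op.Max(self, min_)
```

Registering each function under its plain aten name (e.g. "aten::clamp_max") is what makes the operator discoverable by the dispatcher; broadcasting in Min/Max then covers the cases the removed size-0 branch used to handle via an explicit expand.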