[torchlib] Implement quantize/dequantize operators #1732
Conversation
Codecov Report

Attention: Patch coverage is

@@            Coverage Diff             @@
##             main    #1732      +/-  ##
==========================================
- Coverage   74.89%   74.63%   -0.26%
==========================================
  Files         244      245       +1
  Lines       26353    26360       +7
  Branches     4791     4791
==========================================
- Hits        19738    19675      -63
- Misses       5694     5755      +61
- Partials      921      930       +9
Test Results

24 files ±0   24 suites ±0   1h 56m 56s ⏱️ -4m 39s

For more details on these failures, see this check.

Results for commit f600925. ± Comparison against base commit f8ee736.

♻️ This comment has been updated with latest results.
Initial implementations for the quantization operators defined in https://github.com/pytorch/pytorch/blob/main/torch/ao/quantization/fx/_decomposed.py. Related: pytorch/pytorch#106748

I created a new module, quantized_decomposed.py, to host all ops defined under the quantized_decomposed namespace seen in pytorch/pytorch#106748, and added functions for the most common linear quantize/dequantize operators. I also changed FunctionType -> Callable in the decorators so they play well with type checkers.
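For context, the quantized_decomposed per-tensor ops are plain affine quantization: scale, shift by the zero point, and clamp to [quant_min, quant_max]. The sketch below is a minimal NumPy reference of those semantics, not the torchlib code added in this PR; the function names and the uint8 default are assumptions for illustration only.

```python
# Illustrative NumPy reference of the quantized_decomposed per-tensor
# quantize/dequantize semantics (see torch/ao/quantization/fx/_decomposed.py).
# Not the code in this PR; names and the uint8 default are assumptions.
import numpy as np


def quantize_per_tensor(x, scale, zero_point, quant_min, quant_max, dtype=np.uint8):
    """Affine-quantize a float array: q = clamp(round(x / scale) + zp, qmin, qmax)."""
    q = np.round(x / scale) + zero_point
    return np.clip(q, quant_min, quant_max).astype(dtype)


def dequantize_per_tensor(q, scale, zero_point):
    """Map integer values back to float: x ~= (q - zp) * scale."""
    return (q.astype(np.float32) - zero_point) * scale


x = np.array([-1.5, 0.0, 0.42, 2.0], dtype=np.float32)
q = quantize_per_tensor(x, scale=0.02, zero_point=128, quant_min=0, quant_max=255)
print(q)                                    # [ 53 128 149 228]
print(dequantize_per_tensor(q, 0.02, 128))  # recovers x up to rounding error
```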
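The FunctionType -> Callable change is a typing detail: types.FunctionType only matches plain Python functions, so a decorator parameter annotated with it is rejected by type checkers for builtins, functools.partial objects, and other callables. A minimal, hedged illustration of the pattern follows; the register decorator here is hypothetical, not torchlib's actual registration decorator.

```python
# Hypothetical registration decorator, illustrating why Callable is the
# friendlier annotation for type checkers than types.FunctionType.
from typing import Callable, Dict, TypeVar

_T = TypeVar("_T", bound=Callable)

_REGISTRY: Dict[str, Callable] = {}


def register(name: str) -> Callable[[_T], _T]:
    """Register a callable under `name` and return it unchanged."""

    def wrapper(func: _T) -> _T:
        # Annotating `func` as FunctionType here would make type checkers
        # reject lambdas wrapped in partial, builtins, and other callables.
        _REGISTRY[name] = func
        return func

    return wrapper


@register("quantized_decomposed::quantize_per_tensor")
def my_op(x):
    return x
```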