
Support 8bit quantization and half-precision floating point representation #181

Open
robinvanemden opened this issue Jun 3, 2020 · 0 comments

robinvanemden commented Jun 3, 2020

What to add:

For instance, WinMLTools enables the optimization of ONNX models by either:

  • converting floating point 32 into a floating point 16 representation (IEEE 754 half), effectively halving the model's size, or
  • converting floating point 32 into an 8-bit integer representation, which yields a disk footprint reduction of up to 75%, depending on the model.

Currently, neither is supported by ONNC.
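For reference, a minimal sketch of the WinMLTools workflow described above (the file names are placeholders, and the exact signatures may differ between WinMLTools releases):

```python
# Sketch: post-training optimizations on an ONNX model with WinMLTools.
# 'model.onnx' is a placeholder path; calls follow the WinMLTools docs
# but may vary by release.
from winmltools.utils import load_model, save_model, convert_float_to_float16
import winmltools

# 1) float32 -> float16 (IEEE 754 half): roughly halves the stored weights.
fp16_model = convert_float_to_float16(load_model('model.onnx'))
save_model(fp16_model, 'model_fp16.onnx')

# 2) float32 -> 8-bit integers: up to ~75% smaller on disk, model-dependent.
int8_model = winmltools.quantize(load_model('model.onnx'),
                                 per_channel=True,
                                 nbits=8,
                                 use_dequantize_linear=True)
save_model(int8_model, 'model_int8.onnx')
```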

Why it is necessary:

As ONNC is well suited to generating natively executable models for MCUs with tight memory constraints, it would be very useful if ONNC supported one or both of these optimization methods.

How to achieve it:

Support a floating point 16 representation for inputs and operators, and/or an 8-bit integer representation for operators.
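To make the intent concrete, here is a small self-contained sketch (plain NumPy, not ONNC code) of the two transformations involved: storing tensors as IEEE 754 half floats, and affine 8-bit quantization with a scale and zero point:

```python
import numpy as np

def quantize_uint8(x):
    """Affine quantization: map an fp32 tensor onto uint8 via a scale/zero point."""
    lo, hi = min(float(x.min()), 0.0), max(float(x.max()), 0.0)  # keep 0 representable
    scale = (hi - lo) / 255.0 if hi > lo else 1.0
    zero_point = int(round(-lo / scale))
    q = np.clip(np.round(x / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize_uint8(q, scale, zero_point):
    """Recover an fp32 approximation of the original tensor."""
    return (q.astype(np.float32) - zero_point) * scale

weights = np.random.randn(64, 64).astype(np.float32)

# float32 -> float16: 2 bytes per weight instead of 4 (half the footprint).
half = weights.astype(np.float16)

# float32 -> uint8: 1 byte per weight plus a scale/zero point (~75% smaller).
q, scale, zp = quantize_uint8(weights)

print(half.nbytes / weights.nbytes)                            # 0.5
print(q.nbytes / weights.nbytes)                               # 0.25
print(np.abs(dequantize_uint8(q, scale, zp) - weights).max())  # small round-off error
```

In ONNC terms, this would mean extending tensor storage and the affected operator lowerings to carry the narrower element type plus the scale/zero-point metadata.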
