Support 8bit quantization and half-precision floating point representation
What to add:

For instance, WinMLTools enables the optimization of ONNX models by either:

- converting floating point 32 models into a floating point 16 representation (IEEE 754 half), effectively compressing the model by halving its size, or
- quantizing floating point 32 models into 8-bit integer representations, which yields a disk footprint reduction of up to 75%, depending on the model.

Currently, neither is supported by ONNC.
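For reference, this is roughly how the two conversions look with WinMLTools' documented `winmltools.utils` helpers (a minimal sketch; the file names are placeholders, and the `quantize` keyword arguments follow Microsoft's documented example):

```python
# Sketch of both WinMLTools optimizations; model paths are placeholders.
from winmltools.utils import load_model, save_model
from winmltools.utils import convert_float_to_float16, quantize

model = load_model("model.onnx")

# FP32 -> FP16 (IEEE 754 half): roughly halves the model's size.
fp16_model = convert_float_to_float16(model)
save_model(fp16_model, "model_fp16.onnx")

# FP32 -> 8-bit integers: up to ~75% smaller on disk, depending on the model.
quantized_model = quantize(model, per_channel=True, nbits=8,
                           use_dequantize_linear=True)
save_model(quantized_model, "model_quant8.onnx")
```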
Why it is necessary:
As ONNC is ideally suited to generating natively executable models for MCUs with tight memory constraints, it would be very useful if ONNC supported one or both of these optimization methods.
How to achieve it:
Support a floating point 16 representation for inputs and operators, and/or 8-bit integer representations for operators, as illustrated below.
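As a rough illustration of what ONNC would have to recognize on the consuming side (a sketch using the standard `onnx` Python package, not ONNC's API; the input path assumes a model produced as above), a model optimized this way carries FLOAT16 tensors and, for 8-bit quantization, QuantizeLinear/DequantizeLinear nodes:

```python
# Sketch: inspect an optimized ONNX model for the representations in question.
import onnx
from onnx import TensorProto

model = onnx.load("model_fp16.onnx")  # placeholder path

# FP16 storage shows up as FLOAT16-typed initializers (weights).
fp16_tensors = [t.name for t in model.graph.initializer
                if t.data_type == TensorProto.FLOAT16]

# 8-bit quantized models carry explicit (de)quantization operators.
quant_nodes = [n.op_type for n in model.graph.node
               if n.op_type in ("QuantizeLinear", "DequantizeLinear")]

print("FP16 initializers:", fp16_tensors)
print("Quantization ops :", quant_nodes)
```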