Update quantization.md for QDQ and dynamic quantization old doc (#20130)
- The doc suggested that QDQ models are created with dynamic quantization, which is no longer the case.
- Updates and restructures the doc.

### Description

- The QDQ model format is not produced by dynamic quantization, but the doc suggested that it was.
- This support may have existed a couple of years ago, but it does not anymore.


### Motivation and Context

- Clears up the misunderstanding that QDQ ONNX models can be created with dynamic quantization; see the sketch after this section.

#20125
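
For context on the fix, a minimal sketch of dynamic quantization with ONNX Runtime's Python API. In current releases `quantize_dynamic` exposes no `quant_format` argument, so it emits an operator-oriented (QOperator) model rather than a QDQ one; the file names below are illustrative, not part of the commit.

```python
# Sketch: dynamic quantization with ONNX Runtime (file names are illustrative).
from onnxruntime.quantization import QuantType, quantize_dynamic

# quantize_dynamic takes no quant_format parameter: the output model uses the
# operator-oriented (QOperator) representation, not QDQ.
quantize_dynamic(
    "model.onnx",        # float32 input model
    "model_quant.onnx",  # quantized output model
    weight_type=QuantType.QInt8,
)
```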
manickavela29 authored Apr 5, 2024
1 parent 4859156 commit 435e9a3
1 changed file with 10 additions and 5 deletions: docs/performance/model-optimizations/quantization.md
@@ -38,11 +38,16 @@ Quantization in ONNX Runtime refers to 8 bit linear quantization of an ONNX model

## ONNX quantization representation format
There are two ways to represent quantized ONNX models:
-- Operator-oriented (QOperator). All the quantized operators have their own ONNX definitions, like QLinearConv, MatMulInteger and etc.
-- Tensor-oriented (QDQ; Quantize and DeQuantize). This format inserts DeQuantizeLinear(QuantizeLinear(tensor)) between the original operators to simulate the quantization and dequantization process. In Static Quantization, the QuantizeLinear and DeQuantizeLinear operators also carry the quantization parameters. In Dynamic Quantization, a ComputeQuantizationParameters function proto is inserted to calculate quantization parameters on the fly. Models generated in the following ways are in the QDQ format:
-  - Models quantized by quantize_static or quantize_dynamic API, explained below, with `quant_format=QuantFormat.QDQ`.
-  - Quantization-Aware training (QAT) models converted from Tensorflow or exported from PyTorch.
-  - Quantized models converted from TFLite and other frameworks.
+- Operator-oriented (QOperator) :
+  All the quantized operators have their own ONNX definitions, like QLinearConv, MatMulInteger and etc.
+- Tensor-oriented (QDQ; Quantize and DeQuantize) :
+  This format inserts DeQuantizeLinear(QuantizeLinear(tensor)) between the original operators to simulate the quantization and dequantization process.
+  In Static Quantization, the QuantizeLinear and DeQuantizeLinear operators also carry the quantization parameters.
+  In Dynamic Quantization, a ComputeQuantizationParameters function proto is inserted to calculate quantization parameters on the fly.
+- Models generated in the following ways are in the QDQ format:
+  1. Models quantized by quantize_static, explained below, with `quant_format=QuantFormat.QDQ`.
+  2. Quantization-Aware training (QAT) models converted from Tensorflow or exported from PyTorch.
+  3. Quantized models converted from TFLite and other frameworks.

For the latter two cases, you don't need to quantize the model with the quantization tool. ONNX Runtime can run them directly as a quantized model.
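
To make the `quantize_static` path in the added lines concrete, here is a minimal sketch using ONNX Runtime's Python quantization API. The file paths, input name, shape, and the random calibration reader are illustrative assumptions, not part of the commit; a real calibration reader should feed representative data.

```python
# Sketch: producing a QDQ-format model with quantize_static (illustrative names).
import numpy as np
from onnxruntime.quantization import (
    CalibrationDataReader,
    QuantFormat,
    QuantType,
    quantize_static,
)

class RandomCalibrationReader(CalibrationDataReader):
    """Feeds a few calibration batches (random here for brevity)."""
    def __init__(self, num_batches=8):
        # Assumed input name and shape; match them to your model.
        self._batches = iter(
            {"input": np.random.rand(1, 3, 224, 224).astype(np.float32)}
            for _ in range(num_batches)
        )

    def get_next(self):
        # Returning None signals the end of the calibration data.
        return next(self._batches, None)

quantize_static(
    "model.onnx",                  # float32 input model (illustrative path)
    "model_qdq.onnx",              # quantized output model
    RandomCalibrationReader(),
    quant_format=QuantFormat.QDQ,  # tensor-oriented QDQ representation
    activation_type=QuantType.QUInt8,
    weight_type=QuantType.QInt8,
)
```

Unlike the dynamic sketch earlier, the static path requires calibration data, which is what lets the tool bake the quantization parameters into the QuantizeLinear/DeQuantizeLinear nodes of the QDQ graph.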

