[TIDL-4273] Correct documentation errors
Varun committed Jul 3, 2024
1 parent 258b705 commit 2b51114
Showing 2 changed files with 3 additions and 2 deletions.
docs/supported_ops_rts_versions.md (2 changes: 1 addition & 1 deletion)
@@ -11,7 +11,7 @@
| 3 | TIDL_PoolingLayer | MaxPool<br>AveragePool<br>GlobalAveragePool | MAX_POOL_2D<br>AVERAGE_POOL_2D<br>MEAN | <ul><li>Pooling has been validated for the following kernel sizes: 3x3, 2x2, 1x1 with stride 1 and stride 2 (both horizontal and vertical dimensions)</li><li>Max pooling supports 1x1 filters with asymmetric stride</li><li>Max pooling additionally supports 1x2, 1x3 filters with a stride of 2 (along the horizontal direction) & 2x1, 3x1 filters with a stride of 2 (along the vertical direction)</li></ul> |
| 4 | TIDL_EltWiseLayer | Add<br>Mul | ADD<br>MUL | <ul><li>Support for 2 input tensors validated extensively, multiple input tensors have limited validation</li><li>Supports broadcasting of dimensions above width</li></ul> |
| 5 | TIDL_InnerProductLayer | Gemm, MatMul | FULLY_CONNECTED | <ul><li> Broadcast is only supported in channel dimension </li><li>For TDA4VM variable input case, doesn’t support unsigned input </li><li>Higher dimensional matmuls can be realized by reshaping the dimensions higher than 3rd dimension into the 3rd dimension</li></ul>|
- | 6 | TIDL_SoftMaxLayer | Softmax | SOFTMAX | <ul><li>Supports 8-bit inputs with 8-bit outputs with axis support for width (axis=-1) for any NxCxHxW tensor</li><li>Supports integer (8/16-bit) to float softmax only for flattened inputs</li></ul> |
+ | 6 | TIDL_SoftMaxLayer | Softmax | SOFTMAX | <ul><li>Supports 8-bit(/16-bit) inputs with 8-bit(/16-bit) outputs (both input and output are of the same bit-depth) with axis support for width (axis=-1) for any NxCxHxW tensor</li><li>Supports integer (8/16-bit) to float softmax only for flattened inputs</li></ul> |
| 7 | TIDL_Deconv2DLayer | ConvTranspose | TRANSPOSE_CONV | <ul><li>Only 8x8, 4x4 and 2x2 kernels with 2x2 stride are supported. It is recommended to use Resize/Upsample to get better performance. This layer is not supported in 16-bit for AM62A/AM67A</li></ul>|
| 8 | TIDL_ConcatLayer | Concat | CONCATENATION | <ul><li>Concat is supported on channel, height or width axis</li></ul>|
| 9 | TIDL_SliceLayer | Split<br>Slice | NA | <ul><li>Slice is supported on all axes except for the batch axis & only one axis can be sliced per operator</li><li>[Patch merging](./tidl_fsg_vtfr.md#patch-merging) expressed with strided slice will be transformed into a transpose layer</li></ul>|
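
The Softmax constraint changed in this hunk is easy to trip over when exporting a model, so here is a minimal sketch of a conforming graph: a 4-D NxCxHxW tensor with Softmax over the width axis (axis=-1). It assumes only the standard `onnx` Python package; the shapes are illustrative. Whether the layer runs in 8-bit or 16-bit is decided by TIDL quantization settings at compile time, not by anything in the graph itself.

```python
# Minimal sketch: a Softmax node matching row 6 above
# (4-D NxCxHxW input, softmax over the width axis, axis=-1).
import onnx
from onnx import TensorProto, helper

inp = helper.make_tensor_value_info("x", TensorProto.FLOAT, [1, 3, 8, 16])  # NxCxHxW
out = helper.make_tensor_value_info("y", TensorProto.FLOAT, [1, 3, 8, 16])
node = helper.make_node("Softmax", ["x"], ["y"], axis=-1)  # width axis, per the table
graph = helper.make_graph([node], "softmax_width", [inp], [out])
model = helper.make_model(graph)
onnx.checker.check_model(model)  # verifies the graph is well-formed ONNX
```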
examples/osrt_python/README.md (3 changes: 2 additions & 1 deletion)
@@ -193,7 +193,8 @@ Please refer [Quantization](../../docs/tidl_fsg_quantization.md) for more detail
| Name | Description | Supported values/range | Default values | Option Type | Additional details
|:-------------------|:--------------------------------------------------------|:--------------|:------------|:--------------|:------------------------|
| advanced_options:quantization_scale_type | This option specifies type of quantization style to be used for model quantization | 0 - non-power-of-2,<br> 1 - power-of-2 <br> 3 - TF-Lite pre-quantized model <br> 4 - Asymmetric, Per-channel Quantization | 0 | Model compilation | Refer [Quantization](../../docs/tidl_fsg_quantization.md) for more details |
- | advanced_options:pre-quantized_model | This option enables reading of scales and zero points from an ONNX QDQ model and bypasses the need for calibration | 0 - disable, <br> 1 enable | 0 | Model compilation | This impacts only ONNX models, for TF-Lite models quantization_scale_type=3 has the same effect |
+ | advanced_options:quant_params_proto_path | This option allows you to configure quantization scales manually by specifying the min/max values of outputs | String | "" | Model compilation | Refer to [Quantization Parameters](../../docs/tidl_quantParams.md) for further details |
+ | advanced_options:prequantized_model | This option enables reading of scales and zero points from an ONNX QDQ model and bypasses the need for calibration | 0 - disable, <br> 1 - enable | 0 | Model compilation | This impacts only ONNX models; for TF-Lite models, quantization_scale_type=3 has the same effect |
| advanced_options:high_resolution_optimization | This option enables performance optimization for high resolution models | 0 - disable, <br> 1 - enable | 0 | Model compilation | |
| advanced_options:add_data_convert_ops | This option embeds input and output format conversions (layout, data type, etc.) as part of the model and performs them on the DSP instead of ARM | 0 - disable, <br> 1 - Input format conversion <br> 2 - Output format conversion <br> 3 - Input and output format conversion | 0 | Model compilation | This is currently an experimental feature |
| advanced_options:network_name | This option allows the user to set the network name (used for the name of the subgraph being delegated to C7x/MMA). If your model contains a network name, it will get used by default | String | "Subgraph" | Model compilation | |
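
For context on how the options in this table are consumed, below is a minimal sketch of a compilation session following the pattern used by the ONNX Runtime examples in this directory. The `advanced_options:*` keys come from the table above; treat the provider names, `tidl_tools_path`, and `artifacts_folder` as assumptions based on this repo's examples, and the model path and folder values as placeholders to adjust for your setup.

```python
# Minimal sketch: passing the advanced_options from the table above to the
# TIDL compilation provider via ONNX Runtime. Paths and model are placeholders.
import os
import onnxruntime as rt

compile_options = {
    "tidl_tools_path": os.environ.get("TIDL_TOOLS_PATH", ""),  # assumed env var
    "artifacts_folder": "./model-artifacts",
    # Options from the table above:
    "advanced_options:quantization_scale_type": 4,  # Asymmetric, per-channel
    "advanced_options:prequantized_model": 1,       # read scales/zero points from a QDQ model
    "advanced_options:add_data_convert_ops": 3,     # input and output format conversion on DSP
    "advanced_options:network_name": "my_subgraph", # hypothetical name
}

so = rt.SessionOptions()
so.graph_optimization_level = rt.GraphOptimizationLevel.ORT_DISABLE_ALL
sess = rt.InferenceSession(
    "model.onnx",  # placeholder model path
    providers=["TIDLCompilationProvider", "CPUExecutionProvider"],
    provider_options=[compile_options, {}],
    sess_options=so,
)
```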
