## New in this Release
Description | Notes |
---|---|
Support for ONNXRUNTIME 1.15.0 | |
Support for several new operators: TopK, Sqrt, Sin, Pow, Mish, Log, InstanceNormalization, HSWISH, Floor, Exp, Erf, Asinh, Asin, and Abs | |
Improved support for networks with a large number of operators (>2K) | |
Support for improved latency & weight sparsity | Specific to J722S/AM67A/TDA4AEN platforms |
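For reference, two of the newly supported operators, Mish and HSWISH (hard-swish), can be expressed in plain Python from their standard definitions. This is an illustrative reference implementation only, not TIDL or ONNX Runtime code:

```python
import math

def mish(x: float) -> float:
    # Mish(x) = x * tanh(softplus(x)), where softplus(x) = ln(1 + e^x)
    return x * math.tanh(math.log1p(math.exp(x)))

def hswish(x: float) -> float:
    # HardSwish(x) = x * clip(x + 3, 0, 6) / 6
    return x * min(max(x + 3.0, 0.0), 6.0) / 6.0

print(mish(0.0))    # 0.0
print(hswish(3.0))  # 3.0 (x + 3 clips to 6, so the gate is 1)
```

Such scalar references are handy for spot-checking accelerator outputs against the expected operator semantics.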
## Fixed in this Release
ID | Description | Affected Platforms |
---|---|---|
TIDL-6871 | Softmax (with output type float) gives incorrect results when axis is set to width and width < 16 | All except AM62 |
TIDL-6865 | Elementwise layers with dimensions N1xC1xH1xW1 and N2xC2xH2xW2 give functionally incorrect output on target if H1 or H2 is 1, H1 != H2, and C1 == C2 > 1 | All except AM62 |
TIDL-6485 | Models compiled with option "advanced_options:inference_mode" = 2 and containing a Constant Data layer with H > 1 produce functionally incorrect output | All except AM62 |
TIDL-6473 | Models compiled with option "advanced_options:inference_mode" = 2 and containing a layer running in TIDL_NOT_MULTI_CORE mode followed by a Slice layer running in TIDL_MULTI_CORE mode may produce functionally incorrect output in host emulation and on target | All except AM62 |
TIDL-6461 | Using "advanced_options:inference_mode" = 2 and "debug_level" >= 3 may cause an error in the debug stitching script for some networks | All except AM62 |
TIDL-6418 | Models compiled with the "advanced_options:inference_mode" = 2 compilation option may produce functionally incorrect outputs if the model has Slice/Reshape layers | All except AM62 |
TIDL-5169 | Dataconvert layer with layout conversion from NCHW to NHWC at the output of the network returns a TIDLRT_create-time error if the number of output channels for this layer is equal to one | All except AM62 |
TIDL-5167 | Layers with multiple inputs may produce functionally incorrect output if the inputs have different padding in the buffer | All except AM62 |
TIDL-5166 | Matmul layer with the A matrix broadcast along the channel axis crashes on target/EVM | All except AM62 |
TIDL-5162 | Memory planning fails for models having batches with broadcast | All except AM62 |
TIDL-4868 | Reshape layer is incorrectly denied with the message: "Input volume should be equal to output volume" | All except AM62 |
TIDL-4855 | ONNX Runtime does not report correct copy cycles from get_TI_benchmark_data | All except AM62 |
TIDL-4833 | Networks error out with the message "tidlReadPerChannelMeanStatistics : Unable to read Per Channel Mean statistics" | All except AM62 |
TIDL-4832 | Networks with GEMM are not correctly denied and report the following error near the end: "Gemm layer is not supported in TIDL when bias size != output width" | All except AM62 |
TIDL-4714 | Networks with >1536 operators in a single graph fail to compile | All except AM62 |
TIDL-4460 | Model compilation fails for networks with Transpose layers with the following error message: "Failed to Allocate memory record 7 @ space = 17 and size = xxxxxx !!!" | |
TIDL-4367 | Networks with multiple branches where the first layer in any branch is a Reshape layer give functionally incorrect output | All except AM62 |
TIDL-3928 | Sub operator with variable input gets incorrectly offloaded to C7x, resulting in an init failure during inference | All except AM62 |
TIDL-3902 | Models compiled with the enableHighResOptimization=1 option may hang on target if any convolution layer's weight volume plus 192 × the number of input channels exceeds 224 KB (for AM62A/J722S) or 448 KB (for all other devices) | All except AM62 |
TIDL-2947 | Convolution with pad greater than the input width results in incorrect outputs | All except AM62 |
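Several of the fixed issues are triggered by specific shape patterns; for instance, the elementwise broadcast bug in TIDL-6865 can be pre-screened for with a small shape check. The helper below is illustrative only and not part of any TIDL API; it simply encodes the condition stated in the table:

```python
def hits_tidl_6865(shape_a, shape_b):
    """Return True if two NCHW elementwise input shapes match the
    pattern fixed in TIDL-6865: H1 or H2 is 1, H1 != H2, and
    C1 == C2 > 1."""
    n1, c1, h1, w1 = shape_a
    n2, c2, h2, w2 = shape_b
    return (h1 == 1 or h2 == 1) and h1 != h2 and c1 == c2 > 1

# Broadcast along height with matching multi-channel inputs: affected.
print(hits_tidl_6865((1, 8, 1, 64), (1, 8, 32, 64)))  # True
# Single-channel inputs do not match the pattern.
print(hits_tidl_6865((1, 1, 1, 64), (1, 1, 32, 64)))  # False
```

A check like this can be run over a model's elementwise layer shapes to decide whether an SDK upgrade is needed for a given network.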
## Known Issues
ID | Description | Affected Platforms | Occurrence | Workaround in this release |
---|---|---|---|---|
TIDL-7073 | Running inference on a network with option "advanced_options:inference_mode" = 2, sequentially followed by a network with "advanced_options:inference_mode" = 0 on c7x_2 or greater, results in a hang on target | All except AM62 | Rare | None |
TIDL-6866 | Using option "advanced_options:output_feature_16bit_names_list" along with "high_resolution_optimization" = 1 and "tensor_bits = 8" results in functionally incorrect output on host emulation/target | All except AM62 | Rare | None |
TIDL-6856 | 3x1 convolution with single input and output channel fails in model compilation | All except AM62 | Rare | None |
TIDL-6469 | partial_init_during_compile fails in host emulation mode | All except AM62 | Frequent | None |
TIDL-6465 | Convolution with Fr = Fc = 3 and dilation > 8 (for AM62A/J722S) or dilation > 16 (for other devices) gives incorrect output in host emulation | All except AM62 | Rare | None |
TIDL-4731 | Fusion of a batch norm layer into a convolution layer, when the batch norm precedes the convolution, can give incorrect results when the convolution input has padding | All except AM62 | Rare | None |
TIDL-3865 | Elementwise layers with broadcast along width or height or both and number of channels > 1 produces incorrect outputs on device | All except AM62 | Rare | None |
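The dilation limit in known issue TIDL-6465 depends on the device family; the condition can be captured in a small helper for screening convolution layers before compilation. This is an illustrative sketch, with the device grouping taken from the table above (any device not in the AM62A/J722S group is assumed to use the higher limit):

```python
def hits_tidl_6465(kernel_h, kernel_w, dilation, device):
    """Return True if a convolution matches known issue TIDL-6465:
    a 3x3 kernel (Fr = Fc = 3) with dilation above the device limit
    (8 for AM62A/J722S, 16 for other devices)."""
    limit = 8 if device in ("AM62A", "J722S") else 16
    return kernel_h == 3 and kernel_w == 3 and dilation > limit

print(hits_tidl_6465(3, 3, 9, "AM62A"))    # True (limit 8)
print(hits_tidl_6465(3, 3, 9, "TDA4AEN"))  # False (limit 16)
```

Since no workaround ships in this release, flagged layers would need a reduced dilation or a different kernel size to avoid the host-emulation mismatch.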