Compilation Errors: When compiling mobilenet_v2_lite_wt-v2_qat-v2-wc8-at8_20231120_model.onnx, the following errors were reported:
Input tensor name - x
Output tensor name - 1631
Graph Domain TO version : 18
In TIDL_onnxRtImportInit subgraph_name=1631
Layer 0, subgraph id 1631, name=1631
Layer 1, subgraph id 1631, name=x
In TIDL_runtimesOptimizeNet: LayerIndex = 315, dataIndex = 314
Unable to merge Dequantize upwards - DQ without initializer?  [repeated 10 times]
Error: Layer 14, /0_4/Conv:/0_4/Conv_output_0 is missing inputs in the network and cannot be topologically sorted
Input 0: /0_4/DequantizeLinear_output_0, dataId=229
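The warnings above point at DequantizeLinear nodes whose first input is an activation tensor rather than a stored weight, which TIDL apparently cannot fold into the preceding layer. A minimal sketch to locate such nodes, assuming the standard onnx Python package (the file name matches the model above):

```python
import onnx

# Load the QDQ model and collect the names of all stored initializers.
model = onnx.load("mobilenet_v2_lite_wt-v2_qat-v2-wc8-at8_20231120_model.onnx")
initializer_names = {init.name for init in model.graph.initializer}

# Report every DequantizeLinear node whose data input is not an initializer,
# i.e. the pattern the "DQ without initializer?" warning appears to refer to.
for node in model.graph.node:
    if node.op_type == "DequantizeLinear" and node.input[0] not in initializer_names:
        print(node.name, "<-", node.input[0])
```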
After simplifying the ONNX model, compilation succeeds, but the precision is not aligned.
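A minimal sketch of the simplification step, assuming the onnx-simplifier (onnxsim) package was used; the output file name is illustrative:

```python
import onnx
from onnxsim import simplify

model = onnx.load("mobilenet_v2_lite_wt-v2_qat-v2-wc8-at8_20231120_model.onnx")

# Constant-fold and clean up the graph, then verify the simplified model
# still matches the original numerically on random inputs.
model_simp, check = simplify(model)
assert check, "onnxsim validation of the simplified model failed"

onnx.save(model_simp, "mobilenet_v2_lite_qat_simplified.onnx")  # illustrative name
```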
Inference image: airshow.jpg
Inference QDQ ONNX: 0 17.885122 warplane, military plane
Inference bin: 0 18.246786 warplane, military plane
The QDQ ONNX model and the compiled binary produce different top-1 scores (17.885122 vs. 18.246786), so their outputs are not aligned.
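For reference, a sketch of how the float-side score can be reproduced, assuming the QDQ model is run on the onnxruntime CPU execution provider; the preprocessed-input file is hypothetical, and the preprocessing must match the TIDL pipeline exactly:

```python
import numpy as np
import onnxruntime as rt

# Run the QDQ model on the CPU EP as the reference.
sess = rt.InferenceSession("mobilenet_v2_lite_qat_simplified.onnx",
                           providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name

x = np.load("airshow_preprocessed.npy")  # hypothetical preprocessed airshow.jpg tensor
scores = sess.run(None, {input_name: x})[0].squeeze()

top1 = int(np.argmax(scores))
print(top1, scores[top1])  # compare against the TIDL bin score (18.246786)
```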
Request: Please investigate the cause of the dequantization issues during compilation and help resolve the accuracy misalignment after simplification.
We encountered abnormal accuracy while compiling a QDQ model on SDK 9.2. Below is the configuration used:
Parameter Configuration:
tidl_tools_path: /SDK92/edgeai-tidl-tools/tidl_tools
artifacts_folder: /SDK92/edgeai-tidl-tools/model-artifacts/mobilenet_v2_lite_wt-v2_qat-v2-wc8-at8_20231120_model/
platform: J7
version: 7.2
tensor_bits: 8
debug_level: 2
max_num_subgraphs: 16
deny_list: ''
deny_list:layer_type: ''
deny_list:layer_name: ''
model_type: ''
accuracy_level: 0
advanced_options:calibration_frames: 1
advanced_options:calibration_iterations: 1
advanced_options:output_feature_16bit_names_list: ''
advanced_options:params_16bit_names_list: ''
advanced_options:mixed_precision_factor: -1
advanced_options:quantization_scale_type: 4
advanced_options:high_resolution_optimization: 0
advanced_options:pre_batchnorm_fold: 1
ti_internal_nc_flag: 1601
advanced_options:activation_clipping: 1
advanced_options:weight_clipping: 1
advanced_options:bias_calibration: 1
advanced_options:add_data_convert_ops: 3
advanced_options:channel_wise_quantization: 0
advanced_options:inference_mode: 0
advanced_options:num_cores: 1
advanced_options:prequantized_model: 1
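For context, a sketch of how such a configuration is passed to the compiler, following the pattern used in the edgeai-tidl-tools onnxruntime examples; only a subset of the options above is shown, and the dictionary keys mirror the listing:

```python
import onnxruntime as rt

delegate_options = {
    "tidl_tools_path": "/SDK92/edgeai-tidl-tools/tidl_tools",
    "artifacts_folder": "/SDK92/edgeai-tidl-tools/model-artifacts/"
                        "mobilenet_v2_lite_wt-v2_qat-v2-wc8-at8_20231120_model/",
    "tensor_bits": 8,
    "accuracy_level": 0,
    "advanced_options:calibration_frames": 1,
    "advanced_options:calibration_iterations": 1,
    "advanced_options:quantization_scale_type": 4,  # QDQ / prequantized flow
    "advanced_options:prequantized_model": 1,
}

# TIDLCompilationProvider compiles the supported subgraphs; unsupported
# nodes fall back to the CPU execution provider.
so = rt.SessionOptions()
sess = rt.InferenceSession(
    "mobilenet_v2_lite_wt-v2_qat-v2-wc8-at8_20231120_model.onnx",
    providers=["TIDLCompilationProvider", "CPUExecutionProvider"],
    provider_options=[delegate_options, {}],
    sess_options=so,
)
```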
Problems Encountered:
Input tensor name - x
Output tensor name - 1631
Graph Domain TO version : 18
In TIDL_onnxRtImportInit subgraph_name=1631
Layer 0, subgraph id 1631, name=1631
Layer 1, subgraph id 1631, name=x
In TIDL_runtimesOptimizeNet: LayerIndex = 315, dataIndex = 314
Unable to merge Dequantize upwards - DQ without initializer?  [repeated 10 times]
Error: Layer 14, /0_4/Conv:/0_4/Conv_output_0 is missing inputs in the network and cannot be topologically sorted
Input 0: /0_4/DequantizeLinear_output_0, dataId=229
Inference image: airshow.jpg
Inference QDQ ONNX: 0 17.885122 warplane, military plane
Inference bin: 0 18.246786 warplane, military plane
The QDQ ONNX model and the compiled binary produce different top-1 scores (17.885122 vs. 18.246786), so their outputs are not aligned.
Request: Please investigate the cause of the dequantization issues during compilation and help resolve the accuracy misalignment after simplification.