[Bug] W16A16 quantization, qdq_error is empty, but W8A8 is normal #21089
Labels:
- quantization: issues related to quantization
- stale: issues that have not been addressed in a while; categorized by a bot
Describe Question:
When I use W16A16 quantization on the main branch, I can't get qdq_error and xmodel_err; both turn out to be empty, while W8A8 works normally. I eventually found that model.graph.value_info is the same as in the input model after the function "load_model_with_shape_infer", so I don't know how to fix it. I am debugging with "mobilenetv2-7.onnx".
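For reference, this is a minimal sketch of the check I would use to see whether shape inference actually populates value_info for the intermediate tensors (it is my own debugging snippet, not part of the quantizer; the model path is an assumption):

```python
import onnx
from onnx import shape_inference

model_path = "mobilenetv2-7.onnx"  # assumed local path

original = onnx.load(model_path)
inferred = shape_inference.infer_shapes(original)

print("value_info entries before shape inference:", len(original.graph.value_info))
print("value_info entries after shape inference:", len(inferred.graph.value_info))

# If the two counts are equal, the intermediate tensor shapes were never added,
# which could explain why qdq_error / xmodel_err end up empty.
```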