
Quantization failed! onnxruntime.quantization.quantize_dynamic does not seem to convert to a QInt8 .onnx file successfully #21440

Open
gwzzjarvis opened this issue Jul 22, 2024 · 2 comments
Labels
quantization (issues related to quantization), stale (issues that have not been addressed in a while; categorized by a bot)

Comments


gwzzjarvis commented Jul 22, 2024

Describe the issue

Before converting

I am trying to convert Qwen2 1.5B to an int8 ONNX file using the code below. The original FP16 model file is 3 GB. I successfully converted it to an FP32 ONNX file (the ONNX export default) and tested it, but something goes wrong with the int8 conversion:

from onnxruntime.quantization import quantize_dynamic, QuantType

quantize_dynamic(
    model_input='output_onnx/model.onnx',
    model_output='./output_int/model_int8.onnx',
    # op_types_to_quantize=["MatMul"],  # note: the ONNX op type is "MatMul", not "Matmul"
    weight_type=QuantType.QInt8,
)

model.onnx is an .onnx file with its weight files in the same directory. The total size is about 6 GB (4 bytes per parameter × 1.5 B parameters), and the expected size of the int8 ONNX file is around 1.5 GB.
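
For reference, a back-of-the-envelope size check (a minimal sketch, assuming 1.5 B parameters at 4 bytes each in FP32 and 1 byte each in int8):

# Rough expected file sizes for a ~1.5 B-parameter model.
params = 1.5e9
print(f"fp32: {params * 4 / 1e9:.1f} GB")  # ~6.0 GB, matches the exported model
print(f"int8: {params * 1 / 1e9:.1f} GB")  # ~1.5 GB expected after quantization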

After converting, the output:

root@I1b52bb69840070157e:~/output_int# ls -la
total 6293056
drwxr-xr-x 2 root root 65 Jul 22 17:05 .
drwx------ 1 root root 4096 Jul 22 17:04 ..
-rw-r--r-- 1 root root 11008 Jul 22 17:05 logits_int8.onnx
-rw-r--r-- 1 root root 6444070485 Jul 22 17:05 model_int8.onnx

Obviously the size is still the same as the FP32 model. I suspected the reason might be related to the file/parameter size, so I then passed use_external_data_format=True (sketched below) to make it save the weight files separately, but that didn't work either.
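
The retry looked roughly like this (a minimal sketch; paths match the call above, and use_external_data_format=True is the documented flag for writing large weights to separate files instead of embedding them in the .onnx file):

from onnxruntime.quantization import quantize_dynamic, QuantType

quantize_dynamic(
    model_input='output_onnx/model.onnx',
    model_output='./output_int/model_int8.onnx',
    weight_type=QuantType.QInt8,
    use_external_data_format=True,  # store weights (>2 GB total) as separate files
)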

To reproduce

Converting a decoder-only large model like Qwen2 or Llama 3 may reproduce this. As I tested, the openai/whisper model (when exporting the encoder and decoder separately) does not have this issue.

Urgency

There is a 7/24 deadline for my project; does anyone know how to solve this?

Platform

Linux

OS Version

Ubuntu 22.04

ONNX Runtime Installation

Released Package

ONNX Runtime Version or Commit ID

latest version (1.18.1)

ONNX Runtime API

Python

Architecture

X64

Execution Provider

Default CPU, CUDA

Execution Provider Library Version

CUDA 12.1, onnxruntime (latest), opset 17

github-actions bot added the ep:CUDA (issues related to the CUDA execution provider) and quantization (issues related to quantization) labels Jul 22, 2024

gwzzjarvis commented Jul 24, 2024

[Two Netron screenshots attached.]

LEFT NETRON VIEW: no quantization

RIGHT NETRON VIEW: should show a quantized MatMul op (QMatmul) but still shows plain MatMul, i.e. no quantization

These are two screenshots of my issue. In detail, the MatMul op in the right-side graph should normally differ from the left one because it should be quantized, but unfortunately it is not.
The file sizes (the preprocessed ONNX model and the int8 ONNX model) also remain the same, as shown above.
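
One way to check this without Netron (a minimal sketch, assuming dynamic quantization's usual lowering, which rewrites MatMul into MatMulInteger plus DynamicQuantizeLinear):

import onnx
from collections import Counter

# onnx.load also picks up external weight files stored next to the model.
model = onnx.load('./output_int/model_int8.onnx')
op_counts = Counter(node.op_type for node in model.graph.node)

# A successfully quantized graph should contain MatMulInteger /
# DynamicQuantizeLinear nodes; plain MatMul everywhere means no quantization.
for op in ('MatMul', 'MatMulInteger', 'DynamicQuantizeLinear'):
    print(op, op_counts.get(op, 0))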

gwzzjarvis changed the title from "onnxruntime.quantization.quantize_dynamic seems didn't convert to the qint8 .onnx file successfully" to "Quantization failed! onnxruntime.quantization.quantize_dynamic does not seem to convert to a QInt8 .onnx file successfully" Jul 24, 2024
sophies927 removed the ep:CUDA (issues related to the CUDA execution provider) label Jul 25, 2024
github-actions bot commented Aug 25, 2024

This issue has been automatically marked as stale due to inactivity and will be closed in 30 days if no further activity occurs. If further support is needed, please provide an update and/or more details.

github-actions bot added the stale (issues that have not been addressed in a while; categorized by a bot) label Aug 25, 2024