
add transform part of the dq matmul tool chain #21374

Merged
merged 6 commits into from
Jul 20, 2024

Conversation

fajin-corp
Contributor

@fajin-corp fajin-corp commented Jul 16, 2024

Description

This is a partial change from fajin/qdqmatmulnbitstoolchain. The original PR is blocked by Web CI failures.

MatMulNBits is a heavily optimized matmul operation. Currently, a MatMul can be converted to MatMulNBits to speed up model inference. However, MatMulNBits is an ORT-only op. To keep the graph compatible with standard ONNX ops while still utilizing MatMulNBits, we introduce Q/DQ support for MatMulNBits.

To convert MatMul ops in a model to MatMulNBits:

  1. Use matmul_4bits_quantizer.py in QDQ mode to convert each MatMul to DQ + MatMul.
  2. In the ORT session, DQ + MatMul is fused into MatMulNBits.
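As a rough illustration of what the fused MatMulNBits must reproduce, here is a minimal numpy sketch of step 1's output: block-wise 4-bit dequantization of the B weight followed by a plain MatMul. The shapes, block size, and values are made up for illustration; this is not the tool-chain code.

```python
import numpy as np

block_size = 4
K, N = 8, 2
rng = np.random.default_rng(0)

# Quantized B weight: int4 values stored in int8, one scale per block per column.
b_q = rng.integers(-8, 8, size=(K, N), dtype=np.int8)
scales = rng.random((K // block_size, N)).astype(np.float32)

# DequantizeLinear with the default zero point 0: scale each block of B,
# then run an ordinary float MatMul.
b_dq = b_q.astype(np.float32) * np.repeat(scales, block_size, axis=0)

a = rng.random((3, K)).astype(np.float32)
y = a @ b_dq  # the result the fused MatMulNBits must match
```

The fusion replaces this DQ + MatMul pair with a single MatMulNBits node that consumes the packed 4-bit weight and scales directly.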

Note

MatMulNBits assumes the B weight is uint4. When no zero point (zp) is provided, MatMulNBits defaults zp to 8, whereas DQ defaults zp to 0 when none is provided, and DQ also supports int4. Therefore, some conversions are introduced during the DQ + MatMul --> MatMulNBits fusion.
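The zero-point remapping can be sanity-checked with simple arithmetic: shifting an int4 quantized value and its zero point by 8 (the uint4 / MatMulNBits default) leaves the dequantized value unchanged, since (q - zp) * scale is invariant under the shift. A minimal sketch, not the actual transform code:

```python
# Remap int4 DQ weights (default zp 0) to uint4 MatMulNBits weights (default zp 8).
scale = 0.5  # arbitrary example scale
for q_int4 in range(-8, 8):          # full int4 range [-8, 7]
    zp_int4 = 0                      # DQ default when no zp is given
    q_uint4 = q_int4 + 8             # remap to uint4 range [0, 15]
    zp_uint4 = zp_int4 + 8           # 8, the MatMulNBits default
    # Dequantized value is identical before and after the shift.
    assert (q_int4 - zp_int4) * scale == (q_uint4 - zp_uint4) * scale
```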

Perf

Using the QDQ format will increase model initialization time and memory consumption. With the current implementation, model init time increased from ~4s to ~9s, and memory consumption increased from ~2.8GB to ~4.8GB.
The memory increase is due to:

  1. In the optimizer, after transposing the B weight, an in-memory tensor proto is created using protobuf's arena.
  2. In the finalize step, when saving initializers and prepacking, the ORT arena is used to create buffers for the initializers.

The memory allocated by arenas cannot be fully deallocated.
If ORT arena memory allocation is disabled, the memory consumption of both the QDQ format and the original format is ~2.2GB.
The time increase is mainly due to multiple memory copies, which can be further optimized.

Motivation and Context

Please see description for details.

@fajin-corp fajin-corp force-pushed the fajin/dqmatmultoolchaintransform branch 3 times, most recently from 070a13b to 542bd42 Compare July 17, 2024 20:57
@fajin-corp fajin-corp force-pushed the fajin/dqmatmultoolchaintransform branch from 542bd42 to e8ce6b9 Compare July 17, 2024 21:27
@fajin-corp
Contributor Author

/azp run Linux OpenVINO CI Pipeline


Azure Pipelines successfully started running 1 pipeline(s).

@fajin-corp fajin-corp merged commit 11bf309 into main Jul 20, 2024
99 checks passed
@fajin-corp fajin-corp deleted the fajin/dqmatmultoolchaintransform branch July 20, 2024 05:55