
[Training] Whether to support weight per_channel QAT #19241

Open
hbwx24 opened this issue Jan 23, 2024 · 4 comments
Labels
ep:CUDA (issues related to the CUDA execution provider), quantization (issues related to quantization), stale (issues that have not been addressed in a while; categorized by a bot), training (issues related to ONNX Runtime training; typically submitted using template)

Comments

hbwx24 commented Jan 23, 2024

Describe the issue

The model weights are quantized per channel:

  • weight_scale.shape = [64]
  • zero_point.shape = [64]

When using onnxruntime-training to do QAT, the following error is reported. Does onnxruntime-training support per-channel QAT?
onnxruntime                1.16.3
onnxruntime-extensions     0.9.0
onnxruntime-gpu            1.16.3
onnxruntime-training       1.16.3


onnxruntime/orttraining/orttraining/training_api/module.cc:538 onnxruntime::common::Status onnxruntime::training::api::Module::TrainStep(const std::vector<OrtValue>&, std::vector<OrtValue>&) [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running FakeQuant node. Name:'FakeQuant_token_260' Status Message: /home/xin.wei/workdir/quant/onnx2torch/onnxruntime-source/orttraining/orttraining/training_ops/cpu/quantization/fake_quant.cc:68 onnxruntime::common::Status onnxruntime::contrib::FakeQuant<T>::Compute(onnxruntime::OpKernelContext*) const [with T = float] IsScalarOr1ElementVector(scale) was false. Quantization scale must be a scalar or 1D tensor of size 1.
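For context, here is a minimal numpy sketch of the difference between the two modes (the weight and scale shapes match this report; the sample values and the max-abs calibration are illustrative only):

```python
import numpy as np

# Per-tensor fake quantization: what the FakeQuant contrib op currently
# accepts — scale must be a scalar or a 1-element tensor (zero_point = 0 here).
x = np.random.randn(64, 3, 3, 3).astype(np.float32)
scale = np.float32(0.02)
q = np.clip(np.round(x / scale), -128, 127) * scale

# Per-channel fake quantization: what this issue asks for — one scale per
# output channel, broadcast along axis 0, i.e. scale_c.shape == (64,).
scale_c = np.abs(x).max(axis=(1, 2, 3)) / 127.0          # shape (64,)
q_c = np.clip(np.round(x / scale_c[:, None, None, None]), -128, 127) \
    * scale_c[:, None, None, None]
```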

To reproduce

See the description and error message above; the setup and stack trace are identical.

Urgency

No response

ONNX Runtime Installation

Built from Source

ONNX Runtime Version or Commit ID

1.16.3

PyTorch Version

1.10

Execution Provider

Default CPU, CUDA

Execution Provider Library Version

No response

@hbwx24 added the training (issues related to ONNX Runtime training; typically submitted using template) label Jan 23, 2024
@github-actions bot added the ep:CUDA (issues related to the CUDA execution provider) and quantization (issues related to quantization) labels Jan 23, 2024
baijumeswani (Contributor) commented
@hbwx24 Thanks for trying out QAT. QAT with ONNX Runtime is in an experimental stage at this time.

Looking through my own TODOs in the repository, it seems that per-channel QAT is not supported yet.

I don't know if I can commit to having this feature completed soon, but I will try to address it before the next ONNX Runtime release (1.18).

hbwx24 (Author) commented Jan 24, 2024


Thank you very much.

@hbwx24 closed this as not planned Jan 24, 2024
@hbwx24 reopened this Jan 24, 2024
@hbwx24 closed this as completed Jan 24, 2024
@hbwx24 reopened this Jan 24, 2024
@hbwx24 changed the title from [Training] to [Training] Whether to support per_channel QAT Jan 24, 2024
@hbwx24 changed the title from [Training] Whether to support per_channel QAT to [Training] Whether to support weight per_channel QAT Jan 24, 2024
yzg216 commented Jan 24, 2024

I have also encountered this problem and need this feature urgently. If I develop it myself, can you tell me how to fix it? @baijumeswani
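For anyone attempting this in the meantime, here is a minimal PyTorch sketch of the per-channel fake-quant math with a straight-through estimator. It mirrors what the per-tensor FakeQuant kernel computes, extended along a channel axis; it is not onnxruntime's actual implementation, and the qmin/qmax defaults are illustrative:

```python
import torch

class PerChannelFakeQuant(torch.autograd.Function):
    """Per-channel fake quantization with a straight-through estimator.

    A sketch only — a real fix would live in onnxruntime's FakeQuant
    contrib kernel (orttraining/.../quantization/fake_quant.cc).
    """

    @staticmethod
    def forward(ctx, x, scale, zero_point, axis=0, qmin=-128, qmax=127):
        # Broadcast the per-channel scale/zero_point along `axis`,
        # e.g. scale.shape == (64,) -> (64, 1, 1, 1) for a conv weight.
        shape = [1] * x.dim()
        shape[axis] = -1
        s = scale.reshape(shape)
        zp = zero_point.reshape(shape)
        q = torch.round(x / s) + zp
        mask = (q >= qmin) & (q <= qmax)   # where values were NOT clipped
        ctx.save_for_backward(mask)
        return (torch.clamp(q, qmin, qmax) - zp) * s

    @staticmethod
    def backward(ctx, grad_out):
        (mask,) = ctx.saved_tensors
        # Straight-through estimator: pass gradients through unclipped
        # elements; scale/zero_point are treated as non-trainable here.
        return grad_out * mask, None, None, None, None, None

# Usage on a conv weight quantized per output channel:
w = torch.randn(64, 3, 3, 3, requires_grad=True)
scale = w.detach().abs().amax(dim=(1, 2, 3)) / 127.0   # shape (64,)
zp = torch.zeros(64)
w_fq = PerChannelFakeQuant.apply(w, scale, zp)
w_fq.sum().backward()
```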

github-actions bot commented

This issue has been automatically marked as stale due to inactivity and will be closed in 30 days if no further activity occurs. If further support is needed, please provide an update and/or more details.

@github-actions bot added the stale (issues that have not been addressed in a while; categorized by a bot) label Feb 24, 2024