[Training] qat #18534
Comments
Is it possible to know more about how you got this error? I assume you called the function create_training_artifacts. Was it a quantized model?
When I run the qat.py script directly, it reports this error from def create_training_artifacts(model_path, artifacts_dir, model_prefix):
It is a quantized model. We use this model for Quantization-Aware Training (QAT): https://github.com/microsoft/onnxruntime/tree/v1.16.2/orttraining/orttraining/test/python/qat_poc_example
@xll426 QAT in ORT is currently in an experimental phase, and it is known that the feature is not complete yet. I will find some time to fix the POC. Sorry about your experience.
https://github.com/microsoft/onnxruntime/pull/19290 should fix this. Sorry for the late response and fix.
Describe the issue
RuntimeError: /onnxruntime_src/orttraining/orttraining/core/optimizer/qdq_fusion.cc:25 int onnxruntime::{anonymous}::ReplaceOrCreateZeroPointInitializer(onnxruntime::Graph&, onnxruntime::Node&) zero_point_tensor_int != nullptr was false. Expected: zero point initializer with name input-0_zero_point to be present in the graph. Actual: not found.
To reproduce
_ = mnist_with_loss(*[output.name for output in onnx_model.graph.output])
Urgency
No response
ONNX Runtime Installation
Released Package
ONNX Runtime Version or Commit ID
1.16.3
PyTorch Version
1.13.1
Execution Provider
CUDA
Execution Provider Library Version
cuda11.6