Loading quantized models with the right state dictionary
I am using Distiller's ResNet-56 model for CIFAR. After quantization-aware training with QuantAwareTrainRangeLinearQuantizer, what is the procedure for instantiating that model and loading the correct layers from the state dictionary? The saved state dictionary contains the floating-point values for each layer, while the stock model from Distiller lacks the additional layers that the quantized model has. Is the procedure to instantiate the same QuantAwareTrainRangeLinearQuantizer (but with train_with_fp_copy=False), call prepare_model on my model, and then restore all layers from the checkpoint's state dictionary, excluding the floating-point copies of the weights?
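Concretely, here is the sequence I have in mind. This is only a sketch of my understanding: the bit-widths, the checkpoint filename, and the `parallel` setting are placeholders for the values from my own run, and whether `prepare_model` takes a dummy input seems to depend on the Distiller version.

```python
import torch
from distiller.models import create_model
from distiller.quantization import QuantAwareTrainRangeLinearQuantizer

# 1. Re-create the float model exactly as it looked before quantization.
#    parallel=False keeps the state-dict keys free of the 'module.' prefix;
#    this should match whatever was used when the checkpoint was saved.
model = create_model(False, 'cifar10', 'resnet56_cifar', parallel=False)

# 2. Re-apply the same quantizer with the same settings used for training, so
#    the model gains the extra quantization parameters the checkpoint expects.
#    The QAT quantizer trains with a floating-point copy of the weights, so it
#    wants an optimizer even if the goal is only to reload for inference.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
quantizer = QuantAwareTrainRangeLinearQuantizer(model, optimizer,
                                                bits_activations=8,
                                                bits_weights=8)
quantizer.prepare_model(torch.randn(1, 3, 32, 32))  # CIFAR-shaped dummy input

# 3. Restore the trained values. With the FP copies still in place, the keys
#    should line up one-to-one; strict=False surfaces any leftover mismatches.
checkpoint = torch.load('qat_checkpoint.pth.tar', map_location='cpu')
state_dict = checkpoint.get('state_dict', checkpoint)
model.load_state_dict(state_dict, strict=False)
model.eval()
```

Alternatively, it looks like distiller.apputils.load_checkpoint re-applies the quantizer automatically from the quantizer_metadata entry that save_checkpoint stores in the checkpoint. Is that the intended route, rather than rebuilding the quantizer and filtering keys by hand as above?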