I'm trying to quantize a NASLib model, specifically a model from the NAS-Bench-201 search space.
When I use the quantization flow described in the examples, it doesn't work: the size of the quantized model is equal to the size of the original model. The search space contains mainly CNN architectures.
Hi @anissbslh, currently int8_dynamic_activation_int8_weight only supports quantizing nn.Linear layers, so if the model is mainly CNN layers, it's expected that no layers are quantized right now.
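To see why the file sizes come out essentially equal, here is a back-of-envelope sketch. The parameter counts below are made up for illustration (not NAS-Bench-201's actual counts); the point is that when nearly all weights live in nn.Conv2d layers, converting only the nn.Linear weights from fp32 to int8 barely moves the total:

```python
# Hypothetical layer sizes illustrating why quantizing only nn.Linear
# leaves a mostly-convolutional model's size essentially unchanged.

BYTES_FP32 = 4
BYTES_INT8 = 1

# Made-up parameter counts for a mostly-convolutional model.
conv_params = 1_200_000   # weights in nn.Conv2d layers (untouched)
linear_params = 10_000    # weights in the final nn.Linear classifier

original_size = (conv_params + linear_params) * BYTES_FP32

# int8_dynamic_activation_int8_weight rewrites only nn.Linear weights
# to int8; conv weights stay in fp32.
quantized_size = conv_params * BYTES_FP32 + linear_params * BYTES_INT8

reduction = 1 - quantized_size / original_size
print(f"original:  {original_size / 1e6:.2f} MB")   # 4.84 MB
print(f"quantized: {quantized_size / 1e6:.2f} MB")  # 4.81 MB
print(f"reduction: {reduction:.1%}")                # ~0.6%
```

With numbers like these the "quantized" checkpoint shrinks by well under one percent, which on disk looks like no change at all.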
quantize_(model_copy, int8_dynamic_activation_int8_weight(), device=device)