Support different tts model types. #1541
Conversation
low (quality) -> runs faster. high (quality) -> runs slower
@@ -38,6 +39,36 @@
    "hifigan_multi_scale_multi_period_discriminator": HiFiGANMultiScaleMultiPeriodDiscriminator,  # NOQA
}

LOW_CONFIG = {
The config values are from
https://github.com/rhasspy/piper/blob/master/src/python/piper_train/__main__.py#L68
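For context, a rough sketch of what the quality presets could look like, assuming the hyperparameter names follow piper's VITS-style model config; the exact keys and values below are illustrative guesses loosely modeled on the linked `__main__.py` and should be checked against that file rather than copied:

```python
# Illustrative sketch only -- key names and values are assumptions loosely
# modeled on piper's quality presets; verify against the linked __main__.py.

# "low" quality: narrower hidden/filter widths -> smaller model, faster synthesis.
LOW_CONFIG = {
    "hidden_channels": 96,
    "inter_channels": 96,
    "filter_channels": 384,
}

# "high" quality: larger HiFi-GAN-style decoder -> better audio, slower synthesis.
HIGH_CONFIG = {
    "resblock": "1",
    "resblock_kernel_sizes": (3, 7, 11),
    "resblock_dilation_sizes": ((1, 3, 5), (1, 3, 5), (1, 3, 5)),
    "upsample_rates": (8, 8, 2, 2),
    "upsample_initial_channel": 512,
    "upsample_kernel_sizes": (16, 16, 4, 4),
}
```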
quantize_dynamic(
    model_input=model_filename,
    model_output=model_filename_int8,
    weight_type=QuantType.QUInt8,
)
Quantizing using quint8 is very slow at run time, so we removed it.
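For reference, a minimal self-contained sketch of the dynamic quantization step being discussed, using ONNX Runtime's `quantize_dynamic`; the file paths are placeholders, and the snippet only illustrates the call that was removed, not what this PR finally ships:

```python
# Sketch of ONNX Runtime dynamic quantization; paths are placeholders.
from onnxruntime.quantization import QuantType, quantize_dynamic

model_filename = "model.onnx"            # exported TTS model (placeholder path)
model_filename_int8 = "model.int8.onnx"  # quantized output (placeholder path)

# Dynamic quantization stores weights as 8-bit integers and quantizes
# activations on the fly at run time. QUInt8 weights turned out to be very
# slow at inference here, which is why this call was dropped from the PR.
quantize_dynamic(
    model_input=model_filename,
    model_output=model_filename_int8,
    weight_type=QuantType.QUInt8,
)
```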
Need to test it before merging.