I converted Belle-whisper-large-v2 with CTranslate2, and the resulting model is almost the same size as faster-whisper-large-v2. But when the word_timestamps parameter is True, the Belle model takes much longer (at least 3x, sometimes 10x) than the faster-whisper model. Is this normal?
I converted the model with the following command:
ct2-transformers-converter --model .\Belle-whisper-large-v2-zh\ --output_dir faster-belle-whisper-large-v2-zh --copy_files preprocessor_config.json --quantization float16
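To make the slowdown measurable rather than anecdotal, it can help to time the transcription call directly. Below is a minimal, hedged sketch of a timing helper; the faster-whisper usage in the comments (the WhisperModel constructor, the model directory name, and the sample audio file) is an assumption based on the converter output above, not something verified in this thread.

```python
import time


def timed_transcribe(transcribe_fn, audio_path, **kwargs):
    """Run any transcription callable and return (result, elapsed_seconds).

    transcribe_fn is deliberately generic so the same helper can time both
    the Belle conversion and the stock faster-whisper model.
    """
    start = time.perf_counter()
    result = transcribe_fn(audio_path, **kwargs)
    elapsed = time.perf_counter() - start
    return result, elapsed


# Assumed usage with faster-whisper (paths and file names are hypothetical):
# from faster_whisper import WhisperModel
# model = WhisperModel("faster-belle-whisper-large-v2-zh", compute_type="float16")
# segments, seconds = timed_transcribe(
#     lambda p, **kw: list(model.transcribe(p, **kw)[0]),
#     "sample.wav",
#     word_timestamps=True,
# )
```

Running the same audio through both models, with and without word_timestamps=True, would show whether the 3x-10x gap comes from the timestamp alignment step or from the decoding itself.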
This confuses me too. Belle-whisper has exactly the same model architecture as Whisper.
BTW, check the output lengths of belle-whisper and faster-whisper; a difference in output length may explain the speed gap.
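The length check suggested above can be sketched as a small comparison over the transcript text of each run. This is a hypothetical helper, assuming the segments have already been collected as plain strings (with faster-whisper they would come from each segment's text attribute):

```python
def length_gap(segments_a, segments_b):
    """Compare total transcript character counts from two runs.

    A much longer output means more tokens to align when word timestamps
    are enabled, which could plausibly account for a large speed gap.
    Returns (len_a, len_b, ratio of a to b).
    """
    len_a = sum(len(s) for s in segments_a)
    len_b = sum(len(s) for s in segments_b)
    ratio = len_a / len_b if len_b else float("inf")
    return len_a, len_b, ratio


# Hypothetical usage: feed it the text of each model's segments, e.g.
# length_gap([s.text for s in belle_segments],
#            [s.text for s in faster_segments])
```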