The first time a model is used it has to be downloaded, which can take a while; after that, the model is cached on disk.
How long transcription takes depends on the size of the model and the hardware it runs on. Without a GPU, or on a lower-end machine, it will take longer.
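As a quick way to see whether the download step will hit you, you can check the cache directory before loading a model. This is a minimal sketch assuming the openai-whisper package's default cache location (`~/.cache/whisper`, or `$XDG_CACHE_HOME/whisper` if set); the helper names are my own, not part of any library:

```python
import os
import pathlib

def whisper_cache_dir() -> pathlib.Path:
    # openai-whisper downloads model weights here by default;
    # honors XDG_CACHE_HOME, falling back to ~/.cache
    base = os.getenv("XDG_CACHE_HOME", os.path.expanduser("~/.cache"))
    return pathlib.Path(base) / "whisper"

def is_cached(model_name: str) -> bool:
    # Weights are stored as <model_name>.pt, e.g. tiny.en.pt
    return (whisper_cache_dir() / f"{model_name}.pt").exists()

print(is_cached("tiny.en"))
```

If `is_cached` returns False, expect the first run to spend time downloading; subsequent runs should load straight from disk. Picking a smaller model (e.g. `tiny` or `base` instead of `medium`) is the simplest way to cut both load and inference time.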
For example, if I say "I am XYZ," it takes roughly the time mentioned above to transcribe it. How can I speed this up?
Also, why does loading the model take an unexpectedly long time?
@davabase