TTS model doesn't fit into Jetson Nano 2GB #19
Comments
I think I'm having the same problem with the Jetson Nano 4GB - it just crashes on the onnxruntime InferenceSession() call. Do you need the 8GB Nano to run the standard TTS models? Maybe I can find a smaller model.
I don't think there is an 8GB Nano version out there :)
Oh, I guess that 8GB Orin Jetson thing is a SOM packaged by a 3rd party into a board. OK - thanks, I will dig deeper. I was going to switch to pyttsx3, but I will keep going with jetson-voice. Those ONNX models are a lot bigger than the ASR models. I get a "process killed" message and it stops after the InferenceSession call. I'm still in the Docker container. Learning so much from all this modern architecture! Cool stuff. Love how all the papers are referenced etc.
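The "process killed" message is typically the kernel's OOM killer ending the process. As a minimal sketch, assuming the model is loaded through onnxruntime as in jetson-voice, these session options can lower the peak memory footprint; the model path below is only a placeholder:

```python
import onnxruntime as ort

model_path = "fastpitch_hifigan.onnx"  # placeholder -- substitute the real exported model

opts = ort.SessionOptions()
opts.enable_cpu_mem_arena = False  # don't pre-reserve a large CPU memory arena
opts.enable_mem_pattern = False    # skip memory-pattern planning, which raises peak usage
opts.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_BASIC

# Providers are tried in order; the CPU provider is the fallback if CUDA runs out of memory.
session = ort.InferenceSession(
    model_path,
    sess_options=opts,
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
print([inp.name for inp in session.get_inputs()])
```

Whether this is enough for a 2GB board is another question; if the weights alone exceed the available RAM, no session option will help.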
Well, I got jetson-voice TTS working on the Jetson Nano 4GB, but it has some drawbacks: it's not very reliable and the output for some texts is pretty bad. I eventually switched to https://github.com/coqui-ai/TTS, which still works in the Jetson Nano Python environment in the 0.6.1 version. It's a bit slower, but still realtime, and it even works on the Jetson Nano 2GB. I use these parameters:
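For anyone wanting to reproduce a setup like that, here is a minimal sketch of driving Coqui TTS 0.6.x from Python; the model and vocoder names (LJSpeech Glow-TTS with Multi-band MelGAN) are only an illustration and not necessarily the exact parameters referred to above:

```python
from pathlib import Path

import TTS
from TTS.utils.manage import ModelManager
from TTS.utils.synthesizer import Synthesizer

# Use the model catalogue shipped with the package, the same file the `tts` CLI reads.
models_file = Path(TTS.__file__).parent / ".models.json"
manager = ModelManager(models_file)

# Illustrative choices -- any entry from `tts --list_models` can be substituted.
model_path, config_path, _ = manager.download_model("tts_models/en/ljspeech/glow-tts")
voc_path, voc_config_path, _ = manager.download_model("vocoder_models/en/ljspeech/multiband-melgan")

synth = Synthesizer(
    tts_checkpoint=model_path,
    tts_config_path=config_path,
    vocoder_checkpoint=voc_path,
    vocoder_config=voc_config_path,
    use_cuda=False,  # CPU inference keeps pressure off the Nano's small shared RAM
)

wav = synth.tts("Testing text to speech on the Jetson Nano.")
synth.save_wav(wav, "out.wav")
```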
OK, I just looked up pyttsx3. It is just a wrapper around espeak. You might as well use that directly. It sounds pretty terrible though, in my opinion.
I experienced issues running TTS on the Jetson Nano, and the same with QuartzNet: it simply froze the system. But MatchboxNet and MarbleNet, which are smaller models, work fine. For TTS I will try coqui-ai, as I have used it in the past.
I noticed that the provided fastpitch_hifigan model doesn't work with 2GB of RAM. Is anyone aware of a smaller model in NEMO that I can try to convert?
I also tried to run the model with TensorRT instead of the default onnxruntime, but some bugs in TensorRT prevent this.
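One thing that may be worth trying before a separate TensorRT conversion: onnxruntime can delegate to TensorRT through its execution-provider list and fall back to CUDA or the CPU when that fails. This is only a sketch, assuming an onnxruntime-gpu build with the TensorRT provider compiled in and a placeholder model path; since the provider uses TensorRT underneath, it does not necessarily avoid the bugs mentioned above, but it does make the fallback automatic:

```python
import onnxruntime as ort

model_path = "fastpitch_hifigan.onnx"  # placeholder path

# Providers are tried in order; if the TensorRT provider fails,
# onnxruntime falls back to CUDA and then to the CPU.
providers = [
    "TensorrtExecutionProvider",
    "CUDAExecutionProvider",
    "CPUExecutionProvider",
]

session = ort.InferenceSession(model_path, providers=providers)
print("Active providers:", session.get_providers())
```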