-
Oh, adding |
-
do_tts.py and read.py work on my Mac Studio (M1 Max), but read_fast.py fails with the usual Mac PyTorch/tortoise complaint about requiring CUDA. I've run into this in other PyTorch projects, but I'm not sure where in tortoise to change just read_fast.py (not the underlying library, since do_tts.py and read.py work fine). I'd prefer to use the MPS backend, which has served me well elsewhere, but I'll settle for CPU if I must. On other projects I've worked around a hard CUDA requirement with a device-selection shim like the sketch below.

Ultimately, I'm curious whether read_fast.py is actually any faster than read.py. Likewise, I'd like to know whether do_tts.py and read.py use MPS or just the CPU, and if they're on CPU, whether I can make them use MPS when possible. I've found some resources on that, but I don't fully understand how to guarantee that tortoise uses MPS. First, though: how do I get read_fast.py running?
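For reference, this is the kind of shim I mean (a minimal sketch assuming PyTorch 1.12+ with the MPS backend; `pick_device` is my own helper, not something tortoise provides):

```python
import torch

def pick_device() -> torch.device:
    """Prefer Apple's MPS backend when available, else fall back to CPU.

    Generic PyTorch device selection; not tortoise's own logic.
    """
    if torch.backends.mps.is_available() and torch.backends.mps.is_built():
        return torch.device("mps")
    return torch.device("cpu")

device = pick_device()
# A script that hard-codes .cuda() would instead move the model and
# inputs explicitly, e.g.:
# model = model.to(device)
# inputs = inputs.to(device)
```

I just don't know where in read_fast.py (or whatever it calls) the equivalent change would need to go.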
Thanks.