Running on macOS? #64
Comments
It does run on macOS. However, you have to use Python 3, so you have to install Python 3 first, for example via Homebrew.
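A minimal sketch of that setup, assuming Homebrew is already installed (the exact commands are my assumption, they are not spelled out in this comment):

```sh
# Install Python 3 (which ships with pip3) via Homebrew
brew install python3

# Install the CLI for that Python 3 interpreter
pip3 install -U whisper-ctranslate2

# Check that the command resolves
whisper-ctranslate2 --help
```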
Thank you, I will try this and come back to you again if problems come up.
One thing I couldn't figure out: which models are supported?
Just use the large model.
Does this only work for mp3? I have an m4a file recorded on my phone. Also, where do I need to execute the command: whisper-ctranslate2 (file name) --model large
When I run the command in the folder where the recording is located, it says the whisper-ctranslate2 command is not found.
If you installed it but the shell cannot find it, make sure it was installed for Python 3 (with pip3); otherwise the whisper-ctranslate2 command won't be on your PATH.

It doesn't matter which format the audio file is in; whisper-ctranslate2 automatically converts it to 16 kHz mono audio internally for the inference engine to transcribe. You can also specify the path to where the audio file is located by prepending the absolute path to the filename, so where you launch the command from doesn't matter.

If the instructions are too much for you, you could look at a macOS app called WhisperScript; it internally uses faster-whisper, which whisper-ctranslate2 is also based on.
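For example (a sketch only; the path and file name below are placeholders, not from this thread):

```sh
# m4a, mp3, wav, etc. all work; the tool resamples to 16 kHz mono internally.
# With an absolute path, the current working directory doesn't matter.
whisper-ctranslate2 /Users/me/Recordings/voice-memo.m4a --model large
```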
You'll have to do pip3 install -U whisper-ctranslate2. Are you on an M1 or M2 machine?
Do you think it installed properly?
Yes, it is installed correctly. However, you have outdated brew formulae, which means you have to update and upgrade Homebrew first.
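I read that as the usual Homebrew refresh (my interpretation, since the exact command was not preserved here):

```sh
brew update    # refresh the formula index
brew upgrade   # upgrade outdated formulae
```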
Sorry, that error is coming from trying to build one of the dependencies from source.

From what I see across the projects that do Whisper AI speech-to-text inference/transcription, like whisper.cpp, faster-whisper, and whisper-ctranslate2, the code primarily targets Intel-based Macs. Your best chance of getting it running is to try the app called WhisperScript; it uses the same faster-whisper code that whisper-ctranslate2 is also based on. The link for the app is in one of my replies above. WhisperScript will run natively on ARM-based Macs.
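If you want to check whether your toolchain is running natively on Apple Silicon or as an Intel build under Rosetta, something like this works (a diagnostic sketch, not taken from the thread):

```sh
uname -m                                                  # arm64 on M1/M2 hardware
python3 -c "import platform; print(platform.machine())"   # x86_64 here means an Intel/Rosetta Python
```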
OK, thank you. So WhisperScript is more or less the same thing as what I am trying to do here?
Yes, it's a GUI-based app. You don't need to use the terminal anymore.
@dazzng Weren't you running faster-whisper from my repo? This is practically the same thing; it won't work on your GPU.
Yeah, I was just trying alternatives.
I have an M1 and use whisper.cpp with the large and medium Core ML models. The first time large is run it takes 15 1/2 hours until it starts, and the first time medium takes 2 to 3 hours, if I remember correctly. I didn't want to compile them with Xcode and get a developer account, so I just downloaded precompiled models, but it still takes that extra "first-time run". I get around 1.7x real-time speed on large and about 3.5x real-time speed with medium. I use -ng (no GPU) for large since I only have 8 GB of RAM; it goes a tad slower, but at least I can still use the laptop.
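For reference, the whisper.cpp invocation described above would look roughly like this (model and audio file names are placeholders; -m, -f, and -ng are whisper.cpp's standard flags):

```sh
# Large model with the GPU path disabled (-ng) to keep memory usage down
./main -m models/ggml-large.bin -f recording.wav -ng

# Medium model with default settings
./main -m models/ggml-medium.bin -f recording.wav
```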
Can you run this on macOS?
If so, where do I type this command: pip install -U whisper-ctranslate2
Can I open a terminal inside the folder where the files are located and run this command, or do I have to install Python first?