Gpu timeout x86 macos #439

Open
darenn1 opened this issue Dec 10, 2024 · 13 comments
Labels
bug Something isn't working

Comments

darenn1 commented Dec 10, 2024

What happened?

A bug happened!

Steps to reproduce

  1. step one...
  2. step two...

What OS are you seeing the problem on?

macOS

Relevant log output

options: {
  "path": "/Users/i/Documents/2024-12-10 10-14-54.wav",
  "lang": "en",
  "verbose": false,
  "n_threads": 4,
  "init_prompt": "",
  "temperature": 0.4,
  "translate": null,
  "max_text_ctx": null,
  "word_timestamps": false,
  "max_sentence_len": 1
}

Caused by:
   0: failed to transcribe
   1: Generic whisper error. Varies depending on the function. Error code: -6

Location:
    core/src/transcribe.rs:303:38
App Version: vibe 2.6.9
Commit Hash: 7ae85a2
Arch: x86_64
Platform: macos
Kernel Version: 14.6.1
OS: macos
OS Version: 14.6.1
Cuda Version: n/a
Models: ggml-medium-q8_0.bin
Default Model: ggml-medium-q8_0.bin
Cargo features: 


CPU feature detection is not supported on this architecture.


<details>
<summary>logs</summary>


whisper_init_state: failed to load Core ML model from '/Users/i/Library/Application Support/github.com.thewh1teagle.vibe/ggml-medium-encoder.mlmodelc'
error: Caused GPU Timeout Error (00000002:kIOAccelCommandBufferCallbackErrorTimeout)
whisper_full_with_state: failed to encode
whisper_init_state: failed to load Core ML model from '/Users/i/Library/Application Support/github.com.thewh1teagle.vibe/ggml-medium-encoder.mlmodelc'
error: Caused GPU Timeout Error (00000002:kIOAccelCommandBufferCallbackErrorTimeout)
whisper_full_with_state: failed to encode
cmd: "/Applications/vibe.app/Contents/MacOS/../Resources/ffmpeg" "-i" "/var/folders/9v/9yn9yhf946s8b78ts288nvjw0000gn/T/vibe_temp_2024-12-10/gFFsUwc2i0.wav" "-ar" "16000" "-ac" "1" "-c:a" "pcm_s16le" "/var/folders/9v/9yn9yhf946s8b78ts288nvjw0000gn/T/vibe_temp_2024-12-10/2024-12-10 10-14-54.wav" "-hide_banner" "-y" "-loglevel" "error"
whisper_init_state: failed to load Core ML model from '/Users/i/Library/Application Support/github.com.thewh1teagle.vibe/ggml-medium-encoder.mlmodelc'
error: Caused GPU Timeout Error (00000002:kIOAccelCommandBufferCallbackErrorTimeout)
whisper_full_with_state: failed to encode

</details>
@darenn1 darenn1 added the bug Something isn't working label Dec 10, 2024
@thewh1teagle
Owner

Hey,
I don't have an x86 macOS machine to test on.
Can you check whether whisper.cpp works? See debug.md in this repo,
or let me know if you need more instructions.

@thewh1teagle thewh1teagle changed the title App reports bug Gpu timeout x86 macos Dec 11, 2024
This was referenced Dec 11, 2024
@thewh1teagle
Owner

I can try to fix it in Vibe, but I don't have macOS x86 to test it. If I create a fix, can you test it?

@darenn1
Author

darenn1 commented Dec 11, 2024

> I can try to fix it in Vibe, but I don't have macOS x86 to test it. If I create a fix, can you test it?

yea for sure

@dariusdarwish

This is my device: Mac mini 2018, 3.6 GHz quad-core Intel Core i3, Intel UHD Graphics 630 1536 MB.

I'm here if you need help with testing.

@dariusdarwish

@thewh1teagle Could you give me simple, beginner-friendly instructions for running whisper.cpp?

@thewh1teagle
Owner

thewh1teagle commented Dec 11, 2024

  1. Open Spotlight (Command + Space) and search for Terminal.
  2. Press Enter.
  3. Paste the following:
cd /tmp
wget https://github.com/thewh1teagle/vibe/raw/main/samples/single.wav -O single.wav
wget https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-tiny.bin -O ggml-tiny.bin
wget https://github.com/thewh1teagle/vibe/releases/download/v2.6.9/whisper-rs -O whisper-rs
chmod +x ./whisper-rs
reset
./whisper-rs ggml-tiny.bin single.wav
  4. Wait a few seconds for the transcription to finish.
  5. Paste the logs here.

Note: you can run this safely; I compiled the binary myself and it's hosted in this repository.

@dariusdarwish

dariusdarwish commented Dec 11, 2024

@thewh1teagle

dyld[9810]: Library not loaded: @rpath/libwhisper.1.dylib
Referenced from: /private/tmp/whisper-macos-x64 (built for macOS 14.0 which is newer than running OS)
Reason: tried: '/Volumes/Internal/audio/whisper.cpp/build/src/libwhisper.1.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/Volumes/Internal/audio/whisper.cpp/build/src/libwhisper.1.dylib' (no such file), '/Volumes/Internal/audio/whisper.cpp/build/ggml/src/libwhisper.1.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/Volumes/Internal/audio/whisper.cpp/build/ggml/src/libwhisper.1.dylib' (no such file), '/Volumes/Internal/audio/whisper.cpp/build/ggml/src/ggml-cpu/libwhisper.1.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/Volumes/Internal/audio/whisper.cpp/build/ggml/src/ggml-cpu/libwhisper.1.dylib' (no such file), '/Volumes/Internal/audio/whisper.cpp/build/ggml/src/ggml-blas/libwhisper.1.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/Volumes/Internal/audio/whisper.cpp/build/ggml/src/ggml-blas/libwhisper.1.dylib' (no such file), '/Volumes/Internal/audio/whisper.cpp/build/ggml/src/ggml-metal/libwhisper.1.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/Volumes/Internal/audio/whisper.cpp/build/ggml/src/ggml-metal/libwhisper.1.dylib' (no such file), '/Volumes/Internal/audio/whisper.cpp/build/src/libwhisper.1.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/Volumes/Internal/audio/whisper.cpp/build/src/libwhisper.1.dylib' (no such file), '/Volumes/Internal/audio/whisper.cpp/build/ggml/src/libwhisper.1.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/Volumes/Internal/audio/whisper.cpp/build/ggml/src/libwhisper.1.dylib' (no such file), '/Volumes/Internal/audio/whisper.cpp/build/ggml/src/ggml-cpu/libwhisper.1.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/Volumes/Internal/audio/whisper.cpp/build/ggml/src/ggml-cpu/libwhisper.1.dylib' (no such file), '/Volumes/Internal/audio/whisper.cpp/build/ggml/src/ggml-blas/libwhisper.1.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/Volumes/Internal/audio/whisper.cpp/build/ggml/src/ggml-blas/libwhisper.1.dylib' (no such 
file), '/Volumes/Internal/audio/whisper.cpp/build/ggml/src/ggml-metal/libwhisper.1.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/Volumes/Internal/audio/whisper.cpp/build/ggml/src/ggml-metal/libwhisper.1.dylib' (no such file)
zsh: abort ./whisper-macos-x64 -m ggml-tiny-q8_0.bin -f short.wav

@thewh1teagle
Owner

thewh1teagle commented Dec 11, 2024

@darenn1

I made a mistake in the instructions. Try closing the terminal, opening it again, and pasting the new command. (edited)

@dariusdarwish

@thewh1teagle whisper_init_from_file_with_params_no_state: loading model from 'ggml-tiny-q8_0.bin'
whisper_init_with_params_no_state: use gpu = 1
whisper_init_with_params_no_state: flash attn = 0
whisper_init_with_params_no_state: gpu_device = 0
whisper_init_with_params_no_state: dtw = 0
whisper_init_with_params_no_state: backends = 3
whisper_model_load: loading model
whisper_model_load: invalid model data (bad magic)
whisper_init_with_params_no_state: failed to load model
thread 'main' panicked at examples/basic_use.rs:26:10:
failed to load model: InitError
note: run with RUST_BACKTRACE=1 environment variable to display a backtrace
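The "invalid model data (bad magic)" line above means whisper.cpp rejected the file's header before any GPU work started, which usually points to a truncated or wrong download rather than an x86-specific bug. Below is a minimal sketch of that header check, assuming the classic single-file GGML format that whisper.cpp's `.bin` models use; the magic constant comes from ggml's sources, and this helper is illustrative, not Vibe's actual code:

```rust
use std::fs::File;
use std::io::Read;

// GGML single-file models begin with the magic number 0x67676d6c
// ("ggml" in ASCII) stored little-endian, so the first four bytes on
// disk are b"lmgg". whisper.cpp reports "invalid model data (bad magic)"
// when this check fails, e.g. for a truncated or empty download.
const GGML_FILE_MAGIC: u32 = 0x67676d6c;

fn has_ggml_magic(path: &str) -> std::io::Result<bool> {
    let mut buf = [0u8; 4];
    File::open(path)?.read_exact(&mut buf)?;
    Ok(u32::from_le_bytes(buf) == GGML_FILE_MAGIC)
}

fn main() -> std::io::Result<()> {
    // Demo on a generated file; pass a real model path to check a download.
    let path = std::env::temp_dir().join("demo-ggml-magic.bin");
    std::fs::write(&path, GGML_FILE_MAGIC.to_le_bytes())?;
    println!("magic ok: {}", has_ggml_magic(path.to_str().unwrap())?);
    Ok(())
}
```

If a downloaded model fails this check, re-downloading it (or re-running the `wget` lines above) is the quickest fix.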

@thewh1teagle
Owner

thewh1teagle commented Dec 11, 2024

> whisper_init_with_params_no_state: use gpu = 1
> whisper_init_with_params_no_state: flash attn = 0
> whisper_init_with_params_no_state: gpu_device = 0

That error still isn't the relevant one; I've edited the commands again.

@dariusdarwish

@thewh1teagle
whisper_init_from_file_with_params_no_state: loading model from 'ggml-tiny.bin'
whisper_init_with_params_no_state: use gpu = 1
whisper_init_with_params_no_state: flash attn = 0
whisper_init_with_params_no_state: gpu_device = 0
whisper_init_with_params_no_state: dtw = 0
whisper_init_with_params_no_state: backends = 3
whisper_model_load: loading model
whisper_model_load: invalid model data (bad magic)
whisper_init_with_params_no_state: failed to load model
thread 'main' panicked at examples/basic_use.rs:26:10:
failed to load model: InitError
note: run with RUST_BACKTRACE=1 environment variable to display a backtrace

@thewh1teagle
Owner

thewh1teagle commented Dec 11, 2024

> whisper_init_from_file_with_params_no_state: loading model from 'ggml-tiny.bin'

Very strange. You can try taking the last line of the commands, omitting ggml-tiny.bin, and dragging a model file from the models folder (open it from Vibe's settings) into the terminal; it will paste the full path. Then press Enter again.
