
limit pytorch version to cudnn8 for pip install #958

Open
wants to merge 1 commit into
base: master

Conversation

NewUserHa

Note: Version 9+ of nvidia-cudnn-cu12 appears to cause issues due to its reliance on cuDNN 9 (Faster-Whisper does not currently support cuDNN 9). Ensure your version of the Python package is built for cuDNN 8.

Because all PyTorch >= 2.4 builds on conda are now compiled against cuDNN 9, this PR keeps new users who just ran `pip install faster-whisper` from hitting this error right after installing:

>>Performing transcription...
Could not locate cudnn_ops_infer64_8.dll. Please make sure it is in your library path!
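The single-commit diff isn't shown above, but judging from the PR title, the constraint is presumably along these lines (a hypothetical requirements fragment; the exact bound is my assumption, not the PR's actual text):

```
# hypothetical pin: torch wheels before 2.4 still link against cuDNN 8
torch<2.4
```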

@BBC-Esq
Contributor

BBC-Esq commented Aug 12, 2024

I personally don't know why faster-whisper isn't compatible with cuDNN 9+ yet, but if for some reason they don't want to support it, they could add something like this to their code:

1. Add Nvidia cudnn to the installation process - e.g.:

pip install nvidia-cudnn-cu12==8.9.7.29

2. In the entry point for the library, add the following, which prepends to the relevant paths (but does not replace them):

  • Tip: You can change "CUDA_PATH_V12_1" to another version of CUDA if you want as well.
import logging
import os
import sys
import traceback
from pathlib import Path

def set_cuda_paths():
    try:
        # note: 'Lib/site-packages' is the Windows venv layout
        venv_base = Path(sys.executable).parent
        nvidia_base_path = venv_base / 'Lib' / 'site-packages' / 'nvidia'
        for env_var in ['CUDA_PATH', 'CUDA_PATH_V12_1', 'PATH']:
            current_path = os.environ.get(env_var, '')
            os.environ[env_var] = os.pathsep.join(filter(None, [str(nvidia_base_path), current_path]))
        logging.info("CUDA paths set successfully")
    except Exception as e:
        logging.error(f"Error setting CUDA paths: {str(e)}")
        logging.debug(traceback.format_exc())
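The prepend-without-clobbering behavior of the `os.pathsep.join(filter(None, ...))` line can be shown in isolation (a minimal sketch; `prepend_to_env` is a hypothetical helper, not part of faster-whisper):

```python
import os

def prepend_to_env(env_var, new_dir, environ):
    """Prepend new_dir to environ[env_var] using the platform's path
    separator; filter(None, ...) drops the old value when it is empty,
    so no dangling separator is left behind."""
    current = environ.get(env_var, "")
    environ[env_var] = os.pathsep.join(filter(None, [new_dir, current]))
    return environ[env_var]

env = {"PATH": "/usr/bin"}
prepend_to_env("PATH", "/opt/nvidia", env)       # new_dir first, old value kept
prepend_to_env("CUDA_PATH", "/opt/nvidia", env)  # no old value -> just "/opt/nvidia"
```

Because the old value is kept, calling this at library import time cannot break an existing system-wide CUDA install; it only adds a higher-priority search location.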

3. Take it a step further and add cuBLAS, the CUDA runtime, or whatever else is needed - e.g.:

pip install nvidia-cuda-runtime-cu12==12.1.105
pip install nvidia-cublas-cu12==12.1.3.1
pip install nvidia-cuda-nvrtc-cu12==12.1.105
pip install [fill in the blank with nvidia library]

That way users wouldn't have to worry about installing CUDA/CUDNN globally at all.
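To sanity-check that approach, a helper like the following (hypothetical - `nvidia_lib_dirs` is not a real faster-whisper function; it assumes the usual layout where pip puts these wheels under site-packages/nvidia) can list where the shared libraries actually landed:

```python
from pathlib import Path

def nvidia_lib_dirs(site_packages):
    """Return every directory under <site_packages>/nvidia that holds a
    shared library (.dll on Windows, .so/.so.N on Linux) -- i.e. the
    directories that need to end up on the loader's search path."""
    nvidia_root = Path(site_packages) / "nvidia"
    if not nvidia_root.is_dir():
        return []
    return sorted(
        str(d)
        for d in nvidia_root.rglob("*")
        if d.is_dir()
        and any(
            f.is_file() and (f.suffix == ".dll" or ".so" in f.suffixes)
            for f in d.iterdir()
        )
    )
```

On Windows, the `site_packages` argument would typically be `Path(sys.executable).parent / 'Lib' / 'site-packages'`, matching the venv layout used in `set_cuda_paths` above.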

Again, not sure why faster-whisper chooses not to update compatibility - I know it's a hassle - but perhaps this is a more elegant solution that would work either way.

@NewUserHa
Author

According to https://www.github.com/pytorch/pytorch/issues/100974, pip install torch will automatically install CUDA/cuDNN. (In my test it failed to on Windows, but it may work on Linux.)

Therefore, until faster-whisper is compatible with cuDNN 9, this PR should be a decent workaround.

@aligokalppeker

aligokalppeker commented Aug 13, 2024

@BBC-Esq faster-whisper is not compatible with cuDNN 9 because CTranslate2 is not. It cannot be made compatible without a custom build of CTranslate2, since CTranslate2 is the core of all the CUDA functionality.

@jhj0517

jhj0517 commented Oct 4, 2024

I wish this would be merged as a workaround until OpenNMT/CTranslate2#1780 is fixed.

When you install faster-whisper in a completely new environment with just:

pip install faster-whisper torch --index-url https://download.pytorch.org/whl/cu121

torch will be installed at its latest version, and faster-whisper will be unusable because of this bug.

I hope this workaround will be merged for now until CTranslate2 really supports the cuDNN 9 build.
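The cutoff this PR relies on can be expressed as a simple check (a sketch only; `links_cudnn8` is a hypothetical helper, and the `< 2.4` boundary reflects this thread's claim about where the cuDNN 9 builds start):

```python
def links_cudnn8(torch_version: str) -> bool:
    """Per this thread, PyTorch wheels before 2.4 link against cuDNN 8,
    and 2.4+ link against cuDNN 9. Parses versions like '2.3.1+cu121'."""
    release = torch_version.split("+")[0]            # drop the local tag
    major, minor = (int(p) for p in release.split(".")[:2])
    return (major, minor) < (2, 4)

links_cudnn8("2.3.1+cu121")  # True  -> safe with faster-whisper today
links_cudnn8("2.4.0")        # False -> ships cuDNN 9, triggers the DLL error
```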
