
[Feature Request] Making the CUDA version explicit in the Python package name #19438

Open
martinResearch opened this issue Feb 6, 2024 · 2 comments
Labels: ep:CUDA, feature request, release:1.17.0

Comments

@martinResearch

Describe the feature request

Onnxruntime-gpu packages can be built against either CUDA 11 or CUDA 12.
As described in the documentation, one must point pip at a different
Python package index URL to choose which CUDA version the installed package supports.
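For illustration, the install currently looks roughly like this (the CUDA 12 index URL is the one given in the ONNX Runtime docs at the time of writing and may change):

```sh
# Default index (pypi.org): onnxruntime-gpu built against CUDA 11.x (as of 1.17)
pip install onnxruntime-gpu

# CUDA 12.x build: same package name, but served from a different index
pip install onnxruntime-gpu --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/
```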

Given a version number alone, we do not know which CUDA version is supported unless we also know which package index URL was used for the install, which does not seem ideal in terms of handling Python dependencies in complex engineering systems. For example, this makes things very complicated when using a private Python package repository with a mechanism that caches wheels from upstream feeds: we cannot have both packages (the CUDA 11 build and the CUDA 12 build) in the cache, because they share the same name.
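A quick check illustrates the ambiguity (assuming onnxruntime-gpu 1.17.x is installed): the version string is identical for the CUDA 11 and CUDA 12 builds, so the installed metadata alone cannot tell them apart.

```sh
python -c "import onnxruntime; print(onnxruntime.__version__)"
# -> 1.17.0, for either the CUDA 11 or the CUDA 12 build
```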

In comparison, CuPy uses a naming convention where the CUDA version appears explicitly in the package name (cupy-cuda12x, cupy-cuda11x). This makes things much more convenient.
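With CuPy the choice is visible directly in the install command, and both wheels live side by side on pypi.org:

```sh
pip install cupy-cuda11x   # built against CUDA 11.x
pip install cupy-cuda12x   # built against CUDA 12.x, a distinct package on pypi.org
```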

Could the same potentially be done for onnxruntime-gpu, i.e. could we use the package names onnxruntime-gpu-cuda11x and onnxruntime-gpu-cuda12x and publish both on pypi.org?
Or is the plan to stop supporting CUDA 11 altogether in future onnxruntime-gpu releases?
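Under the proposed scheme the wheel filenames would differ as well (the filenames below are hypothetical, with an illustrative platform tag), so both builds could sit side by side in a wheel cache:

```
onnxruntime_gpu_cuda11x-1.17.0-cp310-cp310-manylinux_2_28_x86_64.whl
onnxruntime_gpu_cuda12x-1.17.0-cp310-cp310-manylinux_2_28_x86_64.whl
```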

Describe scenario use case

Caching onnxruntime-gpu wheels for CUDA 11 and CUDA 12 in a private package feed.

martinResearch added the feature request label on Feb 6, 2024
@martinResearch (Author)

Another approach, taken by PyTorch, is to make the CUDA version explicit in the version string of the package, for example torch==2.0.0+cu118.
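The +cu118 suffix is a PEP 440 local version label; pypi.org does not accept local version labels, which is why PyTorch serves these wheels from its own index:

```sh
pip install torch==2.0.0+cu118 --index-url https://download.pytorch.org/whl/cu118
```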

@martinResearch (Author)

Has there been any progress or a plan to tackle this problem? Thank you!
