ORT 1.17.0 Release Candidates available for testing #19236
Comments
Please share CUDA 12 packages for Python and NuGet.
Agreed, prebuilt CUDA 12.1 packages would be much appreciated, like the nightly you started publishing here: https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ort-cuda-12-nightly/PyPI/ort-nightly-gpu/versions/1.17.0.dev20231205004
Here are the CUDA 12 RC packages for Python and NuGet (based on the same commit id):
- ort-nightly-gpu: 1.17.0.dev20240118002
- Microsoft.ML.OnnxRuntime.Managed: 1.17.0-dev-20240118-2235-a63b71eadb
- Microsoft.ML.OnnxRuntime.Gpu: 1.17.0-dev-20240118-2235-a63b71eadb
- Microsoft.ML.OnnxRuntime.Gpu.Linux: 1.17.0-dev-20240118-2235-a63b71eadb
- Microsoft.ML.OnnxRuntime.Gpu.Windows: 1.17.0-dev-20240118-2235-a63b71eadb
@HectorSVC why was this issue unpinned? We pin our release candidates in GitHub issues so our partners and community members have easier access and can test them.
Thanks for managing the release @YUNQIUGUO. Given that a PyPI release has been made for 1.17, is there a plan to tag and release it on GitHub as well? For context, the onnxruntime feedstock on conda-forge typically uses the tagged GitHub release to build ORT.
Yep, we are still waiting for the last couple of packages to be uploaded to the package management repo. After everything is completed, we will do a release announcement with the 1.17.0 package assets on GitHub as well.
Thanks for the new release! Do you have plans to upload them at a later date, or is there a particular reason why you are choosing not to? Thank you very much!
The CUDA 12 instructions are here: https://onnxruntime.ai/docs/install/#install-onnx-runtime-gpu-cuda-12x
For PyPI, I'm not sure PyPI can host onnxruntime-gpu for more than one major version of CUDA; e.g., PyTorch hosts a separate repo for its CUDA 11 Python packages.
@YUNQIUGUO, please upload the onnxruntime-gpu 1.17.0 CUDA 12 Python package to https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/onnxruntime-cuda-12/
You are right @fvdnabee , thank you for reformulating my request. |
Thanks for reporting this. I am guessing our Python CUDA 12 packaging pipeline probably lacks a release-version configuration/nightly build option. I actually uploaded
@dbuades I am not aware of a plan to upload it to an official repo like pypi.org yet; will ask around.
I'm curious if onnxruntime-node now supports dml and cuda? |
No. DML support is in progress (#19274), and CUDA support is next.
@dbuades @fvdnabee Hey, the issue has been resolved now, and here is the official onnxruntime-gpu package for the CUDA 12 version: https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/onnxruntime-cuda-12/PyPI/onnxruntime-gpu/overview/1.17.0
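For anyone testing that CUDA 12 package, a minimal sketch of a post-install sanity check. This is not an official verification procedure; it simply assumes onnxruntime-gpu 1.17.0 has been installed from the feed linked above (the exact pip index URL is shown on that feed's package page) and checks which execution providers the build exposes:

```python
# Minimal sketch: check whether a locally installed onnxruntime build
# exposes the CUDA execution provider. Assumes onnxruntime-gpu 1.17.0
# was installed from the onnxruntime-cuda-12 feed linked above.
def cuda_ep_available():
    try:
        import onnxruntime as ort
    except ImportError:
        # onnxruntime is not installed in this environment at all.
        return None
    # The CUDA-enabled build should list CUDAExecutionProvider here;
    # a CPU-only build will not.
    return "CUDAExecutionProvider" in ort.get_available_providers()

if __name__ == "__main__":
    print(cuda_ep_available())
```

Note that `get_available_providers()` only reports what the installed build was compiled with; actually running a session on GPU additionally requires a working CUDA 12 runtime on the machine.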
I'm curious if there is any work towards DML for onnxruntime-java? Where might I be pointed to make that a thing? (I did search issues; it seems not discussed yet?)
@CaelumF 🤔 I haven't heard desire for that combination of language + backend before (more often DML is accessed via C++, Python, and C#), but is it already supported? I see the enum ai.onnxruntime.OrtProvider.DIRECT_ML. (Alas, I don't know who on ORT owns the Java language layer to verify.)
@fdwr hey, thanks for responding. The prebuilt jars do have functions for enabling DirectML, but the binary wasn't compiled with DirectML enabled and errors when attempting to use it; building with it enabled has some trouble. Issue posted here: #19656, which also includes my use case in case you're curious.
I'm the maintainer of the Java layer, I'll have a look at what's going on. |
ORT 1.17 will be released in late January. Release candidate builds are available now for testing. If you encounter issues, please report them by responding in this issue.
Release branch: rel-1.17.0
Release manager: @YUNQIUGUO
- GPU: 1.17.0.dev20240118001
- GPU (CUDA/TRT): 1.17.0-dev-20240118-2301-a63b71eadb
- DirectML: 1.17.0-dev-20240119-0131-a63b71eadb
- WindowsAI: 1.17.0-dev-20240119-0131-a63b71eadb
- onnxruntime-react-native: 1.17.0-dev.20240118-a63b71eadb
- onnxruntime-web: 1.17.0-dev.20240118-a63b71eadb
- GPU: 1.17.0-rc1
Describe scenario use case
Not applicable.