
ORT 1.17.0 Release Candidates available for testing #19236

Open
YUNQIUGUO opened this issue Jan 23, 2024 · 21 comments
Labels
api:Java issues related to the Java API ep:CUDA issues related to the CUDA execution provider ep:DML issues related to the DirectML execution provider ep:TensorRT issues related to TensorRT execution provider platform:web issues related to ONNX Runtime web; typically submitted using template release:1.17.0

YUNQIUGUO commented Jan 23, 2024

ORT 1.17 will be released in late January. Release candidate builds are available now for testing. If you encounter issues, please report them by responding in this issue.

Release branch: rel-1.17.0
Release manager: @YUNQIUGUO

PyPI
- CPU: 1.17.0.dev20240118001
- GPU: 1.17.0.dev20240118001

NuGet
- CPU: 1.17.0-dev-20240119-0139-a63b71eadb
- GPU (CUDA/TRT): 1.17.0-dev-20240118-2301-a63b71eadb
- DirectML: 1.17.0-dev-20240119-0131-a63b71eadb
- WindowsAI: 1.17.0-dev-20240119-0131-a63b71eadb

npm
- onnxruntime-node: 1.17.0-dev.20240118-a63b71eadb
- onnxruntime-react-native: 1.17.0-dev.20240118-a63b71eadb
- onnxruntime-web: 1.17.0-dev.20240118-a63b71eadb

Maven (Java)
- CPU: 1.17.0-rc1
- GPU: 1.17.0-rc1
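For readers who want to try the candidates above, installing a pinned dev build might look like the following sketch. The pip index URL and the `ort-nightly` package name are assumptions based on how ORT dev builds are usually distributed; check the ONNX Runtime install docs for the feed that actually hosts these builds.

```shell
# Hypothetical sketch: pulling the 1.17.0 RC builds listed above.
# The AzDo feed URL below is an assumption, not a confirmed address.

# PyPI-style dev build from the ORT nightly feed:
pip install ort-nightly==1.17.0.dev20240118001 \
  --index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT-Nightly/pypi/simple/

# npm dev builds are installable by their explicit version string:
npm install onnxruntime-web@1.17.0-dev.20240118-a63b71eadb
```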

Describe scenario use case

Not applicable.

@YUNQIUGUO YUNQIUGUO added this to the 1.17 milestone Jan 23, 2024
@github-actions github-actions bot added api:Java issues related to the Java API ep:CUDA issues related to the CUDA execution provider ep:DML issues related to the DirectML execution provider ep:TensorRT issues related to TensorRT execution provider platform:web issues related to ONNX Runtime web; typically submitted using template labels Jan 23, 2024
@YUNQIUGUO YUNQIUGUO pinned this issue Jan 23, 2024

fdwr commented Jan 23, 2024

@martinb35 / @smk2007

@tianleiwu

Please share CUDA 12 packages for Python and NuGet.


dbuades commented Jan 24, 2024

Agreed, prebuilt CUDA 12.1 packages would be really appreciated, like the nightly builds you have started publishing: https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ort-cuda-12-nightly/PyPI/ort-nightly-gpu/versions/1.17.0.dev20231205004

@YUNQIUGUO

Here are the CUDA 12 RC packages for Python and NuGet (built from the same commit id):

- ort-nightly-gpu: 1.17.0.dev20240118002
- Microsoft.ML.OnnxRuntime.Managed: 1.17.0-dev-20240118-2235-a63b71eadb
- Microsoft.ML.OnnxRuntime.Gpu: 1.17.0-dev-20240118-2235-a63b71eadb
- Microsoft.ML.OnnxRuntime.Gpu.Linux: 1.17.0-dev-20240118-2235-a63b71eadb
- Microsoft.ML.OnnxRuntime.Gpu.Windows: 1.17.0-dev-20240118-2235-a63b71eadb
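Consuming one of these RC NuGet packages from an Azure DevOps feed might look like the sketch below. The feed name and URL are placeholders, not the real feed address; substitute the feed that actually hosts these packages.

```shell
# Hypothetical sketch: adding an AzDo NuGet feed and pulling an RC package.
# "<feed-name>" and the URL are placeholder assumptions.
dotnet nuget add source \
  "https://pkgs.dev.azure.com/aiinfra/PublicPackages/_packaging/<feed-name>/nuget/v3/index.json" \
  --name ort-rc-feed

dotnet add package Microsoft.ML.OnnxRuntime.Gpu \
  --version 1.17.0-dev-20240118-2235-a63b71eadb
```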

@sophies927

@HectorSVC why was this issue unpinned? We pin our release candidates in GitHub issues so our partners + community members have easier access and can test them.

@tianleiwu tianleiwu unpinned this issue Feb 2, 2024

adityagoel4512 commented Feb 2, 2024

Thanks for managing the release @YUNQIUGUO. Given a PyPI release has been made for 1.17, is there a plan to tag and release it on GitHub as well? For context, the onnxruntime feedstock on conda-forge typically uses the tagged github release to build ORT.

@YUNQIUGUO

Thanks for managing the release @YUNQIUGUO. Given a PyPI release has been made for 1.17, is there a plan to tag and release it on GitHub as well?

Yep, we are still waiting for the last couple of packages to be uploaded to the package management repos. Once everything is complete, we will publish a release announcement with the 1.17.0 package assets on GitHub as well.


dbuades commented Feb 2, 2024

Thanks for the new release!
However, I see that a CUDA 12 build of the Python onnxruntime-gpu package wasn't included in the PyPI release. I tested the CUDA 12 RC packages that @YUNQIUGUO published in this thread last week and everything worked well, so I believe releasing a 1.17 version would be very useful.

Do you have plans to upload them at a later date or is there a particular reason why you are choosing not to do it?

Thank you very much!


fvdnabee commented Feb 2, 2024

The CUDA 12 instructions are here: https://onnxruntime.ai/docs/install/#install-onnx-runtime-gpu-cuda-12x. It seems the onnxruntime-gpu package doesn't exist on AzDo; I have only found ort-nightly-gpu==1.17.0.dev20240130002 so far. Could we release a properly named artifact for 1.17, as opposed to the nightly build?

As for PyPI, I'm not sure a single onnxruntime-gpu package there can cover more than one major CUDA version; PyTorch, for example, hosts a separate index for its CUDA 11 Python packages.


tianleiwu commented Feb 2, 2024

@YUNQIUGUO, please upload onnxruntime-gpu 1.17.0 CUDA 12 python package to https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/onnxruntime-cuda-12/

@tianleiwu tianleiwu pinned this issue Feb 2, 2024

dbuades commented Feb 2, 2024

You are right, @fvdnabee, thank you for reformulating my request.


YUNQIUGUO commented Feb 2, 2024

@fvdnabee

The cuda 12 instructions are here: https://onnxruntime.ai/docs/install/#install-onnx-runtime-gpu-cuda-12x It seems the onnxruntime-gpu package doesn't exist on AzDo, I only found ort-nightly-gpu==1.17.0.dev20240130002 so far. Could we release a properly named artifact for 1.17, as opposed to the nightly build?

Thanks for reporting this. I suspect our Python CUDA 12 packaging pipeline lacks a release-version configuration alongside the nightly build option. I did upload onnxruntime-gpu to the official CUDA 12 feed, but it looks like the package still carries a nightly name when it should be an official one. I will contact the pipeline owner, and we'll look into addressing this and re-upload an officially named 1.17.0 package. This is the first time we've released CUDA 12 wheels to the feed, so the issue hadn't been identified before; sorry about that.

@dbuades I am not aware of a plan to upload it to an official repo like pypi.org yet, but I will ask around.


0x0480 commented Feb 2, 2024

I'm curious if onnxruntime-node now supports dml and cuda?

@YUNQIUGUO

@fs-eire


fs-eire commented Feb 2, 2024

I'm curious if onnxruntime-node now supports dml and cuda?

No. DML support is ongoing (#19274), and CUDA support is next.

@YUNQIUGUO

@dbuades @fvdnabee Hey, the issue has been resolved now, and here's the official onnxruntime-gpu package for the CUDA 12 version: https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/onnxruntime-cuda-12/PyPI/onnxruntime-gpu/overview/1.17.0
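Installing from that feed and then checking that the CUDA execution provider is visible might look like the sketch below. The `--index-url` is an assumption inferred from the feed name; confirm it against the official install docs before relying on it.

```shell
# Sketch: install the CUDA 12 build of onnxruntime-gpu from the AzDo feed.
# The index URL is inferred, not confirmed; see https://onnxruntime.ai/docs/install/
pip install onnxruntime-gpu==1.17.0 \
  --index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/

# Quick check that the CUDA execution provider is available:
python -c "import onnxruntime as ort; print(ort.get_available_providers())"
```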

@YUNQIUGUO YUNQIUGUO unpinned this issue Feb 3, 2024

fs-eire commented Feb 16, 2024

I'm curious if onnxruntime-node now supports dml and cuda?

No. DML is ongoing (#19274) and CUDA support is the next.

#19274 has been merged into main and is marked for patch release 1.17.1, pending approval.


CaelumF commented Feb 21, 2024

I'm curious whether there is any work towards DML support for onnxruntime-java. Where might I look to help make that happen? (I did search the issues; it seems it hasn't been discussed yet.)


fdwr commented Feb 22, 2024

I'm curious if there is any work towards DML for onnxruntime-java? Where might I be pointed to make that a thing? (I did search issues... seems not discussed yet?)

@CaelumF 🤔 I haven't heard of demand for that combination of language + backend before (DML is more often accessed via C++, Python, and C#), but is it already supported? I see the enum value ai.onnxruntime.OrtProvider.DIRECT_ML. (Alas, I don't know who on ORT owns the Java language layer to verify.)


CaelumF commented Feb 26, 2024

@fdwr Hey, thanks for responding. The prebuilt JARs do have functions for enabling DirectML, but the binary wasn't compiled with DirectML enabled and errors out when you attempt to use it, and building with it enabled has run into some trouble. Issue posted here: #19656, which also includes my use case in case you're curious.
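For context, requesting DirectML from the Java API is roughly the call sketched below. This is a hedged sketch assuming the `addDirectML` session option on `ai.onnxruntime.OrtSession.SessionOptions`; on a native binary built without DirectML, the call is expected to fail at runtime rather than succeed silently.

```java
import ai.onnxruntime.OrtEnvironment;
import ai.onnxruntime.OrtException;
import ai.onnxruntime.OrtSession;

// Hedged sketch: requesting the DirectML execution provider from Java.
// Assumes SessionOptions exposes addDirectML(deviceId); on a build compiled
// without DirectML support this is expected to throw an OrtException.
public class DmlCheck {
    public static void main(String[] args) {
        OrtEnvironment env = OrtEnvironment.getEnvironment();
        try (OrtSession.SessionOptions opts = new OrtSession.SessionOptions()) {
            opts.addDirectML(0); // device id 0
            System.out.println("DirectML execution provider enabled");
        } catch (OrtException e) {
            // Typical outcome on prebuilt JARs whose native library lacks DML.
            System.out.println("DirectML not available: " + e.getMessage());
        }
    }
}
```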

@Craigacp

I'm the maintainer of the Java layer, I'll have a look at what's going on.
