
Add CUDA v12 support for Java onnxruntime_gpu build #20011

Open · wants to merge 5 commits into main
Conversation

davidecaroselli

Description

348e3c7 Add Dockerfile for Java release build with CUDA
c9974ef Add CUDA version to Java artifact version

Motivation and Context

A Java build for the latest onnxruntime v1.17.1 is not (yet) available, and the existing Maven artifacts are not compatible with CUDA 12. This PR aims to fix that by providing two updates to onnxruntime:

  1. An update to java/build.gradle that automatically appends the build tag to the version number (-cuXX for CUDA and -rocm for ROCm); a sketch of this logic is shown right after this list.
  2. A new Dockerfile that compiles onnxruntime from scratch with the --build_java option, using the *-ubi8 base image for maximum compatibility (see the Dockerfile sketch at the end of this description).
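
For illustration only, a minimal sketch of what the version-suffix logic in java/build.gradle could look like. The property names (cuda_version, use_rocm) and the exact wiring are assumptions, not the PR's actual diff:

```groovy
// Hypothetical sketch, not the PR's actual change: append a build-tag suffix
// to the published artifact version based on the configured GPU backend.
def cudaVersion = project.findProperty('cuda_version')  // e.g. "12.2" (assumed property name)
def useRocm = project.hasProperty('use_rocm')           // assumed property name

if (cudaVersion != null) {
    // "1.17.1" becomes "1.17.1-cu12" for a CUDA 12.x build
    version = "${version}-cu${cudaVersion.toString().split('\\.')[0]}"
} else if (useRocm) {
    // "1.17.1" becomes "1.17.1-rocm"
    version = "${version}-rocm"
}
```

With a suffix like this in place, a consumer could select the CUDA 12 build explicitly, e.g. com.microsoft.onnxruntime:onnxruntime_gpu:1.17.1-cu12 (coordinates shown for illustration only).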

This PR fixes issue #19960, "Add CUDA12 support for Java's onnxruntime_gpu dependency".
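
And a rough sketch of the kind of Dockerfile described in point 2; the base image tag, package names, and build flags below are assumptions rather than the PR's actual contents:

```dockerfile
# Hypothetical sketch only; the real Dockerfile in this PR may differ.
# CUDA 12 development image on a UBI 8 base, matching the "*-ubi8" choice above.
FROM nvidia/cuda:12.2.2-cudnn8-devel-ubi8

# Build toolchain (package names are assumptions; a sufficiently recent CMake
# is also required and its installation is omitted here for brevity).
RUN dnf install -y git gcc-c++ make python3 java-11-openjdk-devel && dnf clean all

WORKDIR /workspace
RUN git clone --recursive https://github.com/microsoft/onnxruntime.git

# Build the Java bindings together with the CUDA execution provider.
RUN cd onnxruntime && ./build.sh --config Release --parallel \
        --build_java --use_cuda \
        --cuda_home /usr/local/cuda --cudnn_home /usr \
        --allow_running_as_root
```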

@davidecaroselli (Author)

@microsoft-github-policy-service agree company="Translated"

snnn requested a review from jchen351 on March 21, 2024, 17:32
@snnn (Member) commented Mar 21, 2024

@jchen351, please help review.

@davidecaroselli (Author)

Hello! Any updates on this PR? I would like to follow up with a second PR that builds onnxruntime for Java on ROCm.

@tianleiwu (Contributor)

/azp run Windows ARM64 QNN CI Pipeline,Windows x64 QNN CI Pipeline,Windows CPU CI Pipeline,Windows GPU CI Pipeline,Windows GPU TensorRT CI Pipeline,ONNX Runtime Web CI Pipeline,Linux CPU CI Pipeline,Linux CPU Minimal Build E2E CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline

@tianleiwu (Contributor)

/azp run Linux OpenVINO CI Pipeline,Linux QNN CI Pipeline,MacOS CI Pipeline,orttraining-amd-gpu-ci-pipeline,orttraining-linux-ci-pipeline,orttraining-linux-gpu-ci-pipeline,orttraining-ortmodule-distributed,onnxruntime-binary-size-checks-ci-pipeline,Big Models


Azure Pipelines successfully started running 10 pipeline(s).


Azure Pipelines successfully started running 9 pipeline(s).

@snnn (Member) commented Apr 19, 2024

/azp run Windows ARM64 QNN CI Pipeline,Windows x64 QNN CI Pipeline,Windows CPU CI Pipeline,Windows GPU CI Pipeline,Windows GPU TensorRT CI Pipeline,ONNX Runtime Web CI Pipeline,Linux CPU CI Pipeline,Linux CPU Minimal Build E2E CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline

@snnn (Member) commented Apr 19, 2024

/azp run Linux OpenVINO CI Pipeline,Linux QNN CI Pipeline,MacOS CI Pipeline,orttraining-amd-gpu-ci-pipeline,orttraining-linux-ci-pipeline,orttraining-linux-gpu-ci-pipeline,orttraining-ortmodule-distributed,onnxruntime-binary-size-checks-ci-pipeline,Big Models


Azure Pipelines successfully started running 10 pipeline(s).


Azure Pipelines successfully started running 9 pipeline(s).
