update cuda 12 gpu package installation
tianleiwu committed Feb 3, 2024
1 parent 9a8eda4 commit 5ce20b6
Showing 2 changed files with 6 additions and 31 deletions.
4 changes: 2 additions & 2 deletions docs/execution-providers/CUDA-ExecutionProvider.md
@@ -30,14 +30,14 @@ Please reference table below for official GPU packages dependencies for the ONNX
ONNX Runtime Training is aligned with PyTorch CUDA versions; refer to the Training tab
on [onnxruntime.ai](https://onnxruntime.ai/) for supported versions.

-Note: Because of CUDA Minor Version Compatibility, Onnx Runtime built with CUDA 11.4 should be compatible with any CUDA
+Note: Because of CUDA Minor Version Compatibility, ONNX Runtime built with CUDA 11.8 should be compatible with any CUDA
11.x version.
Please
reference [Nvidia CUDA Minor Version Compatibility](https://docs.nvidia.com/deploy/cuda-compatibility/#minor-version-compatibility).

| ONNX Runtime | CUDA | cuDNN | Notes |
|--------------------------|--------|-----------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| 1.17 | 12.2 | 8.9.2.26 (Linux)<br/>8.9.2.26 (Windows) | The default CUDA version for ORT 1.17 is CUDA 11.8. To install CUDA 12 package, please look at [Install ORT](../install).<br>Due to low demend on Java GPU package, only C++/C# Nuget and Python packages are released with CUDA 12.2 |
+| 1.17 | 12.2 | 8.9.2.26 (Linux)<br/>8.9.2.26 (Windows) | The default CUDA version for ORT 1.17 is CUDA 11.8. To install CUDA 12 package, please look at [Install ORT](../install).<br>Due to low demand on Java GPU package, only C++/C# Nuget and Python packages are released with CUDA 12.2 |
| 1.15<br>1.16<br>1.17 | 11.8 | 8.2.4 (Linux)<br/>8.5.0.96 (Windows) | Tested with CUDA versions from 11.6 up to 11.8, and cuDNN from 8.2.4 up to 8.7.0 |
| 1.14<br/>1.13.1<br/>1.13 | 11.6 | 8.2.4 (Linux)<br/>8.5.0.96 (Windows) | libcudart 11.4.43<br/>libcufft 10.5.2.100<br/>libcurand 10.2.5.120<br/>libcublasLt 11.6.5.2<br/>libcublas 11.6.5.2<br/>libcudnn 8.2.4 |
| 1.12<br/>1.11 | 11.4 | 8.2.4 (Linux)<br/>8.2.2.26 (Windows) | libcudart 11.4.43<br/>libcufft 10.5.2.100<br/>libcurand 10.2.5.120<br/>libcublasLt 11.6.5.2<br/>libcublas 11.6.5.2<br/>libcudnn 8.2.4 |
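As a quick sanity check against the note and the table above, a short sketch along these lines (assumption: `onnxruntime-gpu` and one of the listed CUDA/cuDNN pairings are already installed in the active environment) confirms that the CUDA execution provider actually loads:

```bash
# Sketch: check the installed ONNX Runtime build against the CUDA/cuDNN table above.
# Assumption: onnxruntime-gpu and a matching CUDA/cuDNN pairing are already installed.
python -c "import onnxruntime as ort; print(ort.__version__, ort.get_device())"
python -c "import onnxruntime as ort; print(ort.get_available_providers())"

# Driver-side view of the CUDA version available on this machine.
nvidia-smi
```

If `CUDAExecutionProvider` is missing from the provider list, the usual cause is a CUDA/cuDNN version that does not match the table, or the CPU-only package being installed instead.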
33 changes: 4 additions & 29 deletions docs/install/index.md
@@ -51,36 +51,10 @@ pip install onnxruntime-gpu

#### Install ONNX Runtime GPU (CUDA 12.x)

-For Cuda 12.x please using the following instructions to download the instruction
-from [ORT Azure Devops Feed](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/onnxruntime-cuda-12/)

-1. Install keyring for Azure Artifacts
+For Cuda 12.x, please use the following instructions to install from [ORT Azure Devops Feed](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/onnxruntime-cuda-12/PyPI/onnxruntime-gpu/overview)

```bash
-python -m pip install --upgrade pip
-pip install keyring artifacts-keyring
```

-If you're using Linux, ensure you've installed the [prerequisites](https://go.microsoft.com/fwlink/?linkid=2103695),
-which are required for artifacts-keyring.

-2. Project setup

-Ensure you have installed the latest version of the Azure Artifacts keyring from the "Get the tools" menu.
-If you don't already have one, create a virtualenv
-using [these instructions](https://go.microsoft.com/fwlink/?linkid=2103878) from the official Python documentation.
-Per the instructions, "it is always recommended to use a virtualenv while developing Python applications."

-Add a pip.ini (Windows) or pip.conf (Linux) file to your virtualenv

```bash
-pip install onnxruntime-gpu --index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/
```

-3. Install onnxruntime-gpu

```bash
-pip install onnxruntime-gpu
+pip install onnxruntime-gpu --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/
```
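Because the CUDA 11.x and CUDA 12.x builds share the `onnxruntime-gpu` package name, a minimal end-to-end sketch (assumption: an older ONNX Runtime build may already be present in the active environment) might look like:

```bash
# Sketch: swap the active environment over to the CUDA 12 build.
# Assumption: an older onnxruntime / onnxruntime-gpu install may already be present
# under the same package name, so remove it first.
pip uninstall -y onnxruntime onnxruntime-gpu

# CUDA 12 package from the Azure DevOps feed (same command as above).
pip install onnxruntime-gpu --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/

# Confirm which build pip resolved.
pip show onnxruntime-gpu
```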

### Install ONNX to export the model
@@ -443,7 +417,8 @@ below:
|--------------|---------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------|
| Python | If using pip, run `pip install --upgrade pip` prior to downloading. | | |
| | CPU: [**onnxruntime**](https://pypi.org/project/onnxruntime) | [ort-nightly (dev)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/ort-nightly/overview) | |
-| | GPU (CUDA/TensorRT): [**onnxruntime-gpu**](https://pypi.org/project/onnxruntime-gpu) | [ort-nightly-gpu (dev)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/ort-nightly-gpu/overview/) | [View](../execution-providers/CUDA-ExecutionProvider.md#requirements) |
+| | GPU (CUDA/TensorRT) for CUDA 11.x: [**onnxruntime-gpu**](https://pypi.org/project/onnxruntime-gpu) | [ort-nightly-gpu (dev)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/ort-nightly-gpu/overview/) | [View](../execution-providers/CUDA-ExecutionProvider.md#requirements) |
+| | GPU (CUDA/TensorRT) for CUDA 12.x: [**onnxruntime-gpu**](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/onnxruntime-cuda-12/PyPI/onnxruntime-gpu/overview/) | [ort-nightly-gpu (dev)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ort-cuda-12-nightly/PyPI/ort-nightly-gpu/overview/) | [View](../execution-providers/CUDA-ExecutionProvider.md#requirements) |
| | GPU (DirectML): [**onnxruntime-directml**](https://pypi.org/project/onnxruntime-directml/) | [ort-nightly-directml (dev)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/ort-nightly-directml/overview/) | [View](../execution-providers/DirectML-ExecutionProvider.md#requirements) |
| | OpenVINO: [**intel/onnxruntime**](https://github.com/intel/onnxruntime/releases/latest) - *Intel managed* | | [View](../build/eps.md#openvino) |
| | TensorRT (Jetson): [**Jetson Zoo**](https://elinux.org/Jetson_Zoo#ONNX_Runtime) - *NVIDIA managed* | | |
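The Python rows above map to the following pip commands (a sketch; package names are those listed in the table, and only one ONNX Runtime flavor should be installed per environment):

```bash
# Sketch of the official Python packages from the table above.
# Install exactly one ONNX Runtime flavor per environment; alternatives are commented out.

pip install onnxruntime              # CPU

# pip install onnxruntime-gpu        # GPU (CUDA 11.x / TensorRT), default PyPI build

# GPU (CUDA 12.x / TensorRT) comes from the Azure DevOps feed rather than PyPI:
# pip install onnxruntime-gpu --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/

# pip install onnxruntime-directml   # GPU (DirectML, Windows)
```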
