diff --git a/docs/execution-providers/CUDA-ExecutionProvider.md b/docs/execution-providers/CUDA-ExecutionProvider.md
index dbd4255c57568..4b89ca80ca70c 100644
--- a/docs/execution-providers/CUDA-ExecutionProvider.md
+++ b/docs/execution-providers/CUDA-ExecutionProvider.md
@@ -30,14 +30,14 @@ Please reference table below for official GPU packages dependencies for the ONNX
ONNX Runtime Training is aligned with PyTorch CUDA versions; refer to the Training tab
on [onnxruntime.ai](https://onnxruntime.ai/) for supported versions.
-Note: Because of CUDA Minor Version Compatibility, Onnx Runtime built with CUDA 11.4 should be compatible with any CUDA
+Note: Because of CUDA Minor Version Compatibility, ONNX Runtime built with CUDA 11.8 should be compatible with any CUDA
11.x version.
Please
reference [Nvidia CUDA Minor Version Compatibility](https://docs.nvidia.com/deploy/cuda-compatibility/#minor-version-compatibility).
| ONNX Runtime | CUDA | cuDNN | Notes |
|--------------------------|--------|-----------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| 1.17 | 12.2 | 8.9.2.26 (Linux)<br/>8.9.2.26 (Windows) | The default CUDA version for ORT 1.17 is CUDA 11.8. To install CUDA 12 package, please look at [Install ORT](../install).<br/>Due to low demend on Java GPU package, only C++/C# Nuget and Python packages are released with CUDA 12.2 |
+| 1.17 | 12.2 | 8.9.2.26 (Linux)<br/>8.9.2.26 (Windows) | The default CUDA version for ORT 1.17 is CUDA 11.8. To install CUDA 12 package, please look at [Install ORT](../install).<br/>Due to low demand on Java GPU package, only C++/C# Nuget and Python packages are released with CUDA 12.2 |
| 1.15<br/>1.16<br/>1.17 | 11.8 | 8.2.4 (Linux)<br/>8.5.0.96 (Windows) | Tested with CUDA versions from 11.6 up to 11.8, and cuDNN from 8.2.4 up to 8.7.0 |
| 1.14<br/>1.13.1<br/>1.13 | 11.6 | 8.2.4 (Linux)<br/>8.5.0.96 (Windows) | libcudart 11.4.43<br/>libcufft 10.5.2.100<br/>libcurand 10.2.5.120<br/>libcublasLt 11.6.5.2<br/>libcublas 11.6.5.2<br/>libcudnn 8.2.4 |
| 1.12<br/>1.11 | 11.4 | 8.2.4 (Linux)<br/>8.2.2.26 (Windows) | libcudart 11.4.43<br/>libcufft 10.5.2.100<br/>libcurand 10.2.5.120<br/>libcublasLt 11.6.5.2<br/>libcublas 11.6.5.2<br/>libcudnn 8.2.4 |
diff --git a/docs/install/index.md b/docs/install/index.md
index 13bb63eb578df..73f610395eb4b 100644
--- a/docs/install/index.md
+++ b/docs/install/index.md
@@ -51,36 +51,10 @@ pip install onnxruntime-gpu
#### Install ONNX Runtime GPU (CUDA 12.x)
-For Cuda 12.x please using the following instructions to download the instruction
-from [ORT Azure Devops Feed](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/onnxruntime-cuda-12/)
-
-1. Install keyring for Azure Artifacts
+For CUDA 12.x, use the following command to install from the [ORT Azure DevOps Feed](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/onnxruntime-cuda-12/PyPI/onnxruntime-gpu/overview):
```bash
-python -m pip install --upgrade pip
-pip install keyring artifacts-keyring
-```
-
-If you're using Linux, ensure you've installed the [prerequisites](https://go.microsoft.com/fwlink/?linkid=2103695),
-which are required for artifacts-keyring.
-
-2. Project setup
-
-Ensure you have installed the latest version of the Azure Artifacts keyring from the "Get the tools" menu.
-If you don't already have one, create a virtualenv
-using [these instructions](https://go.microsoft.com/fwlink/?linkid=2103878) from the official Python documentation.
-Per the instructions, "it is always recommended to use a virtualenv while developing Python applications."
-
-Add a pip.ini (Windows) or pip.conf (Linux) file to your virtualenv
-
-```bash
-pip install onnxruntime-gpu --index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/
-```
-
-3. Install onnxruntime-gpu
-
-```bash
-pip install onnxruntime-gpu
+pip install onnxruntime-gpu --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/
```
### Install ONNX to export the model
@@ -443,7 +417,8 @@ below:
|--------------|---------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------|
| Python | If using pip, run `pip install --upgrade pip` prior to downloading. | | |
| | CPU: [**onnxruntime**](https://pypi.org/project/onnxruntime) | [ort-nightly (dev)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/ort-nightly/overview) | |
-| | GPU (CUDA/TensorRT): [**onnxruntime-gpu**](https://pypi.org/project/onnxruntime-gpu) | [ort-nightly-gpu (dev)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/ort-nightly-gpu/overview/) | [View](../execution-providers/CUDA-ExecutionProvider.md#requirements) |
+| | GPU (CUDA/TensorRT) for CUDA 11.x: [**onnxruntime-gpu**](https://pypi.org/project/onnxruntime-gpu) | [ort-nightly-gpu (dev)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/ort-nightly-gpu/overview/) | [View](../execution-providers/CUDA-ExecutionProvider.md#requirements) |
+| | GPU (CUDA/TensorRT) for CUDA 12.x: [**onnxruntime-gpu**](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/onnxruntime-cuda-12/PyPI/onnxruntime-gpu/overview/) | [ort-nightly-gpu (dev)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ort-cuda-12-nightly/PyPI/ort-nightly-gpu/overview/) | [View](../execution-providers/CUDA-ExecutionProvider.md#requirements) |
| | GPU (DirectML): [**onnxruntime-directml**](https://pypi.org/project/onnxruntime-directml/) | [ort-nightly-directml (dev)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/ort-nightly-directml/overview/) | [View](../execution-providers/DirectML-ExecutionProvider.md#requirements) |
| | OpenVINO: [**intel/onnxruntime**](https://github.com/intel/onnxruntime/releases/latest) - *Intel managed* | | [View](../build/eps.md#openvino) |
| | TensorRT (Jetson): [**Jetson Zoo**](https://elinux.org/Jetson_Zoo#ONNX_Runtime) - *NVIDIA managed* | | |
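After installing either the CUDA 11.x or CUDA 12.x package described above, a quick way to confirm that the CUDA execution provider was picked up is to list the providers the installed build exposes. A minimal sketch (the `ImportError` fallback is only for environments where `onnxruntime` is not installed):

```python
# Check which execution providers the installed ONNX Runtime build exposes.
try:
    import onnxruntime as ort
    providers = ort.get_available_providers()
except ImportError:
    # onnxruntime is not installed in this environment
    providers = []

print(providers)
# A CUDA-enabled build (onnxruntime-gpu) should include "CUDAExecutionProvider".
if "CUDAExecutionProvider" in providers:
    print("CUDA execution provider is available")
```

Note that a provider can appear in this list yet still fail to load at session creation if the matching CUDA/cuDNN runtime libraries from the table above are missing from the system.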