Merge branch 'gh-pages' into sophies927-patch-3
sophies927 authored Jan 31, 2024
2 parents 833f5a3 + c189c5e commit 9fdf960
Showing 5 changed files with 307 additions and 131 deletions.
17 changes: 9 additions & 8 deletions docs/build/eps.md
@@ -100,13 +100,13 @@ See more information on the TensorRT Execution Provider [here](../execution-prov
{: .no_toc }

* Install [CUDA](https://developer.nvidia.com/cuda-toolkit) and [cuDNN](https://developer.nvidia.com/cudnn)
- * The TensorRT execution provider for ONNX Runtime is built and tested up to CUDA 11.8 and cuDNN 8.9. Check [here](https://onnxruntime.ai/docs/execution-providers/TensorRT-ExecutionProvider.html#requirements) for more version information.
+ * The TensorRT execution provider for ONNX Runtime is built and tested with CUDA 11.8, 12.2 and cuDNN 8.9. Check [here](https://onnxruntime.ai/docs/execution-providers/TensorRT-ExecutionProvider.html#requirements) for more version information.
* The path to the CUDA installation must be provided via the CUDA_PATH environment variable, or the `--cuda_home` parameter. The CUDA path should contain `bin`, `include` and `lib` directories.
* The path to the CUDA `bin` directory must be added to the PATH environment variable so that `nvcc` is found.
* The path to the cuDNN installation (the directory containing cudnn `bin`/`include`/`lib`) must be provided via the cuDNN_PATH environment variable, or the `--cudnn_home` parameter.
* On Windows, cuDNN requires [zlibwapi.dll](https://docs.nvidia.com/deeplearning/cudnn/install-guide/index.html#install-zlib-windows). Feel free to place this DLL under `path_to_cudnn/bin`.
* Follow [instructions for installing TensorRT](https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html)
- * The TensorRT execution provider for ONNX Runtime is built and tested up to TensorRT 8.6.
+ * The TensorRT execution provider for ONNX Runtime is built and tested with TensorRT 8.6.
* The path to the TensorRT installation must be provided via the `--tensorrt_home` parameter.
* ONNX Runtime uses the TensorRT built-in parser from `tensorrt_home` by default.
* To use the open-sourced [onnx-tensorrt](https://github.com/onnx/onnx-tensorrt/tree/main) parser instead, add the `--use_tensorrt_oss_parser` parameter to the build commands below (a sketch follows this list).
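
Putting these prerequisites together, a minimal TensorRT build invocation might look like the sketch below. Every path is a placeholder for your own installation layout, not a required location.

```bash
# Placeholder paths -- substitute your actual CUDA, cuDNN, and TensorRT locations.
export CUDA_PATH=/usr/local/cuda-11.8
export PATH="${CUDA_PATH}/bin:${PATH}"   # so that nvcc is found

./build.sh --config Release --parallel \
  --use_tensorrt \
  --cuda_home "${CUDA_PATH}" \
  --cudnn_home /opt/cudnn-8.9 \
  --tensorrt_home /opt/TensorRT-8.6
# Append --use_tensorrt_oss_parser to build with the open-source onnx-tensorrt parser.
```
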
@@ -147,7 +147,7 @@ Dockerfile instructions are available [here](https://github.com/microsoft/onnxru
### Build Instructions
{: .no_toc }

- These instructions are for the latest [JetPack SDK 5.1.2](https://developer.nvidia.com/embedded/jetpack-sdk-512).
+ These instructions are for the latest [JetPack SDK 6.0](https://developer.nvidia.com/embedded/jetpack-sdk-60dp) for Jetson Orin.

1. Clone the ONNX Runtime repo on the Jetson host
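
   The actual clone command is collapsed in this diff view; it is typically a recursive clone so the submodule dependencies come along. A sketch:

   ```bash
   # --recursive pulls in ONNX Runtime's git submodule dependencies.
   git clone --recursive https://github.com/microsoft/onnxruntime.git
   cd onnxruntime
   ```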

@@ -159,9 +159,10 @@ These instructions are for the latest [JetPack SDK 5.1.2](https://developer.nvid

1. Starting with **CUDA 11.8**, Jetson users on **JetPack 5.0+** can upgrade to the latest CUDA release without updating the JetPack version or Jetson Linux BSP (Board Support Package). CUDA version 11.8 with JetPack 5.1.2 has been tested on Jetson when building ONNX Runtime 1.16.

- 1. Check [this official blog](https://developer.nvidia.com/blog/simplifying-cuda-upgrades-for-nvidia-jetson-users/) for CUDA 11.8 upgrade instructions.
+ 1. Check [this official blog](https://developer.nvidia.com/blog/simplifying-cuda-upgrades-for-nvidia-jetson-users/) for CUDA upgrade instructions.

2. CUDA 12.x is only available on Jetson Orin and newer series (CUDA compute capability >= 8.7). Check [here](https://developer.nvidia.com/cuda-gpus#collapse5) for the compute capability datasheet.
+ JetPack 6.0 comes preinstalled with CUDA 12.2.
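
   A quick sanity check of which CUDA toolkit the build will pick up (a sketch assuming the default JetPack layout under `/usr/local`):

   ```bash
   nvcc --version          # reports the toolkit release, e.g. 12.2 on JetPack 6.0
   ls -l /usr/local/cuda   # symlink should point at the intended CUDA version
   ```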

2. CMake can't automatically find the correct `nvcc` if it's not in the `PATH`. `nvcc` can be added to `PATH` via:

@@ -175,15 +176,15 @@ These instructions are for the latest [JetPack SDK 5.1.2](https://developer.nvid
```bash
export CUDACXX="/usr/local/cuda/bin/nvcc"
```
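
   The collapsed lines above cover the `PATH` route; a sketch of that alternative:

   ```bash
   # Alternative to setting CUDACXX: put nvcc itself on PATH.
   export PATH="/usr/local/cuda/bin:${PATH}"
   which nvcc   # should now resolve to /usr/local/cuda/bin/nvcc
   ```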

- 3. Install the ONNX Runtime build dependencies on the Jetpack 5.1.2 host:
+ 3. Install the ONNX Runtime build dependencies on the Jetpack host:

```bash
sudo apt install -y --no-install-recommends \
build-essential software-properties-common libopenblas-dev \
libpython3.8-dev python3-pip python3-dev python3-setuptools python3-wheel
```

- 4. CMake is needed to build ONNX Runtime. For ONNX Runtime 1.16, the minimum required CMake version is 3.26 (version 3.27.4 has been tested). It can be installed in either of the following ways:
+ 4. CMake is needed to build ONNX Runtime. The minimum required CMake version is 3.26 (version 3.27.4 has been tested). It can be installed in either of the following ways:

1. (Unix/Linux) Build from source: download the sources from [https://cmake.org/download/](https://cmake.org/download/)
and follow [https://cmake.org/install/](https://cmake.org/install/).
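
      For the from-source route, the steps are roughly as follows; this sketch pins the tested 3.27.4 release and assumes Kitware's standard release-archive naming:

      ```bash
      wget https://github.com/Kitware/CMake/releases/download/v3.27.4/cmake-3.27.4.tar.gz
      tar -xzf cmake-3.27.4.tar.gz && cd cmake-3.27.4
      ./bootstrap && make -j"$(nproc)" && sudo make install
      cmake --version   # verify the result is >= 3.26
      ```
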
@@ -249,7 +250,7 @@ See more information on the OpenVINO™ Execution Provider [here](../execution-p
* [Windows - CPU, GPU](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/download.html?VERSION=v_2023_1_0&OP_SYSTEM=WINDOWS&DISTRIBUTION=ARCHIVE).
* [Linux - CPU, GPU](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/download.html?VERSION=v_2023_1_0&OP_SYSTEM=LINUX&DISTRIBUTION=ARCHIVE)

- Follow [documentation](https://docs.openvino.ai/) for detailed instructions.
+ Follow [documentation](https://docs.openvino.ai/latest/index.html) for detailed instructions.

*2023.1 is the recommended OpenVINO™ version. [OpenVINO™ 2022.1](https://docs.openvino.ai/archive/2022.1/index.html) is the minimum required OpenVINO™ version.*
*The minimum Ubuntu version supporting 2023.1 is 18.04.*
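
After extracting an archive install, the OpenVINO™ environment is typically initialized by sourcing its setup script. The install prefix below is an assumption for a default Linux extraction under `/opt/intel`:

```bash
# Make OpenVINO libraries and tools visible to the current shell.
source /opt/intel/openvino_2023.1.0/setupvars.sh
```
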
@@ -290,7 +291,7 @@ See more information on the OpenVINO™ Execution Provider [here](../execution-p
* `--use_openvino` builds the OpenVINO™ Execution Provider in ONNX Runtime.
* `<hardware_option>`: Specifies the default hardware target for building the OpenVINO™ Execution Provider. This can be overridden dynamically at runtime with another option (refer to [OpenVINO™-ExecutionProvider](../execution-providers/OpenVINO-ExecutionProvider.md#summary-of-options) for more details on dynamic device selection). Below are the options for different Intel target devices.
- Refer to [Intel GPU device naming convention](https://docs.openvino.ai/2023.3/openvino_docs_OV_UG_supported_plugins_GPU.html#device-naming-convention) for specifying the correct hardware target in cases where both integrated and discrete GPUs co-exist.
+ Refer to [Intel GPU device naming convention](https://docs.openvino.ai/latest/openvino_docs_OV_UG_supported_plugins_GPU.html#device-naming-convention) for specifying the correct hardware target in cases where both integrated and discrete GPUs co-exist.

| Hardware Option | Target Device |
| --------------- | ------------------------|
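
The hardware-option table is truncated in this diff view. Assuming `CPU_FP32` is among the listed options (as in earlier releases), a build invocation would look roughly like:

```bash
# Build ONNX Runtime with the OpenVINO EP, defaulting to CPU.
./build.sh --config RelWithDebInfo --use_openvino CPU_FP32 --build_shared_lib --build_wheel
```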
