diff --git a/docs/build/eps.md b/docs/build/eps.md
index 5b7044c4b6eb1..32990f5dad959 100644
--- a/docs/build/eps.md
+++ b/docs/build/eps.md
@@ -100,7 +100,7 @@ See more information on the TensorRT Execution Provider [here](../execution-prov
 * The path to the CUDA `bin` directory must be added to the PATH environment variable so that `nvcc` is found.
 * The path to the cuDNN installation (path to cudnn bin/include/lib) must be provided via the cuDNN_PATH environment variable, or `--cudnn_home` parameter.
 * On Windows, cuDNN requires [zlibwapi.dll](https://docs.nvidia.com/deeplearning/cudnn/install-guide/index.html#install-zlib-windows). Feel free to place this dll under `path_to_cudnn/bin`
-* Install [TensorRT](https://developer.nvidia.com/tensorrt)
+* Follow [instructions for installing TensorRT](https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html)
 * The TensorRT execution provider for ONNX Runtime is built and tested up to TensorRT 8.6.
 * The path to TensorRT installation must be provided via the `--tensorrt_home` parameter.
 * ONNX Runtime uses TensorRT built-in parser from `tensorrt_home` by default.
diff --git a/docs/execution-providers/TensorRT-ExecutionProvider.md b/docs/execution-providers/TensorRT-ExecutionProvider.md
index a304c412705b3..176cfc093117c 100644
--- a/docs/execution-providers/TensorRT-ExecutionProvider.md
+++ b/docs/execution-providers/TensorRT-ExecutionProvider.md
@@ -29,7 +29,8 @@ See [Build instructions](../build/eps.md#tensorrt).
 
 | ONNX Runtime | TensorRT | CUDA |
 |:-------------|:---------|:-------|
-| 1.15-main | 8.6 | 11.8 |
+| 1.16-main | 8.6 | 11.8 |
+| 1.15 | 8.6 | 11.8 |
 | 1.14 | 8.5 | 11.6 |
 | 1.12-1.13 | 8.4 | 11.4 |
 | 1.11 | 8.2 | 11.4 |
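For reviewers: the build flags mentioned in the patched docs (`--cudnn_home`, `--tensorrt_home`) are passed to ONNX Runtime's `build.sh`. A minimal sketch of an invocation enabling the TensorRT EP — the installation paths below are placeholder assumptions, and the command is only echoed here rather than executed:

```shell
# Placeholder paths (assumptions) — point these at your actual installs.
CUDA_HOME=/usr/local/cuda-11.8
CUDNN_HOME=/usr/local/cudnn
TENSORRT_HOME=/usr/local/TensorRT-8.6

# --use_tensorrt enables the TensorRT execution provider;
# --tensorrt_home supplies the TensorRT installation path, per the docs above.
# Echoed (not run) so the sketch is inspectable without a CUDA toolchain.
echo ./build.sh --config Release --parallel \
  --use_cuda --cuda_home "$CUDA_HOME" --cudnn_home "$CUDNN_HOME" \
  --use_tensorrt --tensorrt_home "$TENSORRT_HOME"
```

On Windows the equivalent entry point is `build.bat` with the same flags, and `CUDNN_PATH` can be set instead of `--cudnn_home`.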