diff --git a/docs/build/eps.md b/docs/build/eps.md
index ac6091654e593..32990f5dad959 100644
--- a/docs/build/eps.md
+++ b/docs/build/eps.md
@@ -100,7 +100,7 @@ See more information on the TensorRT Execution Provider [here](../execution-prov
  * The path to the CUDA `bin` directory must be added to the PATH environment variable so that `nvcc` is found.
  * The path to the cuDNN installation (path to cudnn bin/include/lib) must be provided via the cuDNN_PATH environment variable, or `--cudnn_home` parameter.
  * On Windows, cuDNN requires [zlibwapi.dll](https://docs.nvidia.com/deeplearning/cudnn/install-guide/index.html#install-zlib-windows). Feel free to place this dll under `path_to_cudnn/bin`
- * Follow [instructions for install TensorRT](https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html)
+ * Follow [instructions for installing TensorRT](https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html)
  * The TensorRT execution provider for ONNX Runtime is built and tested up to TensorRT 8.6.
  * The path to TensorRT installation must be provided via the `--tensorrt_home` parameter.
  * ONNX Runtime uses TensorRT built-in parser from `tensorrt_home` by default.
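
For context, a minimal sketch of how the parameters mentioned in the patched section (`--cudnn_home`, `--tensorrt_home`, together with `build.sh`'s `--use_cuda`/`--cuda_home`/`--use_tensorrt` options) fit into a build invocation; the installation paths are placeholders and must be adjusted to the local CUDA, cuDNN, and TensorRT installs:

    # Example only: replace the paths with your local CUDA / cuDNN / TensorRT locations.
    export PATH=/usr/local/cuda/bin:$PATH   # so that nvcc is found
    ./build.sh --config Release --parallel \
        --use_cuda --cuda_home /usr/local/cuda --cudnn_home /usr/local/cudnn \
        --use_tensorrt --tensorrt_home /usr/local/TensorRT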