From e394a8e045cd2138914d965770baf9368d63acaf Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Maximilian=20M=C3=BCller?= <44298237+gedoensmax@users.noreply.github.com>
Date: Wed, 10 Jan 2024 02:11:48 +0100
Subject: [PATCH] [Docs] Adding CUDA Graph docs for TRT (#19064)

### Description
Add documentation for the `trt_cuda_graph_enable` provider option.
---
 docs/execution-providers/TensorRT-ExecutionProvider.md | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/docs/execution-providers/TensorRT-ExecutionProvider.md b/docs/execution-providers/TensorRT-ExecutionProvider.md
index af6b673c78898..89fdb895a05d2 100644
--- a/docs/execution-providers/TensorRT-ExecutionProvider.md
+++ b/docs/execution-providers/TensorRT-ExecutionProvider.md
@@ -99,6 +99,7 @@ There are two ways to configure TensorRT settings, either by **TensorRT Executio
 | trt_profile_min_shapes | ORT_TENSORRT_PROFILE_MIN_SHAPES | string |
 | trt_profile_max_shapes | ORT_TENSORRT_PROFILE_MAX_SHAPES | string |
 | trt_profile_opt_shapes | ORT_TENSORRT_PROFILE_OPT_SHAPES | string |
+| trt_cuda_graph_enable | ORT_TENSORRT_CUDA_GRAPH_ENABLE | bool |

 > Note: for bool type options, assign them with **True**/**False** in python, or **1**/**0** in C++.

@@ -179,6 +180,8 @@ TensorRT configurations can be set by execution provider options. It's useful wh
 * `trt_build_heuristics_enable`: Build engine using heuristics to reduce build time.

+* `trt_cuda_graph_enable`: Capture the model as a [CUDA graph](https://developer.nvidia.com/blog/cuda-graphs/), which can significantly speed up networks with many small layers by reducing kernel launch overhead on the CPU.
+
 * `trt_sparsity_enable`: Control if sparsity can be used by TRT.
   * Check `--sparsity` in `trtexec` command-line flags for [details](https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#trtexec-flags).
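For reviewers, a minimal sketch of how the documented option would be used from Python. The option names (`trt_cuda_graph_enable`, `TensorrtExecutionProvider`) come from the patch above; the model path `model.onnx` is a hypothetical placeholder, and the session creation is shown commented out because it requires onnxruntime-gpu built with TensorRT support:

```python
# Provider options for the TensorRT EP. Per the docs note in the patch,
# bool options take True/False in Python (1/0 when set via environment
# variables such as ORT_TENSORRT_CUDA_GRAPH_ENABLE).
trt_ep_options = {
    # Capture a CUDA graph to reduce CPU kernel-launch overhead
    "trt_cuda_graph_enable": True,
}

providers = [
    ("TensorrtExecutionProvider", trt_ep_options),
    "CUDAExecutionProvider",  # fallback for ops TensorRT cannot run
]

# Requires a TensorRT-enabled onnxruntime build and a real model file:
# import onnxruntime as ort
# session = ort.InferenceSession("model.onnx", providers=providers)
```

Since CUDA graph capture replays a fixed sequence of kernel launches, it is typically most useful for models with static shapes where the same graph is executed repeatedly.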