Update TensorRT documentation
jingyanwangms committed Nov 13, 2024
1 parent a496723 commit 4aed926
Showing 2 changed files with 6 additions and 0 deletions.
2 changes: 2 additions & 0 deletions docs/build/eps.md
@@ -144,6 +144,8 @@ See more information on the TensorRT Execution Provider [here](../execution-providers/TensorRT-ExecutionProvider.md)

Dockerfile instructions are available [here](https://github.com/microsoft/onnxruntime/tree/main/dockerfiles#tensorrt)

**Note:** Building with `--use_tensorrt_oss_parser` against TensorRT 8.x requires the additional flag `--cmake_extra_defines onnxruntime_USE_FULL_PROTOBUF=ON`.
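For illustration, a minimal sketch of such a build invocation (all `*_home` paths below are placeholders; adjust them for your environment):

```bash
# Hypothetical example: build ONNX Runtime with the TensorRT EP and the OSS
# parser against TensorRT 8.x; the full-protobuf define is required here.
./build.sh --config Release --parallel \
    --use_cuda --cuda_home /usr/local/cuda --cudnn_home /usr/lib/x86_64-linux-gnu \
    --use_tensorrt --tensorrt_home /usr/local/TensorRT \
    --use_tensorrt_oss_parser \
    --cmake_extra_defines onnxruntime_USE_FULL_PROTOBUF=ON
```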

---

## NVIDIA Jetson TX1/TX2/Nano/Xavier/Orin
4 changes: 4 additions & 0 deletions docs/execution-providers/TensorRT-ExecutionProvider.md
@@ -824,3 +824,7 @@ This example shows how to run the Faster R-CNN model on TensorRT execution provider
```

Please see [this Notebook](https://github.com/microsoft/onnxruntime/blob/main/docs/python/notebooks/onnx-inference-byoc-gpu-cpu-aks.ipynb) for an example of running a model on GPU using ONNX Runtime through Azure Machine Learning Services.

## Known Issues
- The TensorRT 8.6 built-in parser and the TensorRT OSS parser behave differently: the built-in parser cannot recognize some custom plugin ops that the OSS parser can (a sketch of how this surfaces is shown below). See [EfficientNMS_TRT missing attribute class_agnostic w/ TensorRT 8.6](https://github.com/microsoft/onnxruntime/issues/16121).
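A minimal sketch of where this difference shows up, assuming a hypothetical `model.onnx` that contains the `EfficientNMS_TRT` plugin op:

```python
import onnxruntime as ort

# model.onnx is a placeholder for a detection model exported with the
# EfficientNMS_TRT plugin op (e.g. a TensorRT-style NMS head).
sess = ort.InferenceSession(
    "model.onnx",
    providers=["TensorrtExecutionProvider", "CUDAExecutionProvider"],
)

# With a build using the TensorRT 8.6 built-in parser, session creation may
# fail on the plugin op (see issue #16121); a build configured with
# --use_tensorrt_oss_parser can parse the same model.
print(sess.get_providers())
```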
