From c0a085a12953c70fdd67699b9d1f82c2acf6885d Mon Sep 17 00:00:00 2001
From: Tianlei Wu
Date: Fri, 26 Jan 2024 10:34:59 -0800
Subject: [PATCH] [CUDA] update python doc for user_compute_stream (#19245)

### Description

Update python doc about user_compute_stream in CUDA python API for
https://github.com/microsoft/onnxruntime/pull/19229.

### Motivation and Context

---
 docs/execution-providers/CUDA-ExecutionProvider.md | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/docs/execution-providers/CUDA-ExecutionProvider.md b/docs/execution-providers/CUDA-ExecutionProvider.md
index f73df1ebf0909..33861a896b712 100644
--- a/docs/execution-providers/CUDA-ExecutionProvider.md
+++ b/docs/execution-providers/CUDA-ExecutionProvider.md
@@ -61,7 +61,15 @@ Default value: 0
 
 Defines the compute stream for the inference to run on. It implicitly sets the `has_user_compute_stream` option.
 It cannot be set through `UpdateCUDAProviderOptions`, but rather `UpdateCUDAProviderOptionsWithValue`. This cannot be used in combination with an external allocator.
-This can not be set using the python API.
+
+Example python usage:
+```python
+providers = [("CUDAExecutionProvider", {"device_id": torch.cuda.current_device(), "user_compute_stream": str(torch.cuda.current_stream().cuda_stream)})]
+sess_options = ort.SessionOptions()
+sess = ort.InferenceSession("my_model.onnx", sess_options=sess_options, providers=providers)
+```
+
+To take advantage of the user compute stream, it is recommended to use [I/O Binding](../api/python/api_summary.html#data-on-device) to bind inputs and outputs to tensors on device.
 
 ### do_copy_in_default_stream
 Whether to do copies in the default stream or use separate streams. The recommended setting is true. If false, there are race conditions and possibly better performance.
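
The I/O Binding recommendation in the patch above can be sketched as follows. This is a minimal, hedged sketch: the model path, input/output names, and tensor shape are hypothetical, and the GPU-dependent onnxruntime/torch calls are shown as comments since they require `onnxruntime-gpu` and a CUDA-capable torch build. Only the provider-list construction runs as-is.

```python
def make_cuda_provider(device_id, stream_ptr):
    """Build a CUDAExecutionProvider entry. The stream pointer is passed as a
    decimal string, matching the patch's user_compute_stream example."""
    return ("CUDAExecutionProvider",
            {"device_id": str(device_id), "user_compute_stream": str(stream_ptr)})

# Pure-Python part (no GPU needed): construct the providers list.
providers = [make_cuda_provider(0, 0)]  # 0 is a placeholder stream handle

# GPU part (hypothetical names; requires onnxruntime-gpu and torch with CUDA):
# import numpy as np
# import onnxruntime as ort
# import torch
# providers = [make_cuda_provider(torch.cuda.current_device(),
#                                 torch.cuda.current_stream().cuda_stream)]
# sess = ort.InferenceSession("my_model.onnx", providers=providers)
# x = torch.randn(1, 3, 224, 224, device="cuda")        # hypothetical input shape
# binding = sess.io_binding()
# binding.bind_input(name="input", device_type="cuda", device_id=0,
#                    element_type=np.float32, shape=tuple(x.shape),
#                    buffer_ptr=x.data_ptr())
# binding.bind_output("output", device_type="cuda", device_id=0)
# sess.run_with_iobinding(binding)  # runs on the user-provided stream
```

Binding both inputs and outputs to device tensors keeps the whole inference on the user-provided stream and avoids implicit host/device copies.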