How to determine which provider current inference is using? #22243
Labels: core runtime, stale
Hi, I am a newcomer to the community and have some questions to ask.
The problem I encountered is that, for some reason, inference switched from the GPU to the CPU on its own.
Specifically:
From 0 to 60 minutes: GPU memory usage was about 500 MB, GPU utilization was about 50%, and CPU utilization was about 30%.
After 60 minutes: GPU memory usage stayed at about 500 MB, but GPU utilization dropped to 0% and CPU utilization rose to 100%.
Therefore, I now want to confirm which provider each inference ran on, monitor this for every inference, and automatically switch back to the GPU provider if the previous inference ran on the CPU.
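For context, this is roughly how the session is currently set up (a minimal sketch, not my actual code; the model path and provider options are placeholders):

```cpp
#include <onnxruntime_cxx_api.h>

int main() {
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "provider-check");

  // Default-initialized CUDA provider options (device_id = 0).
  OrtCUDAProviderOptions cuda_options{};

  Ort::SessionOptions options;
  // Prefer the CUDA execution provider; as far as I understand, the CPU
  // provider stays registered as a fallback for nodes CUDA cannot handle.
  options.AppendExecutionProvider_CUDA(cuda_options);

  // "model.onnx" is a placeholder path.
  Ort::Session session(env, ORT_TSTR("model.onnx"), options);

  // ... Run() in a loop as usual ...
  return 0;
}
```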
After some searching, I found the GetAvailableProviders() function in the documentation, but its return value does not seem to accurately reflect which providers can actually be used at the moment.
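For reference, I call it roughly like this (a minimal sketch); as far as I can tell, it only lists the providers compiled into this build of ONNX Runtime, not the provider a particular inference actually ran on:

```cpp
#include <onnxruntime_cxx_api.h>
#include <iostream>
#include <string>
#include <vector>

int main() {
  // Returns the execution providers available in this build of ONNX Runtime,
  // e.g. "CUDAExecutionProvider", "CPUExecutionProvider".
  std::vector<std::string> providers = Ort::GetAvailableProviders();
  for (const std::string& p : providers) {
    std::cout << p << "\n";
  }
  return 0;
}
```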
I also checked issue 486 and the documentation, but still could not find the interface I am looking for.
I am currently using ONNX Runtime 1.12.1 with the C++ API.
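One workaround I am considering, in case it helps to frame the question: enabling ORT profiling and reading the provider from the generated JSON trace. This is only a sketch under the assumption that the per-node events in the profile record the execution provider; I have not verified this, and the exact EndProfiling method name may differ between versions:

```cpp
#include <onnxruntime_cxx_api.h>
#include <iostream>

int main() {
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "provider-check");

  Ort::SessionOptions options;
  // Prefix for the profiling JSON file that ORT writes.
  options.EnableProfiling(ORT_TSTR("ort_profile"));

  Ort::Session session(env, ORT_TSTR("model.onnx"), options);  // placeholder path

  // ... Run() inference as usual ...

  // In 1.12 I believe this returns the profile file path as an
  // allocator-owned string; newer releases expose EndProfilingAllocated().
  Ort::AllocatorWithDefaultOptions allocator;
  char* profile_path = session.EndProfiling(allocator);
  std::cout << "profile written to: " << profile_path << "\n";
  allocator.Free(profile_path);

  // The node events in that JSON appear to include the execution provider
  // (CUDA vs CPU), which is what I want to monitor after each inference.
  return 0;
}
```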
Thanks very much.