Hello everyone, I have downloaded a project from GitHub (https://github.com/microsoft/ai-powered-notes-winui3-sample?tab=readme-ov-file) and now I want to switch the project's execution provider (EP) to OpenVINO, as shown below.
But currently I cannot enable the NPU; inference still runs on the GPU. I followed the instructions in the documentation (https://onnxruntime.ai/docs/build/eps.html#openvino) to build ONNX Runtime with different EPs, but both builds failed.
Currently, when I query the available execution providers, the output is shown below.
DmlExecutionProvider
CPUExecutionProvider
I would like to know how I can configure the project so that inference runs on the NPU.