ONNXRT default CPU EP vs OpenVINO EP performance #12316
Where does the model come from? And could you please check whether the OpenVINO EP uses the GPU for computing? |
Hi @yufenglee, thank you for your response. This is the ResNet18 model I am using, from the ONNX Model Zoo: https://drive.google.com/file/d/1uQt_UYHluOfTq_DzMdlyz4OYe7aihP7X/view?usp=sharing Yes, that may be the reason for the speedup; I will check whether it uses the GPU at the backend. Also, would the dedicated (NVIDIA) GPU be used, or the integrated GPU? |
Just to note that I have built ONNXRT for CPU FP32: |
Hi @yufenglee @snnn, looking forward to your reply. I believe OpenVINO uses multiple threads at the backend even though we configure ONNXRT to run with one thread. |
OpenVINO uses multiple threads (= number of physical cores). |
See this PR: #18596
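The claim that the OpenVINO EP spawns its own worker threads regardless of `SetIntraOpNumThreads(1)` can be checked empirically. Below is a minimal, Linux-only sketch (pure Python, no ONNX Runtime dependency; the helper name is mine) that counts OS-level threads of the current process, which also sees threads created by native libraries:

```python
import os

def count_native_threads() -> int:
    """Count OS-level threads of the current process (Linux only).

    Each entry under /proc/self/task is one native thread, so this
    also observes threads spawned by native libraries such as
    OpenVINO, which Python's threading module cannot see.
    """
    return len(os.listdir("/proc/self/task"))

# Hypothetical usage: call this before and after creating an
# InferenceSession with the OpenVINO EP; a jump roughly equal to the
# number of physical cores would support the explanation above.
```

This is only a diagnostic sketch; tools like `top -H` or `ps -L` give the same information interactively.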
Hi ONNXRT team,
I was comparing the performance of the ONNXRT default CPU EP against the OpenVINO EP, and I found that the OpenVINO EP is much faster than the default CPU EP on an Intel CPU.
Here are the timings with one thread (session_options.SetIntraOpNumThreads(1);):

ResNet18:
Default CPU = 27 ms
OpenVINO = 7 ms

One layer (input shape 1x256x56x56, weight shape 64x256x3x3):
Default CPU = 7 ms
OpenVINO = 2 ms
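Single-number latencies like these are sensitive to warmup and run-to-run variance. A small, generic timing harness (pure Python; `run` is a placeholder for something like a `session.run(...)` call, which is not shown here) can make the comparison fairer:

```python
import statistics
import time

def bench(run, warmup: int = 10, iters: int = 100) -> float:
    """Return the median latency of run() in milliseconds.

    Warmup iterations let caches and thread pools settle; the
    median is more robust to outliers than the mean.
    """
    for _ in range(warmup):
        run()
    samples = []
    for _ in range(iters):
        t0 = time.perf_counter()
        run()
        samples.append((time.perf_counter() - t0) * 1000.0)
    return statistics.median(samples)

# Hypothetical usage with ONNX Runtime sessions (not executed here):
#   cpu_ms = bench(lambda: cpu_session.run(None, feeds))
#   ov_ms  = bench(lambda: ov_session.run(None, feeds))
```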
Do these numbers look right? Which EP is expected to be faster on x86 for CNNs? I would also appreciate any comments on the results above.
Below is the info about the CPU I am using:
Look forward to your reply.
Thanks