
ONNXRT default CPU EP vs Openvino EP Performance #12316

Open
Darshvino opened this issue Jul 26, 2022 · 7 comments
Labels
core runtime issues related to core runtime

Comments

@Darshvino

Darshvino commented Jul 26, 2022

Hi ONNXRT team,

I was comparing the performance of the ONNX Runtime default CPU EP against the OpenVINO EP, and I found that the OpenVINO EP is much faster than the default CPU EP on an Intel CPU.

Here are the timings with one thread (session_options.SetIntraOpNumThreads(1);):

Resnet18:

Default CPU = 27 ms

OpenVino = 7 ms

One layer: Input shape: 1x256x56x56, Weight shape: 64x256x3x3

Default CPU = 7 ms

Openvino = 2 ms

Do these numbers look right? Which EP is expected to be faster on x86 for CNNs? I would also appreciate any comments on the above results.
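For what it's worth, one way to make per-run latencies like these comparable is to warm up first and report the median over many runs rather than a single measurement. Below is a minimal pure-Python sketch; `bench` and the dummy workload are my own names, not part of ONNX Runtime:

```python
import statistics
import time

def bench(run, warmup=5, iters=50):
    """Median wall-clock latency of run() in milliseconds, after warm-up.

    `run` is any zero-argument callable; with ONNX Runtime it would be
    something like `lambda: sess.run(None, feeds)` (hypothetical names).
    """
    for _ in range(warmup):  # let caches and thread pools settle
        run()
    samples = []
    for _ in range(iters):
        t0 = time.perf_counter()
        run()
        samples.append((time.perf_counter() - t0) * 1e3)
    return statistics.median(samples)
```

Measuring both EPs this way on the same inputs keeps one-off first-run costs (e.g. graph compilation) from skewing the comparison.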

Below is the info about the CPU I am using:

[CPU details were attached as a screenshot.]

Look forward to your reply.

Thanks

@Darshvino
Author

Looking forward to the reply.

Thanks

@yufenglee
Member

Where does the model come from? And could you please check whether the OpenVINO EP is using the GPU for computation?

@Darshvino
Author

Hi @yufenglee,

Thank you for your response.

This is the resnet18 model I am using; it is from the ONNX Model Zoo: https://drive.google.com/file/d/1uQt_UYHluOfTq_DzMdlyz4OYe7aihP7X/view?usp=sharing

Yeah, that could explain the speedup; I will check whether it uses the GPU as the backend. Also, would the dedicated (NVIDIA) GPU be used, or the integrated GPU?

@Darshvino
Author

@yufenglee,

Just to note that I built ONNX Runtime for CPU FP32: ./build.sh --config RelWithDebInfo --use_openvino CPU_FP32 --build_shared_lib --parallel

@Darshvino
Author

Darshvino commented Jul 28, 2022

Hi @yufenglee @snnn,

Look forward to your reply.

I believe OpenVINO is running with multiple threads in its backend, even though we configured ONNX Runtime to run with one thread.

@sophies927 sophies927 added core runtime issues related to core runtime and removed type:performance labels Aug 12, 2022
@weimeng23

OpenVINO uses multiple threads (equal to the number of physical cores).
I set IntraOpNumThreads=1, but it did not take effect.
You can use htop to watch CPU utilization.
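Besides watching htop, the thread count can be checked programmatically. The sketch below is a Linux-only helper of my own (it parses /proc/self/status; it is not an ONNX Runtime API) that returns the process's OS thread count; comparing its value before and after creating a session and running one inference shows how many worker threads the EP actually spawned:

```python
def thread_count():
    """Number of OS threads in the current process.

    Linux-only sketch: parses the "Threads:" field of /proc/self/status.
    """
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("Threads:"):
                return int(line.split()[1])
    raise RuntimeError("Threads: field not found in /proc/self/status")
```

If the count jumps by roughly the number of physical cores after the first run, the backend is ignoring the intra-op thread setting, as suspected above.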

@weimeng23

See PR #18596.
As of now, the latest release is ONNX Runtime v1.16.3, so it looks like that PR has not been included in a release yet.
