
[Performance] fp16 model performance decreases when the "inter op threads" setting is greater than 1. #18822

Closed
yuandcc opened this issue Dec 14, 2023 · 2 comments
Labels
quantization — issues related to quantization
stale — issues that have not been addressed in a while; categorized by a bot

Comments


yuandcc commented Dec 14, 2023

Describe the issue

I converted an fp32 model to fp16 with "convert_float_to_float16" and measured inference time on the same data. With intra_op_threads set to 1, 4, and 8, the fp32 model's cost time meets expectations. The fp16 model's cost time meets expectations at intra_op_threads = 1, but not at intra_op_threads = 4 or 8.
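For reference, the conversion step described above can be sketched in Python. This is a hedged sketch, assuming the `convert_float_to_float16` helper comes from the `onnxconverter_common` package (the issue does not name the exact module), with placeholder file paths:

```python
# Sketch of the fp32 -> fp16 conversion step described in the issue.
# Assumes the onnx and onnxconverter_common packages; imports are
# deferred into the function so this sketch loads without them installed.
def convert_model_to_fp16(src_path: str, dst_path: str) -> None:
    """Load an fp32 ONNX model, cast it to fp16, and save it."""
    import onnx
    from onnxconverter_common import float16

    model = onnx.load(src_path)
    model_fp16 = float16.convert_float_to_float16(model)
    onnx.save(model_fp16, dst_path)
```

Usage would be something like `convert_model_to_fp16("fp32_model.onnx", "fp16_model.onnx")`, where both paths are hypothetical.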

To reproduce

Using both the fp32 model and the fp16 model, test different values of intra_op_threads.
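The measurement above can be sketched with a minimal Python timing harness (the reporter uses the C++ API; this is a hedged Python equivalent, and `model_path` / `input_feed` are placeholders):

```python
import time

def time_inference(model_path, input_feed, n_threads, repeats=50):
    """Average per-run latency for one intra-op thread setting."""
    import onnxruntime as ort  # deferred; assumes onnxruntime is installed

    opts = ort.SessionOptions()
    opts.intra_op_num_threads = n_threads  # the setting varied in this issue
    sess = ort.InferenceSession(model_path, sess_options=opts,
                                providers=["CPUExecutionProvider"])
    sess.run(None, input_feed)  # warm-up run, excluded from timing
    start = time.perf_counter()
    for _ in range(repeats):
        sess.run(None, input_feed)
    return (time.perf_counter() - start) / repeats
```

Running this for each model (fp32 and fp16) with `n_threads` set to 1, 4, and 8 and comparing the returned latencies would reproduce the comparison in the issue.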

Urgency

No response

Platform

Linux

OS Version

Ubuntu 20.04

ONNX Runtime Installation

Released Package

ONNX Runtime Version or Commit ID

1.16.3

ONNX Runtime API

C++

Architecture

X64

Execution Provider

Default CPU

Execution Provider Library Version

No response

Model File

No response

Is this a quantized model?

Yes

github-actions bot added the quantization label Dec 14, 2023

This issue has been automatically marked as stale due to inactivity and will be closed in 30 days if no further activity occurs. If further support is needed, please provide an update and/or more details.

github-actions bot added the stale label Jan 13, 2024

This issue has been automatically closed due to inactivity. Please reactivate if further support is needed.

github-actions bot closed this as not planned (stale) Feb 12, 2024