Querying intermediate results #1839

Open
rajesh-s opened this issue Aug 27, 2024 · 1 comment

Comments

@rajesh-s

rajesh-s commented Aug 27, 2024

I am running the MLPerf Inference datacenter suite on a CPU-only device, following the instructions in the documentation.

The suggested sample sizes/query counts seem to take a very long time to complete.

  1. Would it be possible to query intermediate results (such as throughput) while the benchmark is executing?
  2. How do the sample sizes correlate with the accuracy of the results? For instance, does a llama2 run on CPU need the same sample count (24576) as on GPU? This is suggested here

I see the following prints on my terminal, but I am not sure how to interpret these results:
[terminal output screenshot not included]

@arjunsuresh
Contributor

  1. You can run with --execution_mode=test --test_query_count=100 to get a quick result (a minimal command sketch follows below), but this won't be accepted as an official one.
  2. Yes, a minimum of 24576 inputs needs to be run for llama2, and the accuracy value also changes if a lower number of inputs is run. For this reason, no one has attempted a llama2-70b submission on CPUs.
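
For anyone looking for the full command shape, here is a minimal sketch of such a test run using the CM automation. The `cm run script` wrapper and the `--tags`, `--model`, and `--device` values are illustrative assumptions based on the MLPerf automation docs; only `--execution_mode=test` and `--test_query_count=100` come from the comment above.

```bash
# Minimal sketch (assumed CM invocation): a quick MLPerf Inference trial run.
# The --tags/--model/--device values are placeholders; substitute the benchmark
# you are actually running.
cm run script --tags=run-mlperf,inference \
    --model=llama2-70b-99 \
    --device=cpu \
    --execution_mode=test \
    --test_query_count=100
```

A run like this reports throughput quickly, but as noted above it is not valid for an official submission.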
