[Auto Benchmarks] Precision adjustment of latency #3538

Open
wants to merge 6 commits into base: main

Conversation

@dailinsubjam (Contributor) commented Aug 5, 2024

Closes #<ISSUE_NUMBER>

This PR:

  • Changes the latency precision from i64 to f64 (a rough sketch of the effect follows below).
  • Reduces error logging by increasing the frequency of calling wait_for_fn_from_orchestrator.
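
As a rough sketch of the precision change (the variable names here are illustrative assumptions, not the actual diff):

```rust
fn main() {
    // Before: the average latency is computed and reported as an i64 (milliseconds).
    let total_latency: i64 = 12_345; // milliseconds
    let num_latency: i64 = 10;
    let avg_latency_ms = total_latency / num_latency; // 1234

    // After: the same quantity is reported as an f64 in seconds.
    let avg_latency_in_sec = (total_latency / num_latency) as f64 / 1000.0; // 1.234
    println!("avg latency: {avg_latency_ms} ms -> {avg_latency_in_sec} sec");
}
```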

This PR does not:

Key places to review:

@CLAassistant

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you all sign our Contributor License Agreement before we can accept your contribution.
1 out of 2 committers have signed the CLA.

✅ dailinsubjam
❌ Sishan Long


Sishan Long does not appear to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account.
You have signed the CLA already but the status is still pending? Let us recheck it.

@jparr721 (Contributor) left a comment


Some overall comments. At a high level, it seems like casting in each individual spot might create a bit of whack-a-mole whenever we need to update this. Avoiding instances of that type of logic might be a little cleaner.
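
One way to avoid the per-call-site casts described here, sketched under assumed names (this is not a concrete proposal from the PR), is to convert to f64 seconds exactly once, where a latency sample is recorded:

```rust
// Hypothetical aggregator that stores latency as f64 seconds from the start,
// so downstream code never needs `as f64` casts. All names are assumed.
struct LatencyStats {
    total_latency_sec: f64,
    num_latency: u64,
}

impl LatencyStats {
    fn record(&mut self, latency_ms: i64) {
        // Convert exactly once, at the boundary where the raw measurement arrives.
        self.total_latency_sec += latency_ms as f64 / 1000.0;
        self.num_latency += 1;
    }

    fn avg_latency_in_sec(&self) -> f64 {
        if self.num_latency == 0 {
            0.0
        } else {
            self.total_latency_sec / self.num_latency as f64
        }
    }
}
```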

status: tide_disco::StatusCode::BAD_REQUEST,
message: "Peer's public configs are not ready".to_string(),
});
return Ok(false);

If we're no longer throwing an error it seems like this function should just return a boolean instead of a result? How come we've switched this?
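
For reference, the shape being asked about would look roughly like this; the function name and its input are assumptions for illustration, not code from the PR:

```rust
// If the "not ready" case no longer produces an error, a Result<bool, _> whose
// Err variant never occurs adds little; a plain bool tells the caller everything.
fn peer_public_configs_ready(configs: &[Option<String>]) -> bool {
    configs.iter().all(|c| c.is_some())
}
```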

println!("[{node_index}]: throughput: {throughput_bytes_per_sec} bytes/sec, avg_latency: {avg_latency_in_sec} sec.");
BenchResults {
partial_results: "Unset".to_string(),
avg_latency_in_sec,
num_latency,
-    minimum_latency_in_sec: minimum_latency,
-    maximum_latency_in_sec: maximum_latency,
+    minimum_latency_in_sec: minimum_latency as f64 / 1000.0,

Why not just make minimum/maximum_latency f64 to avoid this whole casting business?
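
A sketch of that suggestion with assumed names: track the extremes as f64 seconds from the start, so constructing BenchResults needs no casts.

```rust
fn main() {
    // Assumed input: raw latency samples in milliseconds.
    let latencies_ms: Vec<i64> = vec![120, 55, 310];

    // Keep the running extremes in f64 seconds; no cast is needed later.
    let mut minimum_latency_in_sec = f64::MAX;
    let mut maximum_latency_in_sec = f64::MIN;
    for latency_ms in &latencies_ms {
        let latency_sec = *latency_ms as f64 / 1000.0;
        minimum_latency_in_sec = minimum_latency_in_sec.min(latency_sec);
        maximum_latency_in_sec = maximum_latency_in_sec.max(latency_sec);
    }
    println!("min: {minimum_latency_in_sec} sec, max: {maximum_latency_in_sec} sec");
}
```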

@@ -560,14 +561,14 @@ pub trait RunDa<
let throughput_bytes_per_sec = total_transactions_committed
* (transaction_size_in_bytes + 8)
/ total_time_elapsed_sec;
- let avg_latency_in_sec = total_latency / num_latency;
+ let avg_latency_in_sec = (total_latency / num_latency) as f64 / 1000.0;

We could just make total_latency an f64 since it appears to be measuring time, and just cast num_latency
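
As a sketch of that alternative (names assumed, not the PR's code), the average then needs only a single cast on the count and no trailing / 1000.0:

```rust
fn main() {
    // Assume total_latency is accumulated as f64 seconds up front.
    let total_latency: f64 = 12.345; // seconds
    let num_latency: u64 = 10;

    // Only the sample count needs a cast; the result is already in seconds.
    let avg_latency_in_sec = total_latency / num_latency as f64;
    println!("avg latency: {avg_latency_in_sec} sec");
}
```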

Comment on lines +483 to +485
* (metrics.num_latency as f64)
+ cur_metrics.avg_latency_in_sec * (cur_metrics.num_latency as f64))
/ (metrics.num_latency + cur_metrics.num_latency) as f64;

All this casting is a bit hard to follow. We first cast num_latency to f64, then cur_metrics' num_latency, then the combined operation of the both of them. Could we perhaps just cast them once as variables to make reading this statement easier? Right now a misplaced closing parenthesis could mess this whole statement up in a not-obvious way.
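
A sketch of the "cast once into variables" idea; the Metrics struct below is a stand-in for whatever type metrics and cur_metrics actually have:

```rust
// Stand-in type for illustration only; field names follow the snippet above.
struct Metrics {
    avg_latency_in_sec: f64,
    num_latency: u64,
}

fn combined_avg_latency(metrics: &Metrics, cur_metrics: &Metrics) -> f64 {
    // Name the casts once so the weighted average reads without nested `as f64`s.
    let new_count = metrics.num_latency as f64;
    let cur_count = cur_metrics.num_latency as f64;
    (metrics.avg_latency_in_sec * new_count + cur_metrics.avg_latency_in_sec * cur_count)
        / (new_count + cur_count)
}
```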
