
Benchmark improvements #2332

Open
bdemann opened this issue Dec 18, 2024 · 1 comment
Comments

bdemann commented Dec 18, 2024

To improve the accuracy and reliability of our benchmarks, we could explore the following approaches:

  1. Filtering dissimilar methods: Ensure we are comparing apples to apples by filtering out methods that do not appear in both benchmark runs. By aligning the number and type of method executions, we get a clearer picture of relative performance between the two benchmarks.

  2. Unified benchmarking canister: Create a single canister that contains a diverse set of operations. Benchmarks would be run solely on this canister to evaluate performance across different versions of Azle. This would provide a consistent and controlled comparison.
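The filtering step in approach 1 could be sketched as follows. The issue does not specify how benchmark results are stored, so the `BenchmarkEntry` shape, the `method`/`instructions` field names, and the `filterComparable` helper are all hypothetical:

```typescript
// Hypothetical shape for one recorded benchmark entry (assumed, not from Azle's actual output).
type BenchmarkEntry = {
    method: string;
    instructions: bigint;
};

// Keep only entries whose method appears in both runs, so the two
// benchmark result sets cover the same set of method executions.
function filterComparable(
    baseline: BenchmarkEntry[],
    candidate: BenchmarkEntry[]
): [BenchmarkEntry[], BenchmarkEntry[]] {
    const baselineMethods = new Set(baseline.map((entry) => entry.method));
    const candidateMethods = new Set(candidate.map((entry) => entry.method));
    const shared = new Set(
        [...baselineMethods].filter((method) => candidateMethods.has(method))
    );
    return [
        baseline.filter((entry) => shared.has(entry.method)),
        candidate.filter((entry) => shared.has(entry.method))
    ];
}
```

This only aligns the *set* of methods; aligning the *number* of executions per method would additionally require truncating or pairing entries per method.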

bdemann commented Dec 18, 2024

Approach 2 has the advantage of allowing us to run the benchmarks multiple times and calculate an average across those runs. This would provide a more reliable and accurate measure of how each version is performing.
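A minimal sketch of the averaging described above, assuming each run of the unified canister produces a map from method name to instruction count (the `RunResult` shape and `averagePerMethod` helper are illustrative, not Azle APIs):

```typescript
// Assumed shape: one benchmark run maps each method name to its instruction count.
type RunResult = Map<string, bigint>;

// Average the instruction count for each method across repeated runs,
// using bigint division (truncates toward zero).
function averagePerMethod(runs: RunResult[]): Map<string, bigint> {
    const averages = new Map<string, bigint>();
    if (runs.length === 0) {
        return averages;
    }
    for (const method of runs[0].keys()) {
        let total = 0n;
        for (const run of runs) {
            total += run.get(method) ?? 0n;
        }
        averages.set(method, total / BigInt(runs.length));
    }
    return averages;
}
```

Reporting a spread (min/max or standard deviation) alongside the mean would further help distinguish real regressions from run-to-run noise.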
