To improve the accuracy and reliability of our benchmarks, we could explore the following approaches:
1. Filtering dissimilar methods: Ensure we are comparing apples to apples by filtering out methods that are not common to both benchmark runs. By aligning the number and type of method executions, we can get a clearer picture of how the benchmarks compare.
2. Unified benchmarking canister: Create a single canister that contains a diverse set of operations. Benchmarks would be run solely on this canister to evaluate performance across different versions of Azle. This would provide a consistent and controlled comparison.
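Approach 1 could be sketched roughly as follows. The `BenchmarkEntry` shape and field names here are assumptions for illustration, not Azle's actual benchmark types:

```typescript
// Sketch: reduce two benchmark result sets to only the methods they share,
// so the comparison covers the same set of method executions.
type BenchmarkEntry = { method: string; instructions: number };

function filterToCommonMethods(
    a: BenchmarkEntry[],
    b: BenchmarkEntry[]
): [BenchmarkEntry[], BenchmarkEntry[]] {
    const methodsA = new Set(a.map((entry) => entry.method));
    const methodsB = new Set(b.map((entry) => entry.method));

    // Keep only methods that appear in both result sets
    const common = new Set([...methodsA].filter((m) => methodsB.has(m)));

    return [
        a.filter((entry) => common.has(entry.method)),
        b.filter((entry) => common.has(entry.method)),
    ];
}
```

Comparing only the intersection means a version that adds or removes canister methods would not skew the aggregate numbers.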
Approach 2 has the advantage of allowing us to run the benchmarks multiple times and calculate an average across those runs. This would provide a more reliable and accurate measure of how each version is performing.
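The averaging step for approach 2 might look like this. Representing each run as a map from method name to instruction count is an assumption for the sketch:

```typescript
// Sketch: average instruction counts per method across repeated benchmark runs
// of the unified canister.
function averagePerMethod(runs: Map<string, number>[]): Map<string, number> {
    const totals = new Map<string, { sum: number; count: number }>();

    // Accumulate a running sum and count for each method
    for (const run of runs) {
        for (const [method, instructions] of run) {
            const entry = totals.get(method) ?? { sum: 0, count: 0 };
            entry.sum += instructions;
            entry.count += 1;
            totals.set(method, entry);
        }
    }

    // Divide each method's total by the number of runs it appeared in
    return new Map(
        [...totals].map(([method, { sum, count }]) => [method, sum / count])
    );
}
```

Averaging over several runs would smooth out per-run noise (e.g. replica scheduling variance) before two Azle versions are compared.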