Currently we have a limited set of repeatable benchmarks, and we are using assumptions to extrapolate how our results compare with results obtained from other environments. The more we add to our repeatable benchmarks, the more confident we can be in the validity of our results and progress.

Here are several dimensions we talked about adding:
- Non-raw data access, e.g. EF Core and Dapper on our mini-benchmark (a rough sketch follows this list)
- Non-.NET tests that we can run alongside our mini-benchmarks, to make sure we are comparing actual results on the same hardware
- SqlClient on our mini-benchmark
- Include the Redis case on TechEmpower and the mini-benchmark
- Add MongoDB on the mini-benchmark (it was already added on TechEmpower)
- Ability to run on physical hardware: this one is not about adding new tests or dimensions but about setting up infrastructure so we can run the same tests on both "cloud" and "physical". All our results right now are "cloud" only.
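To make the "non-raw" dimension concrete, here is a minimal sketch of the same single-row query implemented three ways (raw SqlClient, Dapper, and EF Core) so the overhead of each data-access layer could be measured on the same hardware. The `Fortune` table, connection string, and method names are illustrative assumptions, not the repo's actual benchmark code.

```csharp
using System.Data.SqlClient;
using System.Threading.Tasks;
using Dapper;
using Microsoft.EntityFrameworkCore;

public class Fortune
{
    public int Id { get; set; }
    public string Message { get; set; }
}

public class FortuneContext : DbContext
{
    private readonly string _connectionString;
    public FortuneContext(string connectionString) => _connectionString = connectionString;
    public DbSet<Fortune> Fortunes { get; set; }
    protected override void OnConfiguring(DbContextOptionsBuilder options)
        => options.UseSqlServer(_connectionString);
}

public static class DataAccessBenchmarks
{
    // Assumption: in a real harness the connection string would be supplied
    // by configuration rather than hardcoded here.
    const string ConnectionString = "<supplied by the benchmark harness>";

    // Baseline: raw ADO.NET over SqlClient, no ORM overhead.
    public static async Task<Fortune> RawAsync(int id)
    {
        using (var connection = new SqlConnection(ConnectionString))
        using (var command = new SqlCommand(
            "SELECT Id, Message FROM Fortunes WHERE Id = @Id", connection))
        {
            command.Parameters.AddWithValue("@Id", id);
            await connection.OpenAsync();
            using (var reader = await command.ExecuteReaderAsync())
            {
                await reader.ReadAsync();
                return new Fortune { Id = reader.GetInt32(0), Message = reader.GetString(1) };
            }
        }
    }

    // Dapper: micro-ORM mapping on top of the same connection type.
    public static async Task<Fortune> DapperAsync(int id)
    {
        using (var connection = new SqlConnection(ConnectionString))
        {
            return await connection.QuerySingleAsync<Fortune>(
                "SELECT Id, Message FROM Fortunes WHERE Id = @Id", new { Id = id });
        }
    }

    // EF Core: full ORM; AsNoTracking avoids change-tracking cost on reads.
    public static async Task<Fortune> EfCoreAsync(int id)
    {
        using (var db = new FortuneContext(ConnectionString))
        {
            return await db.Fortunes.AsNoTracking().SingleAsync(f => f.Id == id);
        }
    }
}
```

Running all three variants against the same database in the same run would let us attribute any throughput difference to the data-access layer itself rather than to environment differences.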
We have a few ideas of what to do first. @anpete and @sebastienros will pick items from this list.