Header Generation Benchmark in CI #251
Conversation
@fmiguelgarcia Is there anything wrong with Criterion? Besides this, I got the feeling that we should have two separate tasks: one for implementing the benchmarks
Codecov Report

Attention: Your organization needs to install the Codecov GitHub app to enable full functionality.

@@           Coverage Diff            @@
##             main     #251      +/-  ##
=========================================
- Coverage   42.91%   42.57%    -0.34%
=========================================
  Files          88       88
  Lines       12914    13038     +124
=========================================
+ Hits         5542     5551       +9
- Misses       7372     7487     +115

View full report in Codecov by Sentry.
As you suggested, I've also added att. @markopoloparadox
Pull Request type
Please add the labels corresponding to the type of changes your PR introduces:
Description
It adds a couple of performance benchmarks to the CI, so any performance regression greater than 15% will be marked as a failure. The initial benchmarks measure Kate commitment generation over different numbers of columns (e.g. 32, 64, ..., 256). There are two methods for performance measurement:
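To make the CI check concrete, here is a minimal, self-contained sketch of the regression gate the description mentions: time a commitment routine over several column counts and fail when the measurement exceeds a stored baseline by more than 15%. The `build_commitment` function and the inline baseline are hypothetical stand-ins; the real PR benchmarks the repository's Kate commitment code (via Criterion) and compares against a baseline from a previous CI run.

```rust
use std::time::Instant;

// Hypothetical stand-in for the real Kate commitment routine; the actual
// implementation lives in the repository's kate crate.
fn build_commitment(cols: usize) -> u64 {
    // Dummy work whose cost grows with the number of columns.
    (0..cols as u64 * 1_000).fold(0u64, |acc, x| acc.wrapping_add(x.wrapping_mul(x)))
}

/// Median wall-clock time (in nanoseconds) of `f` over `runs` runs.
fn median_ns<F: FnMut()>(mut f: F, runs: usize) -> u128 {
    let mut samples: Vec<u128> = (0..runs)
        .map(|_| {
            let start = Instant::now();
            f();
            start.elapsed().as_nanos()
        })
        .collect();
    samples.sort_unstable();
    samples[samples.len() / 2]
}

fn main() {
    const REGRESSION_THRESHOLD: f64 = 0.15; // 15%, as in the PR description

    for cols in [32, 64, 128, 256] {
        let measured = median_ns(
            || {
                std::hint::black_box(build_commitment(cols));
            },
            11,
        );
        // A real CI job would load the baseline from a stored previous run;
        // here we reuse the current measurement so the check is demonstrable.
        let baseline = measured;
        let regression = (measured as f64 - baseline as f64) / baseline as f64;
        let status = if regression > REGRESSION_THRESHOLD { "FAIL" } else { "ok" };
        println!("{cols} columns: {measured} ns ({status})");
    }
}
```

With Criterion, the equivalent comparison is typically done by saving a baseline on the main branch and benchmarking the PR branch against it, rather than hand-rolling the timing loop as above.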
Related Issues
Testing Performed
Checklist
- `cargo test`.
- `cargo fmt`.
- `cargo build --release` and `cargo build --release --features runtime-benchmarks`.
- `cargo clippy`.