
Header Generation Benchmark in CI #251

Merged
merged 61 commits into from
Nov 4, 2023
Conversation

fmiguelgarcia
Collaborator

@fmiguelgarcia fmiguelgarcia commented Oct 3, 2023

Pull Request type

Please add the labels corresponding to the type of changes your PR introduces:

  • Feature
  • Bugfix
  • Refactor
  • Format
  • Documentation
  • Testing
  • Other: CI

Description

This PR adds a couple of performance benchmarks to the CI, so that any performance regression greater than 15% is marked as a failure. The initial benchmarks measure Kate commitment generation over different numbers of columns (e.g. 32, 64, ..., 256).
There are two methods of performance measurement:

  • Criterion benchmarks, running on our AWS reference machine.
  • IAI, an "experimental benchmarking harness that uses Cachegrind to perform extremely precise single-shot measurements ... it's useful for reliable benchmarking in CI"
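The shape of the Criterion-side measurement can be pictured with a self-contained sketch. This uses only `std::time` so it runs on its own; the real benchmarks use the `criterion` crate and the actual Kate commitment routine, and `build_commitment` below is a hypothetical placeholder, not the function from this PR:

```rust
use std::time::Instant;

// Placeholder standing in for the real Kate commitment generation;
// the actual benchmark builds commitments for a matrix of the given width.
fn build_commitment(cols: usize) -> u64 {
    // Dummy workload so the sketch is runnable on its own.
    (0..cols as u64).map(|i| i.wrapping_mul(i)).sum()
}

fn main() {
    // Same column counts as the PR description: 32, 64, ..., 256.
    for cols in (32..=256).step_by(32) {
        let start = Instant::now();
        let commitment = build_commitment(cols);
        let elapsed = start.elapsed();
        println!("cols={cols:>3} commitment={commitment} took {elapsed:?}");
    }
}
```

Criterion replaces the manual `Instant` timing with repeated sampling and statistical analysis, which is what makes the 15% regression threshold meaningful.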

Related Issues

Testing Performed

Checklist

  • I have performed a self-review of my own code.
  • The tests pass successfully with cargo test.
  • The code was formatted with cargo fmt.
  • The code compiles with no new warnings with cargo build --release and cargo build --release --features runtime-benchmarks.
  • The code has no new warnings when using cargo clippy.
  • If this change affects documented features or needs new documentation, I have created a PR with a documentation update.

@fmiguelgarcia fmiguelgarcia self-assigned this Oct 3, 2023
@fmiguelgarcia fmiguelgarcia changed the title Add valgrind to Github env [WIP] Header Generation Benchmark in CI Oct 3, 2023
@markopoloparadox
Contributor

markopoloparadox commented Oct 3, 2023

@fmiguelgarcia
Not sure if we should use iai.
The project is no longer maintained (the last commit is from 2021) and it doesn't work with newer versions of valgrind (>3.18):
bheisler/iai#37
bheisler/iai#36

Is there anything wrong with Criterion?

Besides this, I got the feeling that we should have two separate tasks: one for implementing the benchmarks, and a second one for the CI (which would include running the benchmarks and storing the results in a database).

@codecov-commenter

codecov-commenter commented Oct 3, 2023

Codecov Report

Attention: 329 lines in your changes are missing coverage. Please review.

Comparison is base (08854a7) 42.91% compared to head (c88616a) 42.57%.
Report is 48 commits behind head on main.


Additional details and impacted files
@@            Coverage Diff             @@
##             main     #251      +/-   ##
==========================================
- Coverage   42.91%   42.57%   -0.34%     
==========================================
  Files          88       88              
  Lines       12914    13038     +124     
==========================================
+ Hits         5542     5551       +9     
- Misses       7372     7487     +115     
Files Coverage Δ
node/src/chains/common.rs 100.00% <100.00%> (ø)
node/src/chains/mod.rs 100.00% <100.00%> (ø)
node/src/cli.rs 0.00% <ø> (ø)
node/src/da_block_import.rs 0.00% <ø> (ø)
pallets/bridges/nomad/home/src/benchmarking.rs 100.00% <ø> (ø)
pallets/dactr/src/lib.rs 80.48% <100.00%> (-4.30%) ⬇️
pallets/dactr/src/mock.rs 96.61% <100.00%> (-0.48%) ⬇️
pallets/system/src/header_builder.rs 84.74% <100.00%> (-0.13%) ⬇️
pallets/system/src/offchain.rs 76.58% <ø> (ø)
runtime/src/apis.rs 0.00% <ø> (ø)
... and 8 more

... and 1 file with indirect coverage changes


@fmiguelgarcia fmiguelgarcia added the CI Improvements or additions to CI label Oct 31, 2023
@fmiguelgarcia fmiguelgarcia changed the title [WIP] Header Generation Benchmark in CI Header Generation Benchmark in CI Oct 31, 2023
@fmiguelgarcia
Collaborator Author

As you suggested, I've also added Criterion benchmarking. It is executed on our AWS reference machine, in order to limit divergence between runs.
I initially added IAI because its execution under Cachegrind suits the CI well, thanks to the stability of its measurements regardless of the current machine load. Please take a look at the Criterion documentation about IAI.
In any case, the benchmark GitHub action supports several benchmarking tools, and it is really easy to add new ones.
For example, we could also add Substrate weight benchmarking, which would guarantee no performance regression in the cost of extrinsics between releases.

att. @markopoloparadox
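For reference, the kind of workflow step involved looks roughly like the sketch below. It assumes the `benchmark-action/github-action-benchmark` action; the step names, bench target, and file paths are illustrative, not the exact ones in this PR:

```yaml
- name: Run Criterion benchmarks
  run: cargo bench --bench kate_commitment -- --output-format bencher | tee bench.txt

- name: Compare against stored baseline
  uses: benchmark-action/github-action-benchmark@v1
  with:
    tool: 'cargo'
    output-file-path: bench.txt
    # Fail the job on regressions greater than 15% (i.e. 115% of the baseline).
    alert-threshold: '115%'
    fail-on-alert: true
```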


@markopoloparadox markopoloparadox merged commit 33f7951 into main Nov 4, 2023
10 checks passed
@fmiguelgarcia fmiguelgarcia deleted the miguel/bench_ci branch November 6, 2023 08:43