
Add benchmark results processing and deployment #1

Open · wants to merge 3 commits into main from process-benchmark-results
Conversation

@cz4rs (Contributor) commented on Jan 5, 2023:

fixes kokkos/kokkos#5464

Process benchmark results and deploy generated pages.
Deployment targets deploy-benchmarks branch of this repository and should make the webpage appear at https://kokkos.github.io/kokkos-benchmark-results/.

  • Repository settings to enable GitHub Pages deployment:

    [screenshot: repository settings for GitHub Pages deployment]
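For orientation, a minimal sketch of what the workflow described above could look like. This is a hypothetical reconstruction based only on this PR's description, not the actual workflow file; the trigger, entry point (`benchmark_monitor.py`), output directory, and the third-party `peaceiris/actions-gh-pages` deployment action are all assumptions:

```yaml
# Hypothetical sketch, not the workflow file from this PR.
name: Process benchmark results
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Generate pages from benchmark results
        run: |
          pip install -r requirements.txt
          python benchmark_monitor.py    # assumed entry point
      - name: Deploy to the deploy-benchmarks branch
        uses: peaceiris/actions-gh-pages@v3   # third-party action, assumed here
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_branch: deploy-benchmarks
          publish_dir: ./output              # assumed output directory
```

With GitHub Pages configured to serve from the `deploy-benchmarks` branch (as shown in the settings screenshot), the generated pages would then appear at the URL above.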

@cz4rs cz4rs force-pushed the process-benchmark-results branch 8 times, most recently from c0cabbf to 1ef4975 Compare January 11, 2023 12:22
```yaml
      - name: Checkout processing script
        uses: actions/checkout@v3
        with:
          repository: cz4rs/benchmark_monitor
```
@cz4rs (Contributor, Author) suggested a change:

```diff
-          repository: cz4rs/benchmark_monitor
+          repository: kokkos/benchmark_monitor
```

@cz4rs (Contributor, Author):

Original script: https://github.com/bensanmorris/benchmark_monitor
I have modified and customized it for our use case here: https://github.com/cz4rs/benchmark_monitor

I propose that we fork the original repository, and I will deliver my changes via a pull request.


I agree, forking this inside kokkos is the best option.

@cz4rs (Contributor, Author):

Looking back at this, I think this whole step should be removed and the benchmark_monitor.py script (plus the templates and requirements.txt) should be added to a subdirectory in this repo.
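For context, a minimal sketch of the kind of processing such an in-repo script might do. This is a hypothetical illustration, not code from `benchmark_monitor.py`; it assumes the standard Google Benchmark `--benchmark_format=json` output layout (a top-level `"benchmarks"` array with `run_type` and `real_time` fields):

```python
import json

def summarize(json_text):
    """Parse Google Benchmark JSON output into {benchmark name: real_time}.

    Hypothetical sketch: assumes the standard --benchmark_format=json layout
    with a top-level "benchmarks" array; aggregate entries (mean/median/
    stddev) are skipped so only raw iteration results remain.
    """
    data = json.loads(json_text)
    results = {}
    for bench in data.get("benchmarks", []):
        if bench.get("run_type") == "aggregate":
            continue  # skip computed aggregates, keep raw iterations
        results[bench["name"]] = bench["real_time"]
    return results

if __name__ == "__main__":
    sample = """
    {
      "benchmarks": [
        {"name": "BM_Axpy/1024", "run_type": "iteration", "real_time": 12.5},
        {"name": "BM_Axpy/1024_mean", "run_type": "aggregate", "real_time": 12.7}
      ]
    }
    """
    print(summarize(sample))
```

A per-benchmark summary like this could then be fed into the page templates mentioned above to plot results over time.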

@cz4rs cz4rs force-pushed the process-benchmark-results branch 4 times, most recently from 95d36dc to 8e42a88 Compare January 11, 2023 12:36
@cz4rs cz4rs marked this pull request as ready for review January 11, 2023 15:55
@cz4rs cz4rs force-pushed the process-benchmark-results branch 3 times, most recently from f7a5d2e to 886bbe5 Compare January 11, 2023 16:33
@cz4rs cz4rs requested a review from dalg24 January 11, 2023 16:34
@cz4rs cz4rs force-pushed the process-benchmark-results branch 4 times, most recently from 18ccb52 to ded96e0 Compare January 13, 2023 18:15
@cz4rs cz4rs changed the title Add results processing Add benchmark results processing and deployment Jan 13, 2023
README.md (conversation resolved)
@fnrizzi left a comment:

see my minor comment

@cz4rs cz4rs requested a review from fnrizzi February 7, 2023 12:47
@cz4rs cz4rs force-pushed the process-benchmark-results branch 2 times, most recently from 9808512 to 82ad433 Compare February 8, 2023 12:59
@cz4rs cz4rs force-pushed the process-benchmark-results branch from 788b20d to 713c54b Compare March 8, 2023 19:45
@cz4rs cz4rs force-pushed the process-benchmark-results branch from 713c54b to 20181d8 Compare April 21, 2023 15:28
@janciesko commented:

Did you ever get back to this?

@cz4rs (Contributor, Author) commented on Oct 15, 2024:

> Did you ever get back to this?

I haven't tested it since the last push (Apr 2023), but functionally this was ready to go at the time. The only change I would make is to add the benchmark_monitor.py script to this repository instead of checking it out from a separate repo. The demo at https://cz4rs.github.io/kokkos-benchmark-results/ is still available.

The main drawback is that results from GitHub runners are not that great (they are not an isolated environment, so performance measurements are unreliable). IIRC there were talks about running something similar on Jenkins (with proper isolation and CUDA enabled).

Development

Successfully merging this pull request may close these issues.

Tracking performance testing - collect and analyze results of benchmark runs
3 participants