Where do we store the output of the benchmark for the current main? #73

Closed
bentaculum opened this issue Oct 31, 2023 · 3 comments
Labels
question Further information is requested

Comments

@bentaculum
Contributor

I'm wondering where we're storing the benchmark results for the current main; I'm a bit lost.
I would like to look at the table that the benchmark workflow produces, supposedly named output.json: https://github.com/Janelia-Trackathon-2023/traccuracy/blob/main/.github/workflows/benchmark-report.yml

Things I do find:

bentaculum added the question label on Oct 31, 2023
@cmalinmayor
Collaborator

@msschwartz21 I think this is for you

@msschwartz21
Collaborator

The benchmarking data from the workflows that run on main is saved in data.js on the gh-pages branch. For example, I think this is the start of the data entry from the most recent merge, #59:

"commit": {
"author": {
"email": "[email protected]",
"name": "Caroline Malin-Mayor",
"username": "cmalinmayor"
},
"committer": {
"email": "[email protected]",
"name": "GitHub",
"username": "web-flow"
},
"distinct": true,
"id": "ed2b7b111346cf8deef0e03bb7d68754cfd3fa84",
"message": "Merge pull request #59 from bentaculum/faster_edge_errors\n\nSpeed up CTC edge errors",
"timestamp": "2023-10-30T10:14:34-04:00",
"tree_id": "8ea5f46158d6cfffd4e6a52ad69a56ee976f7be2",
"url": "https://github.com/Janelia-Trackathon-2023/traccuracy/commit/ed2b7b111346cf8deef0e03bb7d68754cfd3fa84"
},
"date": 1698690262813,
"tool": "pytest",
"benches": [
{
"name": "tests/bench.py::test_load_gt_data",
"value": 0.4388150216862543,
"unit": "iter/sec",
"range": "stddev: 0",
"extra": "mean: 2.27886455700002 sec\nrounds: 1"
},
{
"name": "tests/bench.py::test_load_pred_data",
"value": 0.5370584895997171,
"unit": "iter/sec",
"range": "stddev: 0",
"extra": "mean: 1.8619945859999802 sec\nrounds: 1"
},
{
"name": "tests/bench.py::test_ctc_matched",
"value": 0.2849398661325083,
"unit": "iter/sec",
"range": "stddev: 0",
"extra": "mean: 3.509512423000018 sec\nrounds: 1"
},
{
"name": "tests/bench.py::test_ctc_metrics",
"value": 1.1219872327299945,
"unit": "iter/sec",
"range": "stddev: 0",
"extra": "mean: 891.2757390000081 msec\nrounds: 1"
},
{
"name": "tests/bench.py::test_ctc_div_metrics",
"value": 1.9930738868734224,
"unit": "iter/sec",
"range": "stddev: 0.07131322148458548",
"extra": "mean: 501.73754549999217 msec\nrounds: 2"
},
{
"name": "tests/bench.py::test_iou_matched",
"value": 0.047679493896834735,
"unit": "iter/sec",
"range": "stddev: 0",
"extra": "mean: 20.973376985999977 sec\nrounds: 1"
},
{
"name": "tests/bench.py::test_iou_div_metrics",
"value": 1.900133877162481,
"unit": "iter/sec",
"range": "stddev: 0.0630087197371194",
"extra": "mean: 526.2787070000172 msec\nrounds: 2"
}
The same data is displayed in the "Store benchmark results" part of the action.

This action doesn't explicitly generate any kind of diff table like we do on the PR workflow, but that's something we could add if needed.
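
For anyone who wants to look at these numbers without digging through gh-pages by hand, here's a minimal sketch of fetching data.js, printing the latest entry, and computing a rough speedup ratio against the previous entry (in the spirit of the diff table mentioned above). It assumes the usual github-action-benchmark layout, i.e. a window.BENCHMARK_DATA = {...} assignment; the dev/bench/data.js path and the helper names are hypothetical, so adjust the URL to wherever the file actually lives on the gh-pages branch.

import json
import urllib.request

# Raw URL of data.js on the gh-pages branch. The "dev/bench" path is the
# github-action-benchmark default and is only a guess here; adjust it to
# wherever the file actually lives.
DATA_JS_URL = (
    "https://raw.githubusercontent.com/Janelia-Trackathon-2023/traccuracy/"
    "gh-pages/dev/bench/data.js"
)

def load_benchmark_data(url=DATA_JS_URL):
    """Fetch data.js and strip the `window.BENCHMARK_DATA = ...` wrapper."""
    text = urllib.request.urlopen(url).read().decode("utf-8")
    payload = text.split("=", 1)[1].strip().rstrip(";")
    return json.loads(payload)

def print_latest(data):
    """Print the benches of the most recent entry in each benchmark suite."""
    for suite, entries in data.get("entries", {}).items():
        latest = entries[-1]  # entries are appended in commit order
        print(f"{suite} @ {latest['commit']['id'][:8]}")
        for bench in latest["benches"]:
            print(f"  {bench['name']}: {bench['value']:.4f} {bench['unit']}")

def diff_latest_two(data):
    """Rough speedup ratio of the newest entry vs. the one before it."""
    for suite, entries in data.get("entries", {}).items():
        if len(entries) < 2:
            continue
        previous = {b["name"]: b["value"] for b in entries[-2]["benches"]}
        for bench in entries[-1]["benches"]:
            old = previous.get(bench["name"])
            if old:
                # values are iter/sec, so a ratio > 1 means the benchmark got faster
                print(f"  {bench['name']}: {bench['value'] / old:.2f}x")

if __name__ == "__main__":
    data = load_benchmark_data()
    print_latest(data)
    diff_latest_two(data)

Something along those lines could also be wired into the main-branch workflow itself if we ever want an explicit diff table there.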

@bentaculum
Contributor Author

Ok, thanks. I think what we have in place is already great and will help us avoid introducing slowdowns. If we end up working a lot on improving performance, we might want to come back to this.
