This repository has been archived by the owner on Jan 3, 2024. It is now read-only.

asv benchmarks for imports and tools modules #184

Merged 36 commits into main from smg/basic-asv-benchmark on Jul 24, 2023

Conversation

@sfmig (Contributor) commented on Jun 30, 2023

This PR adds a suite of asv benchmarks that time the package's main imports and several functions from the tools module. The benchmarks are defined as methods of a class, with shared setup and teardown functions.
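
For context, the classes follow asv's standard conventions: asv discovers methods named time_* and times them, calling setup() before and teardown() after each. A minimal sketch, using a hypothetical benchmark rather than one from this PR:

```python
import numpy as np


class TimeExampleTool:
    # Hypothetical asv benchmark class illustrating the pattern:
    # asv times each `time_*` method, running `setup` before and
    # `teardown` after it.

    def setup(self):
        # Build common inputs for the timed runs.
        self.data = np.zeros((100, 100, 100), dtype=np.uint16)

    def time_convert(self):
        # The timed body: asv reports the runtime of this call.
        self.data.astype(np.float32)

    def teardown(self):
        # Release the test data between runs.
        del self.data
```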

Basic usage

  • First, install asv in the desired environment: pip install asv
  • To run all benchmarks locally, navigate to the directory where the asv config file is (cellfinder-core/benchmarks/asv.conf.json) and execute: asv run
  • To run a subset of benchmarks (e.g., the tensorflow prep benchmarks): asv run --bench tools.prep.PrepTF
  • The results will be saved as JSON files in a newly created benchmarks/results directory. These can be visualised as a static website by first running asv publish and then asv preview.

For further details on usage and useful commands, see the benchmarks/README.md file.
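
Putting the commands from the list above together, the full local workflow is:

```sh
pip install asv                     # in the desired environment
cd cellfinder-core/benchmarks       # directory containing asv.conf.json
asv run                             # run all benchmarks
asv run --bench tools.prep.PrepTF   # or only a subset
asv publish                         # build the static site from the results
asv preview                         # serve it locally
```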

Structure

The structure follows the approach of the astropy-benchmarks repo, in which the benchmark modules roughly mimic the package modules. The numpy benchmarks follow a slightly different structure, but they are also a useful reference.
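
Piecing together the files touched in this PR, the layout is roughly:

```
benchmarks/
├── asv.conf.json
├── README.md
├── mem_benchmarks/        # pre-existing memory benchmarks (see below)
└── benchmarks/
    ├── imports.py
    └── tools/
        ├── IO.py
        └── prep.py
```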

mypy fixes

To avoid mypy errors:

  • I added one of the benchmark files to the mypy exclude section in the pre-commit config, and
  • added cellfinder_core.tools.prep.* to the list of modules whose import errors are ignored (see the sketch below).
    I'm not sure if this is best practice, but I couldn't find a better way around it. Happy to get feedback on it!
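
For the second bullet, mypy's standard per-module override mechanism looks something like this (a sketch; the exact entry in this repo's config may differ):

```toml
# Sketch of a standard mypy per-module override in pyproject.toml;
# the actual entry in the repo may differ.
[[tool.mypy.overrides]]
module = ["cellfinder_core.tools.prep.*"]
ignore_missing_imports = true
```

(The first bullet is just an exclude: pattern on the mypy hook in the pre-commit config.)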

Existing benchmarks

I moved a set of previous memory benchmarks written by @dstansby to a subdir in benchmarks called mem_benchmarks.

@alessandrofelder (Member) left a comment


This is a great start! 🎉
I've managed to run the benchmarks locally with ease, thanks to the instructions 🙏

I've tried to give sensible answers to the great questions 😅 I'll feed back on the mypy part separately after some more investigations 🤔

I think the key thing missing is a benchmark for the main function (as discussed)?

IIUC these benchmarks are not part of the CI yet - is it worth opening an issue for that (or is there one already) to discuss the details of what we should compare benchmarks to? (And that seems like a good next step after this?)

Minor point: I wonder whether, instead of putting questions for reviewers into the code itself and risking that we inadvertently merge them, they should in future be added as comments on the PR? (The disadvantage would be that it's more annoying for the person opening the PR to keep track of the questions... don't know.)

Review comments (since resolved) on: benchmarks/README.md, benchmarks/benchmarks/tools/IO.py, benchmarks/benchmarks/imports.py, and benchmarks/benchmarks/tools/prep.py.
@adamltyson (Member) commented

Thanks @sfmig!

Minor point: I wonder whether, instead of putting questions for reviewers into the code itself and risking that we inadvertently merge them, they should in future be added as comments on the PR?

Yes, please add all comments to the PR itself rather than in the code, and create issues rather than TODO comments etc. It's a bit of a pain for the developer, but it makes things much easier for everyone else to keep track of.

@alessandrofelder (Member) commented

On mypy I've done some thinking and discussing with @willGraham01 (thanks!). We understood that the import errors are caused by the benchmarks code being outside the cellfinder_core folder, so mypy treats cellfinder_core like a third-party library.

I would suggest ignoring any import errors happening in benchmarks/ by excluding that directory in pyproject.toml (until we fix brainglobe/cellfinder#278, which is not high-priority).
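
Concretely, that would be something like this (a sketch using mypy's standard exclude setting; the exact pattern may differ):

```toml
[tool.mypy]
# Skip type-checking (and hence import resolution) for everything
# under benchmarks/.
exclude = "^benchmarks/"
```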

@sfmig (Contributor, Author) left a comment


Thanks for the feedback @alessandrofelder! Sorry that this PR turned out a bit longer than expected.

I think the key thing missing is a benchmark for the main function (as discussed)?

Yes, I opened an issue for it: #206

IIUC these benchmarks are not part of the CI yet - is it worth opening an issue for that (or is there one already) to discuss the details of what we should compare benchmarks to? (And that seems like a good next step after this?)

Agreed! This is now issue #208. (I added these as cellfinder-core issues for now, with a view to expanding this work to other brainglobe tools later.)

Minor point: I wonder whether, instead of putting questions for reviewers into the code itself and risking that we inadvertently merge them, they should in future be added as comments on the PR?

Yes, that sounds like a better approach. Maybe opening a draft PR as soon as possible and adding my questions to its main description as they come up is the least cumbersome option? I'll have a go. I'll also aim to open issues rather than TODOs, thanks @adamltyson.

Review comments (since resolved) on: benchmarks/README.md, MANIFEST.in, benchmarks/benchmarks/imports.py, and benchmarks/benchmarks/tools/IO.py.
@alessandrofelder (Member) left a comment


Awesome, @sfmig - thank you!

@alessandrofelder alessandrofelder merged commit 90c6c25 into main Jul 24, 2023
15 of 16 checks passed
@alessandrofelder alessandrofelder deleted the smg/basic-asv-benchmark branch July 24, 2023 09:02