
Add a mechanism for performance benchmarking #60

Closed
msschwartz21 opened this issue Sep 25, 2023 · 1 comment

msschwartz21 commented Sep 25, 2023

As mentioned in #59 and #57, it's currently difficult to identify changes that lead to performance regressions. I spoke to a labmate who suggested two possible packages for implementing some form of performance benchmarking.

However, there is a major caveat: running performance benchmarks as part of cloud-based CI can produce highly variable results and potential false positives, simply as a function of the compute resources available at the time of benchmarking.

My suggestion would be to implement a pytest-benchmark routine that runs on CI, but that we can also run locally whenever we are concerned that a particular change is causing a performance difference.
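For concreteness, here is a minimal sketch of what such a test could look like. The helper functions are hypothetical placeholders rather than this package's real API; the only real piece is the `benchmark` fixture that pytest-benchmark provides:

```python
# test_bench.py -- minimal pytest-benchmark sketch.
# `_make_synthetic_input` and `_routine_under_test` are hypothetical
# placeholders; swap in the actual routines we want to profile.

def _make_synthetic_input(n=10_000):
    """Build a deterministic synthetic input so timings are comparable."""
    return list(range(n))

def _routine_under_test(data):
    """Stand-in for the code path being profiled."""
    return sum(x * x for x in data)

def test_routine_speed(benchmark):
    data = _make_synthetic_input()
    # The `benchmark` fixture runs the callable repeatedly and records
    # timing statistics (min/mean/stddev) for the report.
    result = benchmark(_routine_under_test, data)
    assert result is not None  # also fail if the routine stops returning output
```

Locally, a baseline can be saved with `pytest --benchmark-autosave` and later runs compared against it with `pytest --benchmark-compare --benchmark-compare-fail=mean:10%` (the 10% threshold is just an example), which should help separate real regressions from CI noise.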

I'll probably take a stab at implementing something next week when I work with Caroline. Let me know if you have any thoughts on this approach, especially @DragaDoncila @bentaculum.

@msschwartz21 (Collaborator, Author)

Closed by #62 and #64
