Add a benchmark for the main function (detect+classify).

For it, try fetching larger data (larger than the test data in the repo, but not as large as a 100% realistic scenario) until the benchmarking time is reasonable:

- increase the size of the data in the benchmarks gradually
- use GIN and fetch with pooch - this PR may be a good example to follow (see the sketch after this list)
- the smallest realistic dataset is ~100 GB
- a benchmark run should typically take ~10 minutes?
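As a rough illustration of how these pieces could fit together, here is a minimal sketch of an asv-style benchmark that fetches a dataset from GIN with pooch and times the detect+classify entry point. The GIN URL, file names, hashes, and the `cellfinder.core.main.main` import are assumptions for illustration, not taken from this issue; substitute the real dataset and entry point.

```python
"""Sketch of an asv benchmark for detect+classify.

Assumptions (not confirmed by this issue): the data live on GIN at the
placeholder URL below, are TIFF stacks, and the package exposes a
cellfinder-core-style `main(signal, background, voxel_sizes)` entry point.
"""
import pooch
import tifffile

# Hypothetical GIN registry; replace the org/dataset and hashes with real ones.
DATA = pooch.create(
    path=pooch.os_cache("cellfinder-benchmarks"),
    base_url="https://gin.g-node.org/<org>/<dataset>/raw/master/",
    registry={
        "signal.tif": "sha256:<hash-of-signal-stack>",
        "background.tif": "sha256:<hash-of-background-stack>",
    },
)


class TimeDetectAndClassify:
    """asv convention: each `time_*` method is one timed benchmark."""

    timeout = 3600  # generous while we tune the data size toward ~10 min runs

    def setup(self):
        # Runs before each timed method, so download and disk I/O
        # (cached by pooch after the first run) are excluded from timing.
        self.signal = tifffile.imread(DATA.fetch("signal.tif"))
        self.background = tifffile.imread(DATA.fetch("background.tif"))

    def time_main(self):
        # Assumed entry point; adjust to wherever detect+classify lives.
        from cellfinder.core.main import main

        main(self.signal, self.background, voxel_sizes=(5, 2, 2))
```

Keeping the pooch fetch in `setup` means only detect+classify itself counts toward the timed section, which makes the "increase the data size until runs take ~10 minutes" loop easier to reason about.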
This issue was migrated from the cellfinder repo, where we initially approached the benchmarking work in a "modular" way (i.e., benchmarking individual functions rather than a whole workflow). These comments are slightly outdated now. I think the comments on the size of the data came from a discussion on how to determine what counts as a small/large dataset.