We are defining workflows that are most representative of the users' experience, with the idea of benchmarking them.
However, some functions are not part of the basic workflows (e.g. loading the output XML file containing the result of the analysis).
Should we include these as simpler workflows here (like an XML-loading one), or should we benchmark these modules / functions individually (perhaps following the structure of the Python modules, as in our initial asv work)?
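For the second option, a standalone benchmark would follow the usual asv convention (a class with a `setup` method and `time_*` methods). A minimal sketch is below; the `mypackage.io.load_results` name and the fixture path are placeholders for illustration, not the project's actual API:

```python
# Minimal asv-style benchmark sketch for loading the output XML file.
# `mypackage.io.load_results` and the fixture path are hypothetical;
# substitute the real loading function and a representative file.

import mypackage.io


class XMLLoadingSuite:
    """Standalone XML-loading benchmark, outside the main workflows."""

    def setup(self):
        # Runs before each timing; point this at a real result fixture.
        self.xml_path = "fixtures/example_results.xml"

    def time_load_results(self):
        # asv times any method whose name starts with `time_`.
        mypackage.io.load_results(self.xml_path)
```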
It would be great if these could be benchmarked, but it's maybe not a top priority. If they're straightforward to do, though, then great. Ideally, as much as possible would be benchmarked in the repo (all the quick stuff), as with your previous asv work.