Test for visualisations #147
Kirstie's suggestion during the meeting:
Actually, that's the same as the alternative solution I mentioned in the issue. It was agreed to use the first solution for scona, after some discussion of the pros and cons.
Heya,
I would like to note here a way to test the visualisations.
After some googling (1), (2), I've decided that a good solution would be to use a py.test plugin, nbval, to validate Jupyter notebooks.
Comparing cell output during testing against the output stored in the notebook would cause all cells to fail, because the stored output contains the memory address of the figure object (which is unique to each run, so it cannot be compared).
But we can make sure that the notebooks (with visualisations) run without errors.
The only drawback I see is adding a new package requirement: nbval.
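For concreteness, here is a minimal sketch of how the nbval approach could be wired into the test run, assuming the example notebook lives at tests/visualisations.ipynb (a hypothetical path, not an existing file in scona):

```python
# Minimal sketch: run the notebook through pytest with the nbval plugin.
# --nbval-lax executes every cell and fails if any cell raises an error,
# but does not compare cell outputs against those stored in the notebook
# (which would fail anyway, since figure reprs contain memory addresses).
import sys

import pytest

if __name__ == "__main__":
    exit_code = pytest.main(["--nbval-lax", "tests/visualisations.ipynb"])
    sys.exit(exit_code)
```

The same thing can be done directly from the Travis config by invoking `pytest --nbval-lax tests/visualisations.ipynb` alongside the existing test command.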
The alternative solution to consider is mentioned here; the idea is to add the following test:
Alternative test
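The snippet itself is collapsed above, but purely as an illustration of the general shape such a direct test might take, a smoke test could look like the sketch below (make_example_figure is a hypothetical stand-in, not scona's actual API):

```python
# Illustration only: a plain pytest smoke test for a plotting function.
# `make_example_figure` is a hypothetical stand-in for whichever scona
# visualisation function would be under test.
import matplotlib

matplotlib.use("Agg")  # headless backend, so no display is needed on Travis
import matplotlib.pyplot as plt


def make_example_figure():
    fig, ax = plt.subplots()
    ax.plot([0, 1, 2], [0, 1, 4])
    return fig


def test_visualisation_runs_without_error():
    fig = make_example_figure()
    # Only assert that a Figure comes back and no exception was raised;
    # pixel-level output is not compared, for the same reason notebook
    # outputs cannot be compared.
    assert isinstance(fig, plt.Figure)
    plt.close(fig)
```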
I prefer the first solution, i.e. using the py.test extension.
By the way, do we care about how long Travis takes to run the tests? In both cases I will create a Jupyter notebook with visualisations, but I cannot estimate how long this notebook needs to be. Should I include every possible way of calling each visualisation function, or would a few different calls (around 5-7) of every function be sufficient?
Sources: