
Some testing ideas / principles #54

Closed
tommylees112 opened this issue Mar 1, 2021 · 1 comment

@tommylees112
Contributor

tommylees112 commented Mar 1, 2021

Ref: PR 53: Pytest to environment.yml

First thing to say is that you probably don't need to write much more code. The notebooks already contain the test principles; tests are just an automated way to run them before you push changes to master etc.

My main motivation for writing tests is that they allow me to catch silent errors, quickly identify what has gone wrong, and jump into arbitrary points in the pipeline by putting in an assert False statement and using pdb (might not be best practice, but I find it's fast). e.g.

pytest --pdb .
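For example, a hypothetical test where you could drop an assert False to land in pdb (the file, function, and data here are all made up for illustration):

```python
# test_pipeline.py -- illustrative sketch; `load_data` and `preprocess`
# are stand-in names, not real functions from this repo
import numpy as np


def load_data():
    # stand-in for a real data-loading step
    return np.arange(12, dtype=float).reshape(3, 4)


def preprocess(data):
    # stand-in for a real preprocessing step: standardise the values
    return (data - data.mean()) / data.std()


def test_preprocess():
    data = load_data()
    out = preprocess(data)
    # put `assert False` here and run `pytest --pdb .` to drop into pdb
    # with `data` and `out` in scope for inspection
    assert out.shape == data.shape
```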

Key things to test:

  • Shapes of input and output datasets match expectations
  • Missing values (they need to be found and dealt with)
  • Value Ranges are within expected bounds (e.g. probability 0-1)
  • Toy datasets that are very small (and allow the tests to run quickly) are really important to pass through the pipeline.
  • In the past I have written assertions that the model is "learning", i.e. that losses fall after 1 or 2 epochs (though I'm not sure this is best practice, because with SGD there's a random chance this won't be the case: you expect the loss to fall, but it's not guaranteed)
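A minimal sketch of the first few points above, using a tiny made-up toy dataset and a stand-in model (all names here are illustrative, not from the actual codebase):

```python
import numpy as np


def make_toy_dataset(n_samples=8, n_features=3, seed=0):
    # tiny toy dataset so the test runs in milliseconds
    rng = np.random.default_rng(seed)
    return rng.normal(size=(n_samples, n_features))


def predict_proba(X):
    # stand-in for a real model: squash a linear score into [0, 1]
    scores = X.sum(axis=1)
    return 1.0 / (1.0 + np.exp(-scores))


def test_shapes_missing_and_ranges():
    X = make_toy_dataset()
    # shapes of the input dataset match expectations
    assert X.shape == (8, 3)
    # missing values are found (none expected in this toy data)
    assert not np.isnan(X).any()
    # value ranges are within expected bounds (probabilities in [0, 1])
    probs = predict_proba(X)
    assert probs.shape == (8,)
    assert ((probs >= 0.0) & (probs <= 1.0)).all()
```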

Things to remember:

  • Deterministically seed the random number generator
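With a fixed seed, even the "losses fall" check mentioned above becomes exactly repeatable. A sketch using a hand-rolled gradient-descent step on a made-up linear model (nothing here is from the actual codebase):

```python
import numpy as np


def train_linear(X, y, epochs=2, lr=0.1, seed=42):
    # deterministically seed the random number generator,
    # so weight initialisation (and hence the whole run) is reproducible
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])
    losses = []
    for _ in range(epochs):
        err = X @ w - y
        losses.append(float((err ** 2).mean()))
        w -= lr * 2 * X.T @ err / len(y)  # gradient step on the MSE loss
    return w, losses


def test_loss_falls_deterministically():
    rng = np.random.default_rng(0)
    X = rng.normal(size=(16, 2))
    y = X @ np.array([1.0, -2.0])
    _, losses = train_linear(X, y)
    # losses fall after a couple of epochs (expected here, not guaranteed in general)
    assert losses[-1] < losses[0]
    # and the run is exactly repeatable thanks to the fixed seed
    _, losses_again = train_linear(X, y)
    assert losses == losses_again
```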

Nice to haves:

  • I think typing is great as a documentation tool; it lets a user know whether a function is going to take or return a Tensor, a List, a Dict etc. Sometimes getting mypy to play nicely and not give any errors is a bit of a faff, so I would say it is more of a general principle than necessarily having all mypy checks passing. Ultimately, however, that is what you want.
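A sketch of what that typing looks like in practice (the function and its behaviour are made up; the point is what the signature documents):

```python
from typing import Dict, List

import numpy as np


def summarise_batches(batches: List[np.ndarray]) -> Dict[str, float]:
    # the signature alone tells the reader: a list of arrays goes in,
    # a dict mapping names to floats comes out
    stacked = np.concatenate(batches)
    return {"mean": float(stacked.mean()), "std": float(stacked.std())}
```

Running mypy over a file like this (e.g. `mypy test_pipeline.py`) will then flag callers that pass the wrong types, although as noted above, getting a completely clean run can take some effort.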

EDIT: removed the link to the hypothesis library, because by hiding the inputs (they're generated by the library) it makes it harder to use the tests as a point for developer understanding of what's going on; it's easier to just generate your own test examples.

@tommylees112
Contributor Author

(I did just find the hypothesis library now, so this may be complete overkill)

@jejjohnson jejjohnson pinned this issue Mar 1, 2021