Pytest Framework for Model Testing #30

Open
jdkleiner opened this issue Apr 4, 2023 · 0 comments

Pytest is a Python testing framework used for building automated tests.

  • Pytest can be used for unit testing. Unit tests focus on individual source-code units (usually functions) to verify they behave as intended; this is often used in conjunction with continuous integration (CI) and code coverage.
  • Pytest can also be used for testing models. For our purposes (checking that a model run succeeded), we need "sanity checks" rather than conventional unit tests, since the output of every model run will be different.
    • We can build tests which check whether model results meet certain criteria
  • Things we may want to test (a rough sketch of such tests follows this list):
    • is Qout a positive float value?
    • is wd_mgd a positive float value?
    • does the number of timesteps match the expected count?
    • is the water balance sound?
    • ...
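A minimal sketch of what these sanity checks could look like as pytest functions is below. Everything specific in it is an assumption for illustration: the CSV path, the column names (Qout, wd_mgd, Qin), and the expected record count are placeholders, not the actual contents of hydr_test.py.

```python
# Illustrative sketch only -- the path, column names, and expected record
# count are placeholders, not the real hydr_test.py.
import pandas as pd
import pytest

EXPECTED_TIMESTEPS = 8760  # placeholder: e.g. one year of hourly records


@pytest.fixture(scope="module")
def hydr():
    # Placeholder path; the real test would build this from scenario, seg, and CBP_EXPORT_DIR
    return pd.read_csv("hydr_output.csv")


def test_for_positive_qout(hydr):
    # Qout should be a float column with no negative values
    assert pd.api.types.is_float_dtype(hydr["Qout"])
    assert (hydr["Qout"] >= 0).all()


def test_for_positive_withdrawal(hydr):
    assert (hydr["wd_mgd"] >= 0).all()


def test_for_number_of_records(hydr):
    assert len(hydr) == EXPECTED_TIMESTEPS


def test_water_balance(hydr):
    # Placeholder balance check (terms and units are assumptions):
    # inflow - withdrawal - outflow should be near zero on average
    residual = hydr["Qin"] - hydr["wd_mgd"] - hydr["Qout"]
    assert residual.abs().mean() == pytest.approx(0, abs=1e-3)
```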

Process for setting up tests

  • pip install pytest (I've done this on deq2)
  • tests can live in meta_model/tests/[file_containing_test].py
  • Initial testing: created a new branch, auto_tests, containing the file hydr_test.py

Initial testing:

  • Manually running the tests for a single rseg
  • Note: the -m flag ensures you are using the pytest package tied to the active Python executable
jkleiner@deq2:/opt/model/p6/vadeq$ python3 -m pytest ../../meta_model/tests/hydr_test.py -W ignore::DeprecationWarning -v
=========================================================== test session starts ===========================================================
platform linux -- Python 3.8.10, pytest-7.2.2, pluggy-1.0.0 -- /usr/bin/python3
cachedir: .pytest_cache
rootdir: /opt/model
collected 4 items                                                                                                                         

../../meta_model/tests/hydr_test.py::test_for_positive_qout PASSED                                                                  [ 25%]
../../meta_model/tests/hydr_test.py::test_for_positive_withdrawal PASSED                                                            [ 50%]
../../meta_model/tests/hydr_test.py::test_for_number_of_records PASSED                                                              [ 75%]
../../meta_model/tests/hydr_test.py::test_water_balance PASSED                                                                      [100%]

============================================================ 4 passed in 0.62s ============================================================

Remaining development needed:

  • Try passing parameters into hydr_test.py (right now scenario, seg, and CBP_EXPORT_DIR are hard-coded for testing); see the conftest.py sketch after this list
  • Where/when should the tests get executed?
    • One option: trigger the tests to run somewhere in meta_model/models/hsp2_cbp6/river/analyze/
    • The params can be passed to the tests similar to 04_hydr_metrics
  • What to do with the test results (i.e. PASSED or FAILED)?
    • Just have the results output to the console?
    • Store results in a text file whose path is saved as a property on the rseg run scenario?
    • Store the outcome of each test as an individual property on the run scenario?
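One possible way to remove the hard-coded values is pytest's command-line option hook in a conftest.py, sketched below. The option names, fixture names, and file location are assumptions, not an existing implementation.

```python
# meta_model/tests/conftest.py -- sketch only; option and fixture names are assumptions
import pytest


def pytest_addoption(parser):
    # Register command-line options so the analyze step can pass run-specific values
    parser.addoption("--scenario", action="store", default=None, help="model scenario name")
    parser.addoption("--seg", action="store", default=None, help="river segment (rseg) id")
    parser.addoption("--export-dir", action="store", default=None, help="CBP_EXPORT_DIR path")


@pytest.fixture(scope="session")
def scenario(request):
    return request.config.getoption("--scenario")


@pytest.fixture(scope="session")
def seg(request):
    return request.config.getoption("--seg")


@pytest.fixture(scope="session")
def export_dir(request):
    return request.config.getoption("--export-dir")
```

The tests in hydr_test.py could then take scenario, seg, and export_dir as fixture arguments and build the output file path from them, and the analyze step would invoke something like python3 -m pytest ../../meta_model/tests/hydr_test.py --scenario <scenario> --seg <rseg> --export-dir $CBP_EXPORT_DIR. For persisting the results, pytest's built-in --junitxml=<path> option writes a report file whose path could be stored as a property on the run scenario.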

@rburghol
