Testing Strategy

Marcos edited this page Jan 13, 2021 · 9 revisions

Testing Goals

  • Use both unit tests and system tests to provide comprehensive automated test coverage
    • Unit tests test units of code in isolation. Interaction of each unit with the rest of the system is mocked.
    • System tests test the code installed as it is used in production.
  • Minimize the barrier to entry for running tests
    • Unit tests:
      • Run on any OS
      • Do not need NI software to be installed
      • Run as part of the build
    • System tests:
      • a.k.a. Integration testing
      • Call into driver runtimes, thus need NI software to be installed
        • This limits OS support to Windows
      • Use device simulation, thus not requiring NI hardware
  • Avoid redundant coverage
    • Do not test driver runtimes, just our interaction with them
    • System tests do not re-test anything covered by unit tests

Unit tests

NI-FAKE

We have created metadata for a fake driver called NI-FAKE (Python module nifake). The metadata contains functions and attributes covering each scenario that the code generator needs to handle, such as:

  • Attributes for each type
  • Functions that return multiple values
  • Returning buffers of different types
  • Returning buffers using different memory allocation strategies
  • Functions with and without channel parameters
  • Etc.
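To make these scenarios concrete, here is a hypothetical sketch of what a single NI-FAKE function entry might look like. The field names and schema below are illustrative only, not the actual nimi-python metadata format:

```python
# Hypothetical metadata entry for an NI-FAKE function. The schema shown
# here is illustrative, not the actual nimi-python metadata format.
functions = {
    'ReadMultiPoint': {
        'returns': 'ViStatus',
        'parameters': [
            {'name': 'vi', 'type': 'ViSession', 'direction': 'in'},
            {'name': 'maximumTime', 'type': 'ViInt32', 'direction': 'in'},
            {'name': 'arraySize', 'type': 'ViInt32', 'direction': 'in'},
            # An output buffer exercises the code generator's
            # buffer-allocation handling.
            {'name': 'readingArray', 'type': 'ViReal64[]', 'direction': 'out',
             'size': {'mechanism': 'passed-in', 'value': 'arraySize'}},
            {'name': 'actualNumberOfPoints', 'type': 'ViInt32',
             'direction': 'out'},
        ],
    },
}
```

A function like this covers several list items at once: multiple return values, an output buffer, and a caller-allocated memory strategy.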

The build generates NI-FAKE Python bindings just like it does for NI-DMM or NI-SWITCH. The purpose of the NI-FAKE Python bindings is to give us a single driver we can unit test that captures all the scenarios the code generator must be able to handle.

This means that testing the Python bindings for the rest of the drivers is redundant. It also means that we can use code-coverage tools on NI-FAKE unit tests, because the metadata will be succinct yet complete.

mock_helper.py

As part of our code generation, a class is generated for each driver that handles the function calls into the driver runtime. This class is _library.Library and uses ctypes in its implementation. This object is what we mock, or "stub out", when unit testing.

To aid this, we also generate helper code in mock_helper.py. It simplifies setting up expectations and side effects when mocking _library.Library.
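As a minimal sketch of this pattern, here is the idea using unittest.mock directly rather than the generated mock_helper.py helpers. FakeSession below is a hypothetical stand-in for a generated session class, not actual nimi-python code:

```python
# Minimal sketch of stubbing the library layer. FakeSession is a
# hypothetical stand-in for a generated session class; the real
# generated code wraps the ctypes-backed _library.Library similarly.
from unittest import mock


class FakeSession:
    """Stand-in for a generated session class that calls into the library."""

    def __init__(self, library):
        self._library = library  # normally a ctypes-backed _library.Library

    def self_test(self):
        status = self._library.SelfTest()
        if status != 0:
            raise RuntimeError('driver error {0}'.format(status))
        return status


# In a unit test, the library is replaced by a mock so no driver runtime
# is needed; expectations and side effects are set on the mock.
library_mock = mock.MagicMock()
library_mock.SelfTest.return_value = 0

session = FakeSession(library_mock)
assert session.self_test() == 0
library_mock.SelfTest.assert_called_once_with()
```

The generated mock_helper.py plays the same role as the hand-written expectation setup here, but tailored to each driver's library signatures.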

System Tests

System tests are to be written for all the supported Modular Instruments drivers. Their intent is to validate the corresponding Python bindings.

They should:

  • Prove that bindings can load the driver runtime
  • Test that the correct calls make it to the driver runtime
  • Use simulated devices so that there's no requirement to have hardware in order to run
  • Verify we have no errors in function metadata
    • Function calls go through correctly
    • Signature of public Python API is correct
  • Attribute metadata does not need to be validated here
    • It is fully code-generated
    • It is provided by NI
    • It is validated by NI's internal testing of the drivers

Crucially, system tests should not be testing the behavior of the underlying driver runtime. This is outside their responsibility.

Unit tests for the code generator and for the generated NI-FAKE driver bindings run automatically when a pull request is opened. We use the Travis CI service for this.

System tests for all supported drivers are also run when a pull request is opened. The results are posted to the pull request "checks" section. These tests run automatically on nimi-bot. You can also run system tests manually; the system they are executed on must have the driver runtime installed.

python3 -m pytest src/nidmm/system_tests/