Testing Strategy

Marcos edited this page Oct 31, 2017 · 9 revisions

Testing Goals

  • Use both unit tests and system tests to provide comprehensive automated test coverage
    • Unit tests test units of code in isolation. Interaction of each unit with the rest of the system is mocked.
    • System tests test the code installed as it is used in production.
  • Minimize the barrier to entry for running tests
    • Unit tests:
      • Run on any OS
      • Do not need NI software to be installed
      • Run as part of the build
    • System tests:
      • a.k.a. Integration testing
      • Call into driver runtimes, thus need NI software to be installed
        • This limits OS support to Windows
      • Use device simulation, thus not requiring NI hardware
  • Avoid redundant coverage
    • Do not test driver runtimes, just our interaction with them
    • System tests do not re-test anything covered by unit tests
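The unit-test approach described above can be illustrated with a toy example. Note that the function and the `measure` call below are invented for illustration and are not part of the nimi-python API; the point is that the unit under test only touches the runtime through an object we can replace with a mock, so the test runs on any OS with no NI software installed.

```python
from unittest import mock

# Toy unit under test: our own logic wrapped around a single runtime call.
def read_voltage(runtime):
    raw = runtime.measure()
    return round(raw, 3)

# The runtime is mocked, so no driver runtime (and no hardware) is needed.
runtime = mock.Mock()
runtime.measure.return_value = 1.23456

assert read_voltage(runtime) == 1.235
runtime.measure.assert_called_once_with()
```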

Unit tests

NI-FAKE

We have created metadata for a fake driver called NI-FAKE (Python module nifake). The metadata contains functions and attributes covering each scenario the code generator needs to handle, such as:

  • Attributes for each type
  • Functions that return multiple values
  • Returning buffers of different types
  • Returning buffers using different memory allocation strategies
  • Functions with and without channel parameters
  • Etc.
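As a purely hypothetical illustration, a metadata entry for the "functions that return multiple values" scenario might look something like the dictionary below. The field names and the `GetANumber` function are invented for this sketch; the actual metadata schema lives in the repository's source and may differ.

```python
# Hypothetical NI-FAKE metadata entry; field names are illustrative,
# not the repository's actual schema.
GET_A_NUMBER = {
    'name': 'GetANumber',
    'returns': 'ViStatus',
    'parameters': [
        {'name': 'vi',      'type': 'ViSession', 'direction': 'in'},
        {'name': 'aNumber', 'type': 'ViInt16',   'direction': 'out'},
    ],
}
```

The code generator would consume entries like this to emit both the public Python API and the ctypes call into the driver runtime.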

The build generates NI-FAKE Python bindings just like it does for NI-DMM or NI-SWITCH. The purpose of the NI-FAKE Python bindings is to give us a single driver we can unit test that captures all the scenarios the code generator must be able to handle.

This means that testing the Python bindings for the rest of the drivers is redundant. It also means that we can use code-coverage tools on NI-FAKE unit tests, because the metadata will be succinct yet complete.

mock_helper.py

As part of our code generation, each driver gets a generated class that handles the function calls into the driver runtime. This class is library.Library, and it uses ctypes in its implementation. This object is what we mock, or "stub out", when unit testing.

To aid this, we also generate helper code in mock_helper.py, which simplifies setting up expectations and side effects when mocking library.Library.
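The pattern can be sketched as follows. The `Library` class and its `niFake_SimpleFunction` method below are stand-ins invented for this example (the real generated class has one method per driver entry point), but the mocking mechanics are standard `unittest.mock`:

```python
from unittest import mock

# Hypothetical stand-in for the generated ctypes wrapper class.
class Library:
    def niFake_SimpleFunction(self, vi):
        raise NotImplementedError("would call into the driver DLL")

def call_simple_function(lib, vi):
    # Code under test: touches the runtime only through `lib`.
    return lib.niFake_SimpleFunction(vi)

# autospec keeps the mock's signatures in sync with the real class.
lib = mock.create_autospec(Library, instance=True)

# Expectation: the call succeeds (0 == VI_SUCCESS).
lib.niFake_SimpleFunction.return_value = 0
assert call_simple_function(lib, 42) == 0
lib.niFake_SimpleFunction.assert_called_once_with(42)

# Side effect: simulate the runtime reporting an error.
lib.niFake_SimpleFunction.side_effect = RuntimeError("driver error")
try:
    call_simple_function(lib, 42)
except RuntimeError:
    pass  # the bindings' error path can now be tested without a runtime
```

The generated helpers in mock_helper.py reduce the boilerplate of wiring up expectations like these for every entry point.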

System Tests

System Tests are to be written for all the supported Modular Instruments drivers. Their intent is to validate that the corresponding Python bindings:

  • Can load the driver runtime
  • Can call into driver runtime correctly
  • Use simulated devices
  • Don't have any errors in function metadata
    • Function calls go through correctly
    • Signature of public Python API is correct
  • Attribute metadata does not need to be validated here
    • It is fully code-generated
    • It is provided by NI
    • It is validated by NI's internal testing of the drivers
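A hedged sketch of such a system test for the nidmm bindings is shown below. The model number and the exact contents of the option string are illustrative assumptions (check the NI-DMM documentation for the models your runtime supports); the key idea is that `Simulate=1` asks the driver runtime for a simulated device, so no hardware is required.

```python
# Illustrative option-string builder; the format shown is an assumption.
def simulation_options(model):
    """Build the option string that asks the driver for a simulated device."""
    return "Simulate=1, DriverSetup=Model:{}; BoardType:PXI".format(model)

def test_simulated_session_self_test():
    import pytest
    nidmm = pytest.importorskip("nidmm")  # needs the NI-DMM runtime (Windows)
    # With simulation enabled, this exercises a real call through the
    # bindings into the driver runtime without physical hardware.
    with nidmm.Session("Dev1", options=simulation_options("4072")) as session:
        session.self_test()
```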

Build and unit tests for all supported Python interpreters are run when a pull request is generated. We use the Travis CI service for this.

System tests are also run when a pull request is generated, and the results are posted to the pull request's "checks" section. These tests currently run on nimi-bot, a Windows system behind the National Instruments firewall, so results for failing tests must be manually posted to the PR that failed.

You can run system tests manually as well. The system they are executed on must have the driver runtime installed.

```
python3 -m pytest src/nidmm/system_tests/
```