Testing Strategy

Project Overview

This project consists of:

  • A Mako-based code generator that, given Python metadata, creates Python bindings for an NI Modular Instruments driver (e.g. NI-SCOPE).
  • Per-driver metadata split into two parts (a sketch follows this list):
    • The first part is code-generated and supplied by NI using internal tools. While it is submitted to GitHub, it is not meant to be manually edited by developers.
    • The second part is "add-on" supplementary metadata that is specific to the Python bindings. It is manually created and edited by developers.
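To make the split concrete, here is a sketch of what the two kinds of metadata might look like. The file names, keys, and function name below are illustrative assumptions, not the project's actual schema.

    # functions.py - code-generated by NI's internal tools; not hand-edited
    functions = {
        'ReadWaveform': {
            'returns': 'ViStatus',
            'parameters': [
                {'direction': 'in', 'name': 'vi', 'type': 'ViSession'},
                {'direction': 'out', 'name': 'reading', 'type': 'ViReal64'},
            ],
        },
    }

    # functions_addon.py - hand-written add-on metadata specific to the Python bindings
    functions_addon = {
        'ReadWaveform': {'codegen_method': 'public'},
    }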

Testing Goals

  • Use both unit tests and system tests to provide comprehensive coverage
    • Unit tests test units of code in isolation. Interaction of each unit with the rest of the system is mocked.
    • System tests test the code installed as it is used in production.
  • Minimize the barrier to entry for running tests (example invocations follow this list)
    • Unit tests:
      • Run on any OS
      • Do not need NI software to be installed
      • Run as part of the build
    • System tests:
      • a.k.a. integration tests
      • Call into driver runtimes, thus need NI software to be installed
        • This limits OS support to Windows
      • Use device simulation, thus do not require NI hardware
  • Avoid redundant coverage
    • Do not test driver runtimes, just our interaction with them
    • System tests do not re-test anything covered by unit tests
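To illustrate how the two suites are invoked, the commands below show typical runs; the nifake unit-test path is an assumption based on the layout described later on this page.

    # Unit tests: run on any OS, no NI software required, run as part of the build
    python3 -m pytest src/nifake/unit_tests/

    # System tests: require Windows plus the driver runtime, but only simulated devices
    python3 -m pytest src/nidmm/system_tests/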

Unit tests

NI-FAKE

We have created metadata for a fake driver called NI-FAKE (Python module nifake). The metadata contains functions and attributes covering each scenario that the code generator needs to handle (see the sketch after this list), such as:

  • Attributes for each type
  • Functions that return multiple values
  • Returning buffers of different types
  • Returning buffers using different memory allocation strategies
  • Functions with and without channel parameters
  • Etc.
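The sketch below shows what a few of these scenarios can look like from the caller's side. The method names are illustrative only, not the actual generated API.

    import nifake

    with nifake.Session('FakeDevice') as session:
        # A function that returns multiple values
        reading, actual_count = session.read_multiple_values()
        # A function that returns a typed buffer
        waveform = session.fetch_waveform(number_of_samples=100)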

The build generates NI-FAKE Python bindings just like it does for NI-DMM or NI-SWITCH. The purpose of the NI-FAKE Python bindings is to give us a single driver we can unit test that captures all the scenarios the code generator must be able to handle.

This means that unit testing the Python bindings for the rest of the drivers would be redundant. It also means that we can meaningfully run code-coverage tools on the NI-FAKE unit tests, because the metadata is succinct yet complete.
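For example, coverage can be collected with the pytest-cov plugin; the paths here are assumptions:

    # Measure how much of the generated nifake code the unit tests exercise
    python3 -m pytest --cov=nifake src/nifake/unit_tests/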

mock_helper.py

As part of code generation, a class that handles the function calls into the driver runtime is generated for each driver. This class is library.Library, and it uses ctypes in its implementation. This object is what we mock, or "stub out", when unit testing.

To support this, we also generate helper code in mock_helper.py, which makes it easier to set up expectations and side effects when mocking library.Library.
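A minimal sketch of the pattern, using only the standard library's unittest.mock. The entry-point name niFake_SimpleFunction and the success status of 0 are illustrative assumptions; real tests use the generated mock_helper.py rather than raw mocks.

    import unittest.mock

    def test_simple_function_reports_success():
        # Stand-in for library.Library; its methods mirror the driver's C entry points.
        mock_library = unittest.mock.Mock()
        mock_library.niFake_SimpleFunction.return_value = 0  # 0 means success

        # In a real test, the generated session object makes this call internally.
        status = mock_library.niFake_SimpleFunction()

        assert status == 0
        mock_library.niFake_SimpleFunction.assert_called_once_with()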

System Tests

System tests are to be written for all the supported Modular Instruments drivers. Their intent is to validate that the corresponding Python bindings (see the example after this list):

  • Can load the driver runtime
  • Can call into the driver runtime correctly
  • Use simulated devices
  • Don't have any errors in function metadata
    • Function calls go through correctly
    • Signature of public Python API is correct
  • Attribute metadata does not need to be validated here
    • It is fully code-generated
    • It is provided by NI
    • It is validated by NI's internal testing of the drivers
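A sketch of what such a test can look like for NI-DMM with a simulated device follows; the simulation option string and instrument model are illustrative and depend on what the driver runtime supports.

    import nidmm

    def test_read_on_simulated_device():
        # No hardware required: ask the driver to simulate a PXIe-4082 DMM.
        with nidmm.Session('FakeDevice', False, True,
                           'Simulate=1, DriverSetup=Model:4082; BoardType:PXIe') as session:
            session.configure_measurement_digits(nidmm.Function.DC_VOLTS, 10, 5.5)
            assert session.read() is not None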

System tests run when a pull request is submitted, and the results are posted to the pull request's "checks" section. These tests currently run on a system behind the National Instruments firewall, so results for failing tests must be posted to the failing PR manually.

You can also run the system tests manually; the system they run on must have the corresponding driver runtime installed. For example:

python3 -m pytest src/nidmm/system_tests/