Contributing Reference Implementations

This document covers contributing a reference implementation for a test case.

(Skip to the end for An example reference implementation.)

Writing a reference implementation

The exact content of a reference implementation is specific to the test case. Existing reference implementations are a source of inspiration.

Each reference implementation must be in the same directory as the test specification (testspec.adoc). It is ok for the reference implementation to be a symbolic link to a shared implementation in a different directory.

For example, a test case directory might be laid out as follows (a hypothetical layout using directory names from the example at the end of this document; the symbolic link target is illustrative):
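
tests/sync/G.8272/time-error-in-locked-mode/1PPS-to-DPLL/PRTC-A/
├── testspec.adoc
├── testimpl.py -> (symbolic link to a shared implementation)
└── config.yaml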

Reference implementations must:

  • Be appropriately documented

  • Be runnable from a shell/the command line

  • Take test input parameters from an associated configuration file (see Test input parameters)

  • Print results as a line of JSON-encoded text to stdout (see Test output)

  • Exit with code zero if a result was produced, non-zero otherwise
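
The following is a minimal sketch of a conforming implementation, assuming Python; the analysis step, parameter handling and result values are illustrative, and this is not the shared implementation referenced at the end of this document:

#!/usr/bin/env python3
"""Illustrative skeleton of a reference implementation (hypothetical)."""
import json
import sys

import yaml  # PyYAML


def main():
    # Test input parameters come from config.yaml in the same directory.
    with open("config.yaml", encoding="utf-8") as fid:
        config = yaml.safe_load(fid)
    try:
        # ... analyze the input data against `config` here ...
        result = True  # True on test success, False on test failure
        reason = None
    except ValueError as exc:
        result = "error"  # failed to produce a pass/fail result
        reason = str(exc)
    # Print the result as a single line of JSON-encoded text on stdout.
    print(json.dumps({"result": result, "reason": reason}))
    # A result line was produced, so exit with code zero.
    return 0


if __name__ == "__main__":
    sys.exit(main())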

Test input parameters

Test input parameters must be specified in an associated configuration file named config.yaml. This file must be supplied in the same directory as the reference implementation.

An example of a reference implementation with its configuration file is walked through at the end of this document. As an illustration, a hypothetical config.yaml might define parameters such as the following (names and values are invented for this sketch; real parameters are test-case specific):
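
# Hypothetical test input parameters; real names are test-case specific.
maximum_time_error_ns: 40
minimum_test_duration_s: 1000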

Test output

A reference implementation must print its results as a line of JSON-encoded text. The JSON value must be a JSON object:

  • including a pair at key "result" with a value of:

    • true on test success;

    • false on test failure; or,

    • "error" indicating failure to produce a result

  • including a pair at key "reason" with a value of:

    • a string providing any qualification of the result; or,

    • null (indicating no qualification of the result)

  • optionally including a pair at key "timestamp" with a value of:

    • a string capturing when the test was started (see timestamp values);

    • a non-negative number, representing a relative timestamp in seconds; or,

    • null (indicating no timestamp)

  • optionally including a pair at key "duration" with a value of:

    • a non-negative number, the duration of the test in seconds; or,

    • null (indicating no duration)

  • optionally including a pair at key "analysis" with a value of:

    • an object providing any relevant structured summarization of the result

All other key/value pairs are reserved for future use.
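
As an illustration, a small Python helper (hypothetical) that builds and prints a conforming result line:

import json


def print_result(result, reason=None, timestamp=None, duration=None, analysis=None):
    """Print a result line conforming to the schema above (sketch)."""
    obj = {"result": result, "reason": reason}
    # The remaining pairs are optional and may be omitted entirely.
    if timestamp is not None:
        obj["timestamp"] = timestamp
    if duration is not None:
        obj["duration"] = duration
    if analysis is not None:
        obj["analysis"] = analysis
    print(json.dumps(obj))

For example, print_result(False, "short test duration", duration=475.79) prints {"result": false, "reason": "short test duration", "duration": 475.79}.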

timestamp values

This section prescribes the format of a timestamp string, in order to promote consistency and avoid ambiguity.

A timestamp string is an ISO 8601 date-time string representing a UTC time value as per the RFC 3339 date-time ABNF rule.

For example, using the GNU date command:

$ date -u -Iseconds
2023-09-07T08:05:57+00:00
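
Or, for an implementation recording its own start time in Python:

from datetime import datetime, timezone

# UTC date-time per RFC 3339, e.g. "2023-09-07T08:05:57+00:00"
timestamp = datetime.now(timezone.utc).isoformat(timespec="seconds")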

Plotting test data

A reference implementation may be accompanied by a script to plot test data. If supplied, the script must:

  • Be in the same directory as the corresponding reference implementation

    • (It is ok for the script to be a symbolic link to a shared implementation.)

  • Have filename plot.py

  • Be appropriately documented

  • Be runnable from a shell/the command line

  • (If needed) use the same test input parameters as the reference implementation (taken from an associated configuration file, see Test input parameters)

  • Take an output filename prefix as its first command line argument

  • Take extra command line arguments which exactly match the sequence of command line arguments supplied to the reference implementation

  • Print files output as JSON-encoded text to stdout (see Image files output)

  • Exit with code non-zero on error, zero otherwise
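
A minimal sketch of a conforming plot script, assuming Python (the plotting itself and the output filename are illustrative):

#!/usr/bin/env python3
"""Illustrative skeleton of a plot script (hypothetical)."""
import json
import sys


def main():
    # The first argument is the output filename prefix; the remaining
    # arguments exactly match those given to the reference implementation.
    prefix = sys.argv[1]
    args = sys.argv[2:]
    # ... read the data named by `args` and write image files here ...
    path = f"{prefix}-terror.png"  # illustrative output filename
    # Print the files output as JSON-encoded text on stdout.
    print(json.dumps([{"path": path, "title": "time error"}]))
    return 0


if __name__ == "__main__":
    sys.exit(main())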

Image files output

A script for plotting test data must print its results as a line of JSON-encoded text. The JSON value must be a JSON array:

  • including an array item for each file output

  • with each array item being an object with pairs for:

    • path, the path to the image file output

    • title, a string title for the image
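
For example, a script that wrote two images might print (paths and titles illustrative):

[{"path": "test-terror.png", "title": "time error"}, {"path": "test-hist.png", "title": "time error histogram"}]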

Managing dependencies

git submodules are used to manage the dependencies this repository has on other repositories. Before raising a PR, ensure you have tested with the submodule versions on this repository’s main branch.

To initialize submodules and check them out at the recorded versions:

$ git submodule update --init --recursive

Publishing a reference implementation

A reference implementation is considered (publicly) published when the following conditions become true for the first time:

  • The corresponding test specification is published (see Publishing a test specification)

  • The reference implementation is on the main branch of this repository

Changing a reference implementation

Once published, a reference implementation is closed to modification.

(Conceptually, a published reference implementation can be "changed" by providing a new version of it with a new version of the test specification. How new versions of a test are to be handled is not yet defined.)

Changes to unpublished reference implementations can be accepted.

Submitting a PR

In addition to common good PR hygiene practices, a PR containing reference implementations must satisfy the following:

  • The PR must provide sufficient context for it to be reviewed as a single unit

  • The PR must contain a coherent set of new and changed test cases

  • No changes to published tests

  • Each reference implementation must be in the same directory as the test specification (testspec.adoc) for the test case

  • Each reference implementation must have an associated configuration file config.yaml in the same directory defining test input parameters

  • Each reference implementation must provide example data files demonstrating test success ("result": true)

  • Each reference implementation must provide example data files demonstrating test failure ("result": false)

  • Optionally, a reference implementation can provide example data files demonstrating failure to produce a result ("result": "error")

Each reference implementation should ideally follow practices consistent with existing similar reference implementations (by language, type of test, …).

An example reference implementation

The example demonstrated here is this reference implementation (which is actually a symbolic link to this shared Python implementation).

This reference implementation takes lines of JSON data as input. This is an example of data produced by a collector such as vse-sync-collection-tools. Alternatively your reference implementation could use system log files, for example those produced by linux-ptp-daemon-container.

(In all cases, it is the reference implementation’s responsibility to ensure it parses and handles its input data correctly.)
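
As an illustration of that responsibility, a hypothetical reader for line-oriented JSON input might look like:

import json


def read_samples(fid):
    """Yield decoded samples from lines of JSON data (sketch)."""
    for lineno, line in enumerate(fid, start=1):
        line = line.strip()
        if not line:
            continue  # tolerate blank lines
        try:
            yield json.loads(line)
        except json.JSONDecodeError as exc:
            # Malformed input: surface an "error" result rather than
            # silently skipping data.
            raise ValueError(f"bad input at line {lineno}: {exc}") from exc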

Example log files for test success and failure are provided alongside the test cases, under the examples directory used in the commands below.

The following demonstrates test success with this reference implementation:

$ cd tests/sync/G.8272/time-error-in-locked-mode/1PPS-to-DPLL/PRTC-A
$ PPATH=../../../../../../vse-sync-pp/src
$ PYTHONPATH=$PPATH python3 testimpl.py ../examples/dpll-PRTCA-accept.dat
{"result": true, "reason": null, "timestamp": 1877172.99, "duration": 2458.27, "analysis": {"terror": {"units": "ns", "min": -5.14, "max": 5.58, "range": 10.72, "mean": -0.001, "stddev": 2.453, "variance": 6.016}}}

The following demonstrates test failure with the same reference implementation but different data:

$ cd tests/sync/G.8272/time-error-in-locked-mode/1PPS-to-DPLL/PRTC-A
$ PPATH=../../../../../../vse-sync-pp/src
$ PYTHONPATH=$PPATH python3 testimpl.py ../examples/dpll-reject.dat
{"result": false, "reason": "short test duration", "timestamp": "2023-06-21T16:19:51+00:00", "duration": 475.7922967, "analysis": {"terror": {"units": "ns", "min": -3.49, "max": 5.84, "range": 9.33, "mean": 0.03, "stddev": 2.342, "variance": 5.486}}}