Commit fd7dc4a

[docs] Refer to InterUSS contributing procedure, improve contributing doc (#232)
BenjaminPelletier authored Oct 5, 2023
1 parent 0d773a2 commit fd7dc4a
Showing 3 changed files with 63 additions and 29 deletions.
41 changes: 32 additions & 9 deletions .github/workflows/CI.md

Before a pull request can be merged into the main branch, it must pass all automated tests for the repository. This document describes the tests and how to run them locally.

To run the equivalent of the full CI suite locally, simply execute `make presubmit`. This should perform all of the steps below, including bringing up a local interoperability ecosystem and all local USS mocks necessary to conduct tests. Therefore, `make presubmit` can (and should) be run starting without any pre-existing local infrastructure (interoperability ecosystem and/or USS mocks).
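
A minimal invocation, run from the repo root, might look like:

```shell
# Run the full CI-equivalent suite; brings up and tears down the local
# interoperability ecosystem and USS mocks automatically (slow)
make presubmit
```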

The downside of `make presubmit` is that it takes a long time, and a lot of that time is spent bringing up and tearing down local infrastructure (interoperability ecosystem and USS mocks). If your changes do not affect one or both of these things, you may save substantial time by bringing up an interoperability ecosystem once and leaving it up while developing, and possibly bringing up a set of USS mocks ([mock_uss](../../monitoring/mock_uss) instances) and leaving them up while developing (though obviously this will not work if developing changes that affect mock_uss). See uss_qualifier tests section below.

## Repository hygiene (`make check-hygiene`)

This set of tests ensures consistent formatting and performs other hygiene-related tasks that are not usually related to the actual functionality of the repository.

### Python lint (`make python-lint`)

To maintain consistency across this large codebase, InterUSS employs a very strict Python linter. But there is no need to satisfy the linter manually; simply run `make format` from the repo root to resolve most of the pure-formatting issues this check will detect.
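
For example, from the repo root:

```shell
# Auto-resolve most formatting issues detected by the lint check
make format

# Confirm the lint check now passes
make python-lint
```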

### Automated hygiene verification (`make hygiene`)

This check performs [miscellaneous hygiene checks](../../test/repo_hygiene/README.md). Currently, it ensures that local links in Markdown (*.md) files are not broken.

### uss_qualifier documentation validation (`make validate-uss-qualifier-docs`)

uss_qualifier [scenario documentation](../../monitoring/uss_qualifier/scenarios/README.md#documentation) is required, and it must follow strict requirements to ensure a strong, consistent, and correct interface between regulators and others concerned only with what scenarios do, and InterUSS contributors concerned with writing the actual code to perform the tests. This check validates many of these documentation requirements.

### Shell lint (`make shell-lint`)

This repository contains a large number of shell scripts. This check attempts to have these scripts follow best practices and avoid common pitfalls.

### Go lint (`make go-lint`)

## `monitoring` tests (`make check-monitoring`)

These tests verify functional behavior of the repository content.

### monitorlib tests (`make test` in monitoring/monitorlib)

### mock_uss tests (`make test` in monitoring/mock_uss)

This check runs unit tests for mock_uss.

### uss_qualifier tests (`make test` in monitoring/uss_qualifier)

Note: `export USS_QUALIFIER_STOP_FAST=true` is recommended as defining that environment variable will cause testing to stop on the first failure, and this is usually the desired behavior when debugging.
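
For example, a typical debugging run might look like:

```shell
# Stop testing at the first failing check (usually desired when debugging)
export USS_QUALIFIER_STOP_FAST=true

# Run the uss_qualifier tests (from the monitoring/uss_qualifier folder)
make test
```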

This action:

* Brings up a local interoperability ecosystem
* Brings up local USS mocks
* Runs monitoring/uss_qualifier/run_locally.sh
* Tears down local USS mocks
* Tears down local interoperability ecosystem

When debugging, changes very rarely change the behavior of the local interoperability ecosystem. So, it is usually faster to leave a single local interoperability ecosystem up for the entire debugging session. A local interoperability ecosystem can be brought up with `make start-locally` from the repo root, and torn down with `make down-locally` from the repo root.

When only making changes to uss_qualifier and not making changes that affect mock_uss, it is usually faster to leave the set of USS mocks running throughout a debugging session. The USS mocks can be brought up with `make start-uss-mocks` from the repo root, and torn down with `make stop-uss-mocks`.
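
Putting these together, a debugging session might follow this sketch (all commands from the repo root):

```shell
make start-locally     # bring up the local interoperability ecosystem once
make start-uss-mocks   # bring up the USS mocks (skip if changing mock_uss itself)

# ...edit code and rerun uss_qualifier tests as needed...

make stop-uss-mocks    # tear down the USS mocks when finished
make down-locally      # tear down the local interoperability ecosystem
```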

uss_qualifier's run_locally.sh ([monitoring/uss_qualifier/run_locally.sh](../../monitoring/uss_qualifier/run_locally.sh)) executes uss_qualifier in a Docker container connected to the local networks established by the local interoperability ecosystem and local USS mocks. It accepts an optional parameter specifying the test configuration to run. If this parameter is omitted (as it is in the CI test), all tracked configurations will be run (see run_locally.sh for the list). To speed up debugging, run just the configuration producing the error by [specifying it](../../monitoring/uss_qualifier/configurations/README.md#specifying); e.g., `monitoring/uss_qualifier/run_locally.sh configurations.dev.uspace`.
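
For example:

```shell
# Run all tracked configurations (as the CI test does):
monitoring/uss_qualifier/run_locally.sh

# Run only the configuration being debugged:
monitoring/uss_qualifier/run_locally.sh configurations.dev.uspace
```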

### prober tests (`make test` in monitoring/prober)

[prober](../../monitoring/prober/README.md) is a legacy test suite dedicated to integration testing of DSS instances. It is being migrated to test scenarios and suites in uss_qualifier, but in the meantime, it remains the standard way to validate DSS functionality. It runs on the DSS provided as part of the local interoperability ecosystem.
47 changes: 28 additions & 19 deletions CONTRIBUTING.md

Welcome to this repository and thank you for your interest in contributing to it.

Contributions should follow [the general InterUSS contributions process](https://github.com/interuss/tsc/blob/main/repo_contributions.md). Additional information specific to this repository is provided below.

## Integration tests

When [a PR is created](https://github.com/interuss/tsc/blob/main/repo_contributions.md#create-draft-pr-in-interuss-repository), the [continuous integration (CI) tests for this repository](./.github/workflows/CI.md) will run, and the PR will generally not be reviewed until they pass (unless [committer help is requested](https://github.com/interuss/tsc/blob/main/repo_contributions.md#request-committer-help-via-comment-in-pr) to address the failure). See [the continuous integration test documentation](./.github/workflows/CI.md) for how to run these tests on your local system more quickly and efficiently, to be confident your PR will pass the CI tests when created (or when updates are made).

### Failing "uss_qualifier tests" CI check

If `make presubmit` succeeds on a developer's local machine, the GitHub CI actions should succeed as well. However, [a known issue](https://github.com/interuss/monitoring/issues/28) frequently causes the "uss_qualifier tests" check to fail. If the failed check indicates a query response code of 999 (the code InterUSS uses when no response is received), this is very likely the problem. A committer can rerun the CI check, and it is likely to succeed on the second try with no changes.

If anyone can resolve [issue #28](https://github.com/interuss/monitoring/issues/28) which causes this problem, that help would be enormously appreciated by InterUSS.

## uss_qualifier test scenarios

[uss_qualifier](monitoring/uss_qualifier/README.md) is InterUSS's automated testing framework. To contribute new test scenarios or test scenario updates, the following process should generally be followed.

### Requirements

The purpose of uss_qualifier is to validate that a system under test meets a set of requirements. A test scenario is simply a procedure to arrange situations in which a system's compliance with requirements can be measured. Before development of a test scenario can begin, the requirements it is intended to verify must be defined and identified. uss_qualifier has [a specific, required way](monitoring/uss_qualifier/requirements/README.md) to document requirements in Markdown files. For requirements originating from copyrighted documents such as ASTM standards, the full text of the requirements should generally not be copied into these requirement definitions, but the existence of each requirement must be explicitly declared even if its explanatory content does not exist in this repository.

### Implementation plan

If the test to be implemented will require new functionality outside of uss_qualifier (e.g., in mock_uss or in automated_testing_interfaces), the test is likely a large contribution and should usually follow the portions of the [general contributing procedure](https://github.com/interuss/tsc/blob/main/repo_contributions.md#contributing-procedure) regarding creating an issue and/or discussing the approach.

When developing a test involves multiple phases (e.g., test scenario X depends on resource Y, which will follow automated test interface Z), the phases with no incomplete dependencies should generally be completed before the dependent phases are started (e.g., automated test interface Z should be completed before work on resource Y, which should be completed before work on test scenario X).

Phases that can be completed or progressed in a single small PR that does not depend on concepts or tools that are not yet fully implemented do not require the pre-coordination described above.

### Scenario documentation

uss_qualifier test scenario documentation describes what is needed for the scenario ([resources](monitoring/uss_qualifier/resources/README.md)), what will happen in the scenario ([test cases, test steps](monitoring/uss_qualifier/scenarios/README.md#test-scenarios)), and which observations may result in concluding that a system under test fails to meet one or more requirements ([test checks](monitoring/uss_qualifier/scenarios/README.md#test-checks)). Documented test checks are the only points at which uss_qualifier may indicate that a system under test has failed part or all of the test, and each check should indicate which [requirements](#requirements) uss_qualifier concludes the system under test has failed to meet when it fails that check. Documentation for test checks that are not yet implemented in the test scenario Python code should include a `TODO: Implement` note (or similar).

If the test scenario will be implemented in a separate PR from the documentation (usually recommended), the PR containing test scenario documentation should also include an empty [test scenario implemented in Python](monitoring/uss_qualifier/scenarios/README.md#structure) whose overridden `run` method contains just a `pass` statement; this ensures that the test scenario documentation format and content are validated by the CI. See [the noop test scenario](monitoring/uss_qualifier/scenarios/dev/noop.py) for an example of a nearly-empty test scenario.

If the test scenario is long, note that the documentation does not need to be created in a single PR. Small PRs are preferred, and it may make sense, for instance, to write the documentation for one test case per PR.

### Scenario implementation

Once all necessary prerequisites (e.g., resources) are available and the test scenario documentation is complete, actually writing the code of the test scenario should be fairly straightforward. Before creating a PR, the test scenario code must at least have been successfully tested by the developer. Ideally, a test configuration included in the CI (see list of configurations tested in [uss_qualifier's run_locally.sh](monitoring/uss_qualifier/run_locally.sh)) should run the test scenario. If the test scenario is not run as part of the CI, the PR author must clearly indicate why they are sure the test scenario has been implemented correctly.
4 changes: 3 additions & 1 deletion monitoring/uss_qualifier/scenarios/README.md
Test scenarios will usually be enumerated/identified/created by mapping a list of…
While unmapped requirements still exist:
Create new, simple test scenario that verifies a set of remaining unmapped requirements.

See [CONTRIBUTING.md](../../../CONTRIBUTING.md#ussqualifier-test-scenarios) for more information on how to develop test scenarios.

## Resources

Most test scenarios will require [test resources](../resources/README.md) (like NetRID telemetry to inject, NetRID service providers under test, etc) usually customized to the ecosystem in which the tests are being performed. A test scenario declares what kind of resource(s) it requires, and a test suite identifies which available resources should be used to fulfill each test scenario's needs.
If the text of this section includes `TODO:`, then the check will be indicated a…

### Cleanup phase

If a test scenario wants to perform a cleanup procedure following any non-error termination of the rest of the scenario, it must:

1) Override the `cleanup()` method on the base `TestScenario` class
2) Include a main section in the documentation named "Cleanup" that is documented like a test step (including, e.g., test checks when appropriate).
