Eclipse BlueChi™ integration tests

Installation

The integration tests use the RESTful API of podman to isolate BlueChi and the agents on multiple containerized nodes. Therefore, a working installation of podman is required. Please refer to the podman installation instructions.

Installing packages using RPM

First, enable the required repositories on CentOS Stream 9:

sudo dnf install -y dnf-plugin-config-manager
sudo dnf config-manager -y --set-enabled crb
sudo dnf install -y epel-release

Then install the required packages:

sudo dnf install \
    black \
    createrepo_c \
    podman \
    python3-isort \
    python3-flake8 \
    python3-paramiko \
    python3-podman \
    python3-pytest \
    python3-pyyaml \
    tmt \
    tmt-report-junit \
    -y

NOTE: Integration test code must remain compatible with Python 3.9; please don't use features from newer versions.

Installing packages using pip

All required python packages are listed in requirements.txt and can be installed using pip:

pip install -U -r requirements.txt

Instead of installing the required packages directly, it is recommended to create a virtual environment. For example, the following snippet uses the built-in venv:

python -m venv ~/bluechi-env
source ~/bluechi-env/bin/activate
pip install -U -r requirements.txt
# ...

# exit the virtual env
deactivate

Configure podman socket access for users

The testing infrastructure accesses podman through its socket, so the user socket needs to be enabled and started:

systemctl --user enable podman.socket
systemctl --user start podman.socket

Running integration tests

Integration tests are executed with the tmt framework.

To run the integration tests, execute the following command in the tests directory:

tmt run -v plan --name container

This will use the latest BlueChi packages from the bluechi-snapshot repository.

Note: The integration tests can be run in two modes - container and multi-host. For local execution it is advised to select container mode (hence the plan --name container).

Running integration tests with memory leak detection

To run the integration tests with valgrind, set the WITH_VALGRIND environment variable as follows:

tmt run -v -eWITH_VALGRIND=1 plan --name container

If valgrind detects a memory leak in a test, the test will fail and the valgrind logs can be found in the test data directory.

Running integration tests with local BlueChi build

In order to run integration tests for your local BlueChi build, you need to have BlueChi RPM packages built from your source code. Details about BlueChi development can be found in README.developer.md; the most important part for running integration tests is the Packaging section.

In the following steps, the BlueChi sources are assumed to be located in the ~/bluechi directory.

The integration tests expect the local BlueChi RPMs to be located in the tests/bluechi-rpms subdirectory. In addition, since the tests run in CentOS Stream 9 based containers, the RPMs must also be built for CentOS Stream 9. To this end, a containerized build infrastructure is available.

The containerized build infrastructure depends on skipper, which is installed via the requirements.txt file:

cd ~/bluechi
skipper make rpm

When done, create a DNF repository from those RPMs:

createrepo_c ~/bluechi/tests/bluechi-rpms

After that step, the integration tests can be executed using the following command:

cd ~/bluechi/tests
tmt run -v -eCONTAINER_USED=integration-test-local plan --name container

Creating code coverage report from integration tests execution

To produce a code coverage report from the integration test execution, you need to build the BlueChi RPMs with code coverage support:

cd ~/bluechi
skipper make rpm WITH_COVERAGE=1
createrepo_c ~/bluechi/tests/bluechi-rpms

When done, run the integration tests with code coverage reporting enabled:

tmt run -v -eCONTAINER_USED=integration-test-local -eWITH_COVERAGE=1 plan --name container

After the integration tests finish, the HTML code coverage result can be found in the res subdirectory inside the tmt execution result directory, for example:

/var/tmp/tmt/run-001/plans/tier0/report/default-0/report

Developing integration tests

Code Style

Several tools are used in the project to validate code style:

  • flake8 is used to enforce a unified code style.
  • isort is used to enforce import ordering.
  • black is used to enforce code formatting.

The formatting of all source files can be checked or fixed using the following commands, executed from the top-level directory of the project (flake8 only reports issues, while isort and black rewrite files in place):

flake8 tests
isort tests
black tests

Changing log level

By default, the BlueChi integration tests use the INFO log level to display important information about the test run. More detailed information can be displayed by setting the log level to DEBUG:

cd ~/bluechi/tests
tmt run -v -eLOG_LEVEL=DEBUG plan --name container

Using python bindings in tests

The python bindings can be used in the integration tests to simplify writing them. However, it is not possible to use the bindings directly in the tests, since they leverage BlueChi's D-Bus API provided on the system D-Bus. Instead, a separate script has to be written, injected, and executed within the container running the BlueChi controller. To keep the usage simple, the BluechiControllerContainer class provides a function that abstracts these details:

# run the file ./python/monitor.py located in the current test directory
# and get the exit code as well as the output (e.g. all print())
exit_code, output = ctrl.run_python("python/monitor.py")

A full example of how to use the python bindings can be found in the monitor open-close test.

Generating test ID

Every test should be identified by a unique ID. Therefore, when adding a new test, please execute the following command to assign an ID to it:

$ cd ~/bluechi/tests
$ tmt test id .
New id 'UUID' added to test '/tests/path_to_your_new_test'.
...

Checking for duplicate test IDs and summaries

In addition to having a unique ID, the summaries of tests should be descriptive and unique as well. The CI will perform appropriate linting. This can also be invoked locally:

$ cd ~/bluechi/tests

# requires tmt >= 1.35
$ tmt lint tests
Lint checks on all
fail G001 duplicate id "96aa0e17-5e23-4cc3-bc34-88368b8cc07b" in "/tests/tier0/bluechi-agent-connect-via-controller-address"
fail G001 duplicate id "96aa0e17-5e23-4cc3-bc34-88368b8cc07b" in "/tests/tier0/bluechi-agent-get-logtarget"

Usage of containers

The integration tests rely on containers as separate compute entities. These containers are used to simulate BlueChi's functional behavior on a single runner.

Both integration-test-local and integration-test-snapshot are based on the integration-test-base image, which contains core dependencies such as systemd and devel packages. The base image is published to https://quay.io/repository/bluechi/integration-test-base.

Updating container images in registry

The base images can either be built and pushed locally, or via a github workflow, to the bluechi organization on quay.io and its repositories. If any updates are required, please reach out to the code owners.

Building and pushing via workflow

The base images build-base and integration-test-base can be built and pushed to quay by using the Container Image Workflow, which can be found and triggered in the Actions tab of the BlueChi repo.

Building and pushing locally

The base images build-base and integration-test-base are built for multiple architectures (arm64 and amd64) using the build-push-containers.sh script. It builds the images for the supported architectures as well as a manifest, which can then be pushed to the registry.

To build for multiple architectures, the following packages are required:

sudo dnf install -y podman buildah qemu-user-static

From the root directory of the project run the following commands:

# In order to build and directly push, login first
buildah login -u="someuser" -p="topsecret" quay.io
PUSH_MANIFEST=yes ./build-scripts/build-push-containers.sh build-base

# Only build locally
./build-scripts/build-push-containers.sh build-base

If you only need to build a specific architecture for local usage, you can specify it as the second parameter:

./build-scripts/build-push-containers.sh build-base amd64