
CertSuite Sample Workload

This repository provides the infrastructure to create a CNF test partner Pod.

This repository contains two main sections:

  • test-partner: Partner debug pod definitions for use on a k8s CNF Certification cluster, used to run platform and networking tests.
  • test-target: A trivial example CNF (including a ReplicaSet/Deployment, a CRD, and an operator), primarily intended to be used to run certsuite test suites on a development machine.

Together, they make up the basic infrastructure required for "testing the tester". The partner debug pod is always required for platform tests and networking tests.

Glossary

  • Pod Under Test (PUT): The Vendor Pod, usually provided by a CNF Partner.
  • Operator Under Test (OT): The Vendor Operator, usually provided by a CNF Partner.
  • Debug Pod (DP): A Pod running a UBI8-based support image, deployed as part of a DaemonSet for accessing node information. DPs are deployed in the "cnf-suite" namespace (a minimal illustrative manifest follows this glossary).
  • CRD Under Test (CRD): A basic CustomResourceDefinition.
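
A DaemonSet is what guarantees one debug pod per node. For orientation only, here is a minimal sketch of such a manifest; the name, namespace, labels, and image are illustrative assumptions, not the repository's actual definition:

cat <<- EOF > debug-daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: debug            # illustrative; "make install" below creates the real one
  namespace: cnf-suite   # assumption, see the Namespace section
spec:
  selector:
    matchLabels:
      app: debug
  template:
    metadata:
      labels:
        app: debug
    spec:
      containers:
      - name: debug
        image: registry.access.redhat.com/ubi8/ubi  # a UBI8-based support image
        command: ["sleep", "infinity"]              # keep the pod alive for node access
EOF
oc apply -f debug-daemonset.yaml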

Prerequisites

Namespace

By default, DPs are deployed in the "default" namespace, while all other deployment files in this repository use tnf as the default namespace. A specific namespace can be configured using:

export CERTSUITE_EXAMPLE_NAMESPACE="tnf"  # "tnf" is just an example
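
If the namespace does not already exist in your cluster (whether the Makefile creates it for you is an assumption to verify in your environment), it can be created up front:

oc create namespace "$CERTSUITE_EXAMPLE_NAMESPACE"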

On-demand vs always on debug pods

By default, debug pods are installed on demand when the tnf test suite is deployed. To instead deploy debug pods on all nodes in the cluster, set the following environment variable:

export ON_DEMAND_DEBUG_PODS=false
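
After deployment, you can confirm whether the debug DaemonSet is scheduled on every node; since the namespace varies by configuration (see above), listing across all namespaces is the safest check:

oc get daemonset --all-namespaces -o wide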

Cloning the repository

The repository can be cloned to a local machine using:

git clone git@github.com:redhat-best-practices-for-k8s/certsuite-sample-workload.git

Installing the Test-target

Although any CNF Certification results should be generated on a proper CNF Certification cluster, there are times when a local emulator can greatly help with test development. As such, test-target provides a simple PUT, OT, and CRD that satisfy the minimal requirements to perform test cases. These can be used in conjunction with a local kind cluster for local test development.

Dependencies

To run the local test setup, a few dependencies are needed; the sections below cover their installation.

Setup with docker and kind

Install the latest docker version ( https://docs.docker.com/engine/install/fedora ):

sudo dnf remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-selinux \
                  docker-engine-selinux \
                  docker-engine

sudo dnf -y install dnf-plugins-core

sudo dnf config-manager \
    --add-repo \
    https://download.docker.com/linux/fedora/docker-ce.repo

sudo dnf install docker-ce docker-ce-cli containerd.io

Perform the post-install steps ( https://docs.docker.com/engine/install/linux-postinstall ):

 sudo systemctl start docker.service
 sudo systemctl enable docker.service
 sudo systemctl enable containerd.service
 sudo groupadd docker
 sudo usermod -aG docker $USER
 newgrp docker 
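
As a quick check that the group change took effect, Docker's post-install guide suggests running the hello-world image without sudo:

docker run hello-world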

Configure IPv6 in docker ( https://docs.docker.com/config/daemon/ipv6/ ):

# update docker config
sudo bash -c 'cat <<- EOF > /etc/docker/daemon.json
{
  "ipv6": true,
  "fixed-cidr-v6": "2001:db8:1::/64"
}
EOF'
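
Docker only reads daemon.json at startup, so restart the service for the IPv6 settings to take effect:

sudo systemctl restart docker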

Enable IPv6 with:

sudo sysctl -w net.ipv6.conf.all.disable_ipv6=0
sudo sysctl -w net.ipv6.conf.default.disable_ipv6=0
sudo sysctl -w net.ipv6.conf.lo.disable_ipv6=0
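
To double-check that the kernel picked up the change (a value of 0 means IPv6 is enabled):

sysctl net.ipv6.conf.all.disable_ipv6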

To persist IPv6 support, edit or add the following lines in the /etc/sysctl.conf file:

net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0

Disable the firewall, if present, as Multus interfaces will otherwise not be able to communicate.

Note: if docker is already running, also restart it after running the command below, as taking the firewall down removes the docker rules:

sudo systemctl stop firewalld

Restart docker:

sudo systemctl restart docker

Download and install Kubernetes in Docker (kind):

curl -Lo kind https://github.com/kubernetes-sigs/kind/releases/download/v0.24.0/kind-linux-amd64
chmod +x kind
sudo mv kind /usr/local/bin/kind
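
Confirm the binary is installed and on your PATH:

kind version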

Configure a cluster with three worker nodes and one control-plane node ( dual stack ):

cat <<- EOF > config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  ipFamily: dual
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
EOF
kind create cluster --config=config.yaml
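
Once creation finishes, verify that all four nodes (one control-plane and three workers) are Ready:

kubectl get nodes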

Increase the max files limit to prevent issues due to the large cluster size ( see https://kind.sigs.k8s.io/docs/user/known-issues/#pod-errors-due-to-too-many-open-files ):

sudo sysctl fs.inotify.max_user_watches=524288
sudo sysctl fs.inotify.max_user_instances=512

To make the changes persistent, edit the file /etc/sysctl.conf and add these lines:

fs.inotify.max_user_watches = 524288
fs.inotify.max_user_instances = 512
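
Reload /etc/sysctl.conf without rebooting:

sudo sysctl -p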

Deploy both the test target and the test partner as a local-test-infra

To create the resources, issue the following command:

make install

This will create a PUT named "test" in the CERTSUITE_EXAMPLE_NAMESPACE namespace and a debug DaemonSet named "debug". The example certsuite_config.yml in certsuite will use this local infrastructure by default.

Note that this command also creates OT and CRD resources.
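
The CRD and deployment names depend on the example operator bundled with the repository, so the checks below stay generic:

oc get crds
oc get deployments -n "$CERTSUITE_EXAMPLE_NAMESPACE"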

To verify test pods are running:

oc get pods -n $CERTSUITE_EXAMPLE_NAMESPACE -o wide

You should see something like this (note that the two test pods are running on different nodes due to an anti-affinity rule):

$ oc get pods -ntnf -owide
NAME                                                    READY   STATUS    RESTARTS   AGE
hazelcast-platform-controller-manager-6bbc968f9-fmmbs   1/1     Running   0          3m19s
test-0                                                  1/1     Running   0          84m
test-1                                                  1/1     Running   0          83m
test-66f77bd94-2w4l8                                    1/1     Running   0          85m
test-66f77bd94-6kd6j                                    1/1     Running   0          85m
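
To confirm the spread across nodes explicitly, the node assignment can be listed with custom columns:

oc get pods -n "$CERTSUITE_EXAMPLE_NAMESPACE" -o custom-columns=NAME:.metadata.name,NODE:.spec.nodeName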

Delete local-test-infra

To tear down the local test infrastructure from the cluster, use the following command. It may take some time to completely stop the PUT, CRD, OT, and DP:

make clean
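
Once teardown finishes, the namespace should report no remaining test pods:

oc get pods -n "$CERTSUITE_EXAMPLE_NAMESPACE"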

Setup with Vagrant, docker and kind (Mac OS support)

Install vagrant for your platform:

https://www.vagrantup.com/downloads

To build the environment, including deploying the test CNF, do the following:

make vagrant-build

The kubeconfig for the new environment will override the file located at ~/.kube/config. You can start running commands from the command line to test the new cluster:

oc get pods -A

To destroy the vagrant environment, do the following:

make vagrant-destroy

To access the virtual machine supporting the cluster, do the following:

cd config/vagrant
[user@fedora vagrant]$ vagrant ssh
[vagrant@k8shost ~]$

The partner repo scripts are located in ~/partner.

Setup with podman, qemu, and kind (Mac OS Ventura)

brew install kind podman qemu
export KIND_EXPERIMENTAL_PROVIDER=podman
kind create cluster
git clone git@github.com:redhat-best-practices-for-k8s/certsuite-sample-workload.git &&
  cd certsuite-sample-workload &&
  make rebuild-cluster; make install
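
With the podman provider, each kind node runs as a podman container, so a quick sanity check is:

podman ps --filter name=kind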

License

CertSuite Sample Workload is copyright Red Hat, Inc. and available under an Apache 2 license.
