Commit a0c4dde: Sets EXPOSE_SERVICE_HOST env variable

Closes #2164

karesti authored and ryanemerson committed Oct 3, 2024
1 parent 554ea6b commit a0c4dde
Showing 6 changed files with 130 additions and 88 deletions.

191 changes: 108 additions & 83 deletions README.md
# ![Infinispan Operator](./infinispanOperator.png)
[![License](https://img.shields.io/github/license/infinispan/infinispan?style=for-the-badge&logo=apache)](https://www.apache.org/licenses/LICENSE-2.0)
[![Project Chat](https://img.shields.io/badge/zulip-join_chat-brightgreen.svg?style=for-the-badge&logo=zulip)](https://infinispan.zulipchat.com/)

The **Infinispan Operator** is the official Kubernetes operator for managing Infinispan clusters.
It automates the deployment, scaling, and lifecycle management of Infinispan instances using Custom Resource Definitions (CRDs).
With the Infinispan Operator, users can easily create, configure, and monitor the Infinispan distributed in-memory database in a
Kubernetes environment, ensuring high availability and resilience for their applications.

## System Requirements

* [Golang 1.21](https://github.com/golang/go) or higher.
* [Docker](https://www.docker.com/) or [Podman](https://podman.io/)
* [Operator SDK 1.24.1](https://github.com/operator-framework/operator-sdk/releases/tag/v1.24.1)
* A running [Kubernetes](https://kubernetes.io/) cluster

## Usage Documentation
For details on **how to use** the operator, please read the **[official Infinispan Operator documentation](https://infinispan.org/docs/infinispan-operator/main/operator.html)**.

# Developer Guide
This guide is intended for developers who wish to build and contribute to the Operator.

## Kubernetes
A Kubernetes cluster is necessary to develop and run the Operator. You can use [KIND](https://sigs.k8s.io/kind) to get a local cluster for
testing, or run against a remote cluster.

**Note:** Your controller will automatically use the current context in your kubeconfig file (i.e. whatever cluster `kubectl cluster-info` shows).

### Setting a Local Kubernetes Cluster using Docker and Kind
In the scripts folder, you will find scripts to create a local KIND cluster backed by a local [Docker](https://www.docker.com/)
repository and integrated with the Operator Lifecycle Manager ([OLM](https://olm.operatorframework.io/)).

First, run:
```sh
./scripts/ci/kind-with-olm.sh
```

Then install [cert-manager](https://cert-manager.io) on it:

```sh
make deploy-cert-manager
```

### Podman and Podman Desktop
If you are using Windows or Mac, you can use [Podman](https://podman.io/) and [Podman Desktop](https://podman-desktop.io/)
as alternative tools to create a local Kubernetes cluster for development purposes, allowing you to set up a Kind,
Microshift, or OpenShift Kubernetes cluster.

## Skaffold
We utilise [Skaffold](https://skaffold.dev/) to drive CI/CD, so you will need to download the latest binary in order to
follow the steps below.

## Development
Build the Operator image and deploy it to a cluster:

```sh
skaffold dev
```

See more on [Skaffold Documentation](https://skaffold.dev/docs/).

Changes to the local `**/*.go` files will result in the image being rebuilt and the Operator deployment updated.

## Testing

### Unit Tests
Run all the unit tests by calling:

```sh
make test
```

### Integration Tests

The different categories of integration tests can be executed with the following commands:

- `make infinispan-test`
- `make cache-test`
- `make batch-test`
- `make multinamespace-test`
- `make backuprestore-test`

The target cluster should be specified by exporting or explicitly providing `KUBECONFIG`, e.g. `make infinispan-test KUBECONFIG=/path/to/admin.kubeconfig`.

#### Env Variables
The following variables can be exported or provided as part of the `make *test` call.

| Variable              | Purpose                                                                          |
|-----------------------|----------------------------------------------------------------------------------|
| `TEST_NAME`           | Specify a single test to run                                                     |
| `TESTING_NAMESPACE`   | Specify the namespace/project for running tests                                  |
| `RUN_LOCAL_OPERATOR`  | Specify whether to run the operator locally or use the predefined installation   |
| `EXPOSE_SERVICE_TYPE` | Specify the expose service type. `NodePort \| LoadBalancer \| Route`.            |
| `EXPOSE_SERVICE_HOST` | Specify the service host. Useful to pass `localhost`.                            |
| `PARALLEL_COUNT`      | Specify the parallel test count. Default is one, i.e. no parallel tests enabled. |

The following command runs a single integration test called `TestBaseFunctionality`:

```sh
make infinispan-test TEST_NAME=TestBaseFunctionality
```

### Xsite
Cross-site tests require you to create two Kubernetes Kind clusters or utilize already prepared OKD clusters:
```sh
$ source scripts/ci/configure-xsite.sh $KIND_VERSION $METALLB_VERSION
```
To test locally in running Kind clusters, run:
```sh
$ go test -v ./test/e2e/xsite/ -timeout 30m
```
## ARM Support
To test locally on ARM machines, a few changes are necessary.
1. `scripts/ci/kind.sh` needs to be executed with the following env variables:
```bash
DOCKER_REGISTRY_IMAGE="registry:2"
KINDEST_IMAGE="kindest/node"
KINDEST_NODE_VERSION="v1.28.7" # Must be >= 1.28.x to ensure ARM support works as expected
```

2. When executing `make infinispan-test`, set `TEST_NGINX_IMAGE="nginx"` so a multi-arch nginx container is used.

## Debugging
Build the Operator image with [dlv](https://github.com/go-delve/delve) so that a remote debugger can be attached
to the Operator deployment from your IDE.
[...] on the cluster. To build and push the operator images to a remote repository, add:

```sh
skaffold run --default-repo <remote_repo>
```

## OLM Bundle
The OLM bundle manifests are created by executing `make bundle VERSION=<latest-version>`.

This will create a `bundle/` dir in your local repository containing the bundle metadata and manifests, as well as a [...]

The bundle image can be created and pushed to a repository with:
make bundle-build bundle-push VERSION=<latest-version> IMG=<operator-image> BUNDLE_IMG=<bundle-image>
```

## Operator Version
The next version of the Operator to be released is stored in the `./version.txt` file at the root of
the project. The contents of this file are used to control the generation of documentation and
other resources. **Update this file after each Operator release**.

## Add a new Infinispan Operand
1. Call the "Add Operand" GitHub Action

## Release
Follow these steps to release the Infinispan Operator:

1. Tag the release `git tag <x.y.z>` and push to GitHub
2. Create and push the multi-arch image using the created tag via the "Image Release" GitHub Action
3. Remove the old bundle from local `rm -rf bundle`
4. Create OLM bundle `make bundle VERSION=<x.y.z> CHANNELS=stable DEFAULT_CHANNEL=stable IMG=quay.io/infinispan/operator:<x.y.z>.Final`
5. [...]
   - https://github.com/redhat-openshift-ecosystem/community-operators-prod
6. Once PR in 5 has been merged and Operator has been released to OperatorHub, update the "replaces" field in `config/manifests/bases/infinispan-operator.clusterserviceversion.yaml`
to `replaces: infinispan-operator.v<x.y.z>`
7. **Update the `version.txt` file to the next release version**
8. Update `scripts/ci/install-catalog-source.sh` `VERSION` field to the next release version
9. Update `scripts/create-olm-catalog.sh` to include the just released version in `BUNDLE_IMGS` and the next release version in the update graph
10. Commit changes with an appropriate commit message, e.g. "Next Version <x.y.z>"

4 changes: 4 additions & 0 deletions api/v1/types_util.go

```go
func (ispn *Infinispan) IsExposed() bool {
	return ispn.Spec.Expose != nil && ispn.Spec.Expose.Type != ""
}

func (ispn *Infinispan) GetExposeHost() string {
	return ispn.Spec.Expose.Host
}

func (ispn *Infinispan) GetExposeType() ExposeType {
	return ispn.Spec.Expose.Type
}
```
Binary file added infinispanOperator.png
8 changes: 8 additions & 0 deletions test/e2e/utils/common.go

```go
func ExposeServiceSpec(testKube *TestKubernetes) *ispnv1.ExposeSpec {
	return &ispnv1.ExposeSpec{
		Type: exposeServiceType(testKube),
		Host: exposeServiceHost(),
	}
}

func exposeServiceHost() string {
	if ExposeServiceHost != "" {
		return ExposeServiceHost
	}
	return ""
}

func exposeServiceType(testKube *TestKubernetes) ispnv1.ExposeType {
	switch ispnv1.ExposeType(ExposeServiceType) {
	case ispnv1.ExposeTypeNodePort, ispnv1.ExposeTypeLoadBalancer:
		// [...]
```
6 changes: 3 additions & 3 deletions test/e2e/utils/constants.go

```go
var (
	CleanupInfinispan  = strings.ToUpper(constants.GetEnvWithDefault("CLEANUP_INFINISPAN_ON_FINISH", "true"))
	SuiteMode, _       = strconv.ParseBool(constants.GetEnvWithDefault("SUITE_MODE", "false"))
	ExposeServiceType  = constants.GetEnvWithDefault("EXPOSE_SERVICE_TYPE", string(ispnv1.ExposeTypeNodePort))
	ExposeServiceHost  = constants.GetEnvWithDefault("EXPOSE_SERVICE_HOST", "")
	Infrastructure     = os.Getenv("TESTING_INFRASTRUCTURE")
	Platform           = os.Getenv("TESTING_PLATFORM")

	WebServerName      = "external-libs-web-server"
	WebServerImageName = constants.GetEnvWithDefault("TEST_NGINX_IMAGE", "quay.io/openshift-scale/nginx")
	// [...]
```
9 changes: 7 additions & 2 deletions test/e2e/utils/kubernetes.go

```go
// within func (k TestKubernetes) WaitForExternalService(ispn *ispnv1.Infinispan, timeout ...)
if len(routeList.Items) > 0 {
	switch ispn.GetExposeType() {
	case ispnv1.ExposeTypeNodePort:
		var host string
		if ispn.GetExposeHost() != "" {
			host = ispn.GetExposeHost()
		} else {
			host, err = k.Kubernetes.GetNodeHost(log, context.TODO())
			ExpectNoError(err)
		}
		hostAndPort = fmt.Sprintf("%s:%d", host, getNodePort(&routeList.Items[0]))
	case ispnv1.ExposeTypeLoadBalancer:
		hostAndPort = k.Kubernetes.GetExternalAddress(&routeList.Items[0])
	// [...]
```
