
installation without olm via helm or manifests #2150

Open
travisghansen opened this issue Sep 20, 2024 · 12 comments

Comments

@travisghansen

I have many clusters I would like to roll the operator out to where OLM is undesirable. Is there a set of raw manifests, or some way to generate a Helm chart, that would be compatible with installing the operator?

Thanks!

@tristantarrant
Member

Have you looked at https://github.com/infinispan/infinispan-helm-charts ?

@travisghansen
Author

Yes, but that appears not to be the operator, correct? It's a single standalone install?

@ryanemerson
Contributor

Hi @travisghansen. It's possible to deploy the latest version of the Operator without OLM by logging into your k8s cluster and then calling make deploy. This will also install the cert-manager operator if it's not already present, as it is required for the Operator's webhooks to work as expected.
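For reference, a minimal sketch of that flow (assuming a kubeconfig already pointing at the target cluster and the default operator image):

git clone https://github.com/infinispan/infinispan-operator.git
cd infinispan-operator
# renders the manifests with kustomize and applies them to the current kube context,
# installing cert-manager first if it is not already present
make deploy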

@travisghansen
Author

Awesome! I will look into that, as it seems promising. Is it possible to just generate the YAML manifests (ensuring no deps like cert-manager are included) without applying them? We manage everything via GitOps, so running that directly against a cluster isn't really desirable either.

@travisghansen
Author

Digging into the Makefile to see if I can come up with something useful. I think I generally have a grip on running the kustomize commands to generate some files that could be used... it appears the RBAC pieces are configured to handle a single namespace. Is it not possible to make the operator function in a cluster-wide fashion? Or, if I fix up the RBAC bits and set WATCH_NAMESPACE to "" in the deployment, should it work?

@ryanemerson
Contributor

if I fix up the RBAC bits and set WATCH_NAMESPACE to "" in the deployment, should it work?

That should work 👍

You can use a kustomize patch to set WATCH_NAMESPACE, and the RBAC should just be a case of converting the Role and RoleBinding to a ClusterRole and ClusterRoleBinding respectively.

Is it possible to just generate the YAML manifests (ensuring no deps like cert-manager are included) without applying them? We manage everything via GitOps, so running that directly against a cluster isn't really desirable either.

You could update the deploy target in the Makefile to remove the kubectl apply -f - part of the configuration and also remove the cert-manager dependency, e.g.:

.PHONY: deploy
deploy: manifests kustomize
	cd config/manager && $(KUSTOMIZE) edit set image operator=$(IMG)
	cd config/default && $(KUSTOMIZE) edit set namespace $(DEPLOYMENT_NAMESPACE)
	$(KUSTOMIZE) build config/default
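With that change the rendered manifests land on stdout. Since make also echoes the recipe commands, a variation (just a sketch; the target name and output path are made up) writes them straight to a file for a GitOps repo instead:

.PHONY: manifests-only
manifests-only: manifests kustomize
	cd config/manager && $(KUSTOMIZE) edit set image operator=$(IMG)
	cd config/default && $(KUSTOMIZE) edit set namespace $(DEPLOYMENT_NAMESPACE)
	$(KUSTOMIZE) build config/default -o operator.yaml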

@travisghansen
Author

Thank you for the pointers, I will see what I can put together and report back.

@travisghansen
Author

Is there a canonical way to determine which IMG version should be used for a given tag/checkout of the repo? It appears there isn't a direct alignment between app versions and operator versions...

@travisghansen
Author

It's crude but functional, I believe; any feedback about outcomes is appreciated:

#!/bin/bash

# TODO: make cluster-wide optional
# TODO: make certs/issuer optional
# TODO: make `Role` / `ClusterRole` merging more sane with yq or similar

set -x
set -e

SCRIPT_DIR="${PWD}"
TMP="/tmp"
CHART="infinispan-operator"
NAME="${CHART}"
CLONE_URL="https://github.com/infinispan/infinispan-operator.git"
VERSION="2.4.3.Final"
IMG="quay.io/infinispan/operator:${VERSION}"
DEPLOYMENT_NAMESPACE="replace-me-operators-zzzzzzzzzzzzzzzzzzz"

# clean up
rm -rf "${TMP}/${CHART}"

# checkout correct version
git clone "${CLONE_URL}" "${TMP}/${CHART}"
cd "${TMP}/${CHART}"
git checkout "${VERSION}"


mkdir -p _chart/crds
mkdir -p _chart/templates

# fixup rbac for cluster-wide
# this takes all the rules from the `Role`  and appends them to the `ClusterRole`
yq -M -e 'select((.kind == "ClusterRole") and (.metadata.name == "manager-role"))' config/rbac/role.yaml > config/rbac/cluster-wide-role.yaml
yq -M -e 'select((.kind == "Role") and (.metadata.name == "manager-role")).rules' config/rbac/role.yaml | sed 's/^/  /' >> config/rbac/cluster-wide-role.yaml
# this effectively deletes the `Role`
mv config/rbac/cluster-wide-role.yaml config/rbac/role.yaml

# remove the RoleBinding
yq -i -e 'del(select(.kind == "RoleBinding"))' config/rbac/role_binding.yaml

cd config/manager && kustomize edit set image operator=${IMG}
cd -
cd config/default && kustomize edit set namespace ${DEPLOYMENT_NAMESPACE}
cd -

cat << 'EOF' >> config/manager/kustomization.yaml

patches:
  - patch: |-
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: controller-manager
        namespace: system
      spec:
        template:
          spec:
            containers:
            - name: manager
              env:
                - name: WATCH_NAMESPACE
                  $patch: delete
  - patch: |-
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: controller-manager
        namespace: system
      spec:
        template:
          spec:
            containers:
            - name: manager
              env:
                - name: WATCH_NAMESPACE
                  value: ""
EOF


kustomize build config/default > _chart/templates/operator.yaml
# crds are included in the above
# kustomize build config/crd > _chart/crds/crds.yaml

# move into asset dir
cd "${TMP}/${CHART}/_chart"

# remove the Namespace doc now; the gotmpl substitution below makes the yaml unparseable for yq
yq -i -e 'del(select(.kind == "Namespace"))' templates/operator.yaml

# a blanket rewrite is undesirable: it would also convert the leader-election Role,
# which does not need cluster-wide access
#sed -i 's/kind: Role/kind: ClusterRole/g' templates/operator.yaml
#sed -i 's/kind: RoleBinding/kind: ClusterRoleBinding/g' templates/operator.yaml


#sed -i "s/namespace: ${DEPLOYMENT_NAMESPACE}/namespace: {{ .Release.Namespace }}/g" templates/operator.yaml
sed -i "s/${DEPLOYMENT_NAMESPACE}/{{ .Release.Namespace }}/g" templates/operator.yaml


# prepare Chart.yaml
cat << EOF > Chart.yaml
apiVersion: v2
name: ${CHART}
description: A Helm chart for ${NAME}
type: application
version: ${VERSION%%.Final}
appVersion: "${VERSION%%.Final}"
EOF

cd "${SCRIPT_DIR}"
helm package "${TMP}/${CHART}/_chart"
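A hypothetical install of the resulting package (release name, packaged chart filename and target namespace are placeholders):

helm install infinispan-operator ./infinispan-operator-2.4.3.tgz \
  --namespace infinispan-operator --create-namespace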

@ryanemerson
Contributor

Is there a canonical way to determine which IMG version should be used for a given tag/checkout of the repo? It appears there isn't a direct alignment between app versions and operator versions...

Each operator version supports a range of Infinispan server versions, which is why there's no direct alignment between the two. The operator image always has the format quay.io/infinispan/operator:<tag>, so to replicate a particular Operator release you could do something like:

TAG=2.4.3.Final
git checkout $TAG
IMG=quay.io/infinispan/operator:$TAG

You can view all released operator images at https://quay.io/repository/infinispan/operator?tab=tags

@travisghansen
Author

Thanks, I found those images later and have exactly the logic you suggested in the script.

The main hacky/tricky thing in the script is that I can't simply rename Role to ClusterRole etc., as the names collide with the Cluster* assets that already exist. So I had to do a sort of crude merge for now, which could certainly be improved if I knew yq better... but so far so good. I will test some deployments today, but the logs are currently clean, so I expect that to go well.

@travisghansen
Author

Testing has gone well, and at a minimum basic functionality works. I haven't tested all the different resources, but an Infinispan and a Cache resource both seem to work.
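For anyone repeating that smoke test, a minimal pair of resources along the lines of the operator's getting-started examples (names and replica count are placeholders):

apiVersion: infinispan.org/v1
kind: Infinispan
metadata:
  name: example-infinispan
spec:
  replicas: 1
---
apiVersion: infinispan.org/v2alpha1
kind: Cache
metadata:
  name: example-cache
spec:
  clusterName: example-infinispan
  name: mycache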
