This repository contains the deployments and descriptions for the DOME-Marketplace. The development instance of the DOME-Marketplace runs on a managed Kubernetes cluster provided by Ionos.
A description of the deployed architecture and of the main flows inside the system can be found here. The demo scenario, showcasing some of the current features, can be found under demo.
The GitOps approach aims for maximum automation and allows the full setup to be reproduced. For more information about GitOps, see:
- RedHat Pros and Cons - https://www.redhat.com/architect/gitops-implementation-patterns
- ArgoCD - https://argo-cd.readthedocs.io/en/stable/
- FluxCD - https://fluxcd.io/
⚠️ All documentation and tooling assumes Linux and is only tested there (more specifically, Ubuntu). If you use another system, please find proper replacements that fit your needs.
In order to set up the DOME-Marketplace, it is recommended to install the following tools before starting:
- ionosctl-cli to interact with the Ionos-APIs
- jq as a JSON processor to ease working with the client outputs
- kubectl for debugging and inspecting the resources in the cluster
- kubeseal for sealing secrets using asymmetric cryptography
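To verify the installation before starting, the tools can be checked as follows (assuming all four are already on your PATH):
ionosctl version
jq --version
kubectl version --client
kubeseal --version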
⚠️ The cluster is created on the Ionos-Cloud, therefore you should have your account data available.
- In order to create the cluster, login to Ionos:
ionosctl login
- Create a Datacenter, which serves as the logical bracket around the nodes in your cluster:
export DOME_DATACENTER_ID=$(ionosctl datacenter create --name DOME-Marketplace -o json | jq -r '.items[0].id')
# wait for the datacenter to be "AVAILABLE"
watch ionosctl datacenter get -i $DOME_DATACENTER_ID
- Create the Kubernetes Cluster and wait for it to be "ACTIVE":
export DOME_K8S_CLUSTER_ID=$(ionosctl k8s cluster create --name DOME-Marketplace-K8S -o json | jq -r '.items[0].id')
watch ionosctl k8s cluster get -i $DOME_K8S_CLUSTER_ID
- Create the initial nodepool inside your cluster and datacenter:
export DOME_K8S_DEFAULT_NODEPOOL_ID=$(ionosctl k8s nodepool create --cluster-id $DOME_K8S_CLUSTER_ID --name default-pool --node-count 2 --ram 8192 --storage-size 10 --datacenter-id $DOME_DATACENTER_ID --cpu-family "INTEL_SKYLAKE" -o json | jq -r '.items[0].id')
# wait for the pool to be available
watch ionosctl k8s nodepool get --nodepool-id $DOME_K8S_DEFAULT_NODEPOOL_ID --cluster-id $DOME_K8S_CLUSTER_ID
- Following the recommendations from the Ionos-FAQ, we also dedicate a specific nodepool to the ingress-controller:
export DOME_K8S_INGRESS_NODEPOOL_ID=$(ionosctl k8s nodepool create --cluster-id $DOME_K8S_CLUSTER_ID --name ingress-pool --node-count 1 --datacenter-id $DOME_DATACENTER_ID --cpu-family "INTEL_SKYLAKE" --labels nodepool=ingress -o json | jq -r '.items[0].id')
# wait for the pool to be available
watch ionosctl k8s nodepool get --nodepool-id $DOME_K8S_INGRESS_NODEPOOL_ID --cluster-id $DOME_K8S_CLUSTER_ID
- Retrieve the kubeconfig to access the cluster:
ionosctl k8s kubeconfig get --cluster-id $DOME_K8S_CLUSTER_ID > dome-k8s-config.json
# Exporting the file path to $KUBECONFIG will make it the default config for kubectl.
# If you work with multiple clusters, either extend your existing config or use the file inline with the --kubeconfig flag.
export KUBECONFIG=$(pwd)/dome-k8s-config.json
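As a quick sanity check, assuming the kubeconfig was exported as above, the nodes of both pools should now be listed:
kubectl get nodes
# all nodes should report STATUS "Ready"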
💡 Even though the cluster creation was done on Ionos, the following steps apply to all Kubernetes installations (tested version is 1.26.7). It is not required to use Ionos for that.
In order to provide GitOps capabilities, we use ArgoCD. To set up the tool, we need two manual steps to deploy ArgoCD, as described in the manual.
⚠️ The following steps expect kubectl to be already connected to the cluster, as described in cluster-creation step 6.
- Create a namespace for argocd. For easier configuration, we use Argo's default argocd:
kubectl create namespace argocd
- Deploy argocd with extensions enabled:
kubectl apply -k ./extension/ -n argocd
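Before continuing, it is worth confirming that the ArgoCD components start up; with the upstream manifests, the pods land in the argocd namespace:
kubectl get pods -n argocd
# argocd-server, argocd-repo-server, argocd-application-controller etc. should reach STATUS "Running"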
From now on, every deployment should be managed by ArgoCD through Applications.
To properly separate concerns, the first application to be deployed will be the namespaces. It will create all namespaces defined in the ionos/namespaces folder.
Apply the application via:
kubectl apply -f applications/namespaces.yaml -n argocd
# wait for it to be SYNCED and Healthy
watch kubectl get applications -n argocd
Using GitOps means every deployed resource is represented in a git-repository. While this is not a problem for most resources, secrets need to be handled differently. We use the bitnami/sealed-secrets project for that. It uses asymmetric cryptography for creating secrets and decrypts them only inside the cluster. The sealed-secrets controller will be the first application deployed using ArgoCD.
Since we want to use the Helm-Charts and keep the values inside our git-repository, we face the problem that ArgoCD only supports values-files inside the same repository as the chart (as of now, there is an open PR to add that functionality -> PR#8322). To work around that shortcoming, we are using "wrapper charts". A wrapper-chart consists of a Chart.yaml with a dependency to the actual chart. Besides that, we have a values.yaml with our specific overrides. See the sealed-secrets folder as an example.
Apply the application via:
kubectl apply -f applications/sealed-secrets.yaml -n argocd
# wait for it to be SYNCED and Healthy
watch kubectl get applications -n argocd
Once it is deployed, secrets can be "sealed" via:
kubeseal -f mysecret.yaml -w mysealedsecret.yaml --controller-namespace sealed-secrets --controller-name sealed-secrets
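As a worked example, a minimal sketch assuming a hypothetical secret my-secret in the marketplace namespace: the plain manifest is generated locally and only the sealed version is committed.
# generate the plain secret manifest locally (never commit this file)
kubectl create secret generic my-secret -n marketplace --from-literal=password=s3cr3t --dry-run=client -o yaml > mysecret.yaml
# seal it; mysealedsecret.yaml is safe to put into git
kubeseal -f mysecret.yaml -w mysealedsecret.yaml --controller-namespace sealed-secrets --controller-name sealed-secrets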
In order to access applications inside the cluster, an Ingress-Controller is required. We use the NGINX-Ingress-Controller here.
The configuration expects a nodepool labeled with nodepool=ingress in order to save IP addresses. If you followed cluster-creation, such a pool already exists.
Apply the application via:
kubectl apply -f applications/ingress.yaml -n argocd
# wait for it to be SYNCED and Healthy
watch kubectl get applications -n argocd
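Once the application is healthy, the ingress-controller should expose an external IP. A minimal check, without assuming the exact target namespace (see applications/ingress.yaml for it):
kubectl get services -A | grep -i ingress
# the controller's LoadBalancer service should show an EXTERNAL-IP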
In order to automatically create DNS entries for Ingress-Resources, External-DNS is used.
💡 The dome-marketplace.org|io|eu|com domains are currently using nameservers provided by AWS Route53. If you manage the domains somewhere else, follow the recommendations in the External-DNS documentation.
- External-DNS watches the ingress objects and creates A-Records for them. To do so, it needs the ability to access the AWS APIs.
1. Create the IAM Policy according to the documentation.
2. Create a user in AWS IAM and assign the policy.
3. Create an access key and store it in a file of the following format:
[default]
aws_secret_access_key = THE_KEY
aws_access_key_id = THE_KEY_ID
4. Base64-encode the file (one way to do this is shown after this list) and put it into a secret of the following format:
apiVersion: v1
kind: Secret
metadata:
  name: aws-access-key
  namespace: infra
data:
  credentials: W2RlZmF1bHRdCmF3c19zZWNyZXRfYWNjZXNzX2tleSA9IFRIRV9LRVkKYXdzX2FjY2Vzc19rZXlfaWQgPSBUSEVfS0VZX0lE
5. Seal the secret and commit the sealed secret. ⚠️ Never put the plain secret into git.
6. Apply the application:
kubectl apply -f applications/external-dns.yaml -n argocd
# wait for it to be SYNCED and Healthy
watch kubectl get applications -n argocd
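For step 4, assuming the file from step 3 was saved as credentials, the base64 value can be produced with GNU coreutils (-w0 disables line-wrapping):
base64 -w0 credentials
# paste the output into the credentials field of the secret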
In addition to the dns-entries, we also need valid certificates for the ingress. The certificates will be provided by Let's Encrypt. To automate creation and renewal of the certificates, Cert-Manager is used.
- In order to follow the ACME protocol, Cert-Manager also needs the ability to create proper DNS entries. Therefore we have to provide it with the AWS account used by External-DNS, too.
1. Put key and key-id into the following template:
apiVersion: v1
kind: Secret
metadata:
  name: aws-access-key
  namespace: cert-manager
stringData:
  key: "THE_KEY"
  keyId: "THE_KEY_ID"
2. Seal the secret and commit the sealed secret. ⚠️ Never put the plain secret into git.
3. Update the issuer with information about the hosted zone managing your domain and commit it.
4. Apply the application:
kubectl apply -f applications/cert-manager.yaml -n argocd
# wait for it to be SYNCED and Healthy
watch kubectl get applications -n argocd
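Once Cert-Manager is healthy, certificate issuance can be observed through the custom resources it installs:
kubectl get clusterissuers
kubectl get certificates -A
# issued certificates should reach READY "True"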
ArgoCD provides a nice GUI and a command-line tool to support the deployments. In order for them to work properly, an ingress and an auth-mechanism need to be configured.
Since ArgoCD is already running, we also use it to extend itself, by just providing an ingress-resource pointing towards its server. That way, we automatically get a proper URL through the previously configured External-DNS and Cert-Manager.
To seamlessly use ArgoCD, we use GitHub's OAuth to manage users for ArgoCD together with those accessing the repo.
- Register ArgoCD in GitHub, following the documentation.
- Put the secret into:
apiVersion: v1
kind: Secret
metadata:
  labels:
    app.kubernetes.io/part-of: argocd
  name: github-secret
  namespace: argocd
stringData:
  clientSecret: THE_CLIENT_SECRET
- Seal and commit it.
- Configure the organizations to be allowed in the configmap
- Configure user-roles in the rbac-configmap
- Apply the application:
kubectl apply -f applications/argocd.yaml -n argocd
# wait for it to be SYNCED and Healthy
watch kubectl get applications -n argocd
Log in to our ArgoCD.
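The command-line tool can then authenticate through the same GitHub OAuth via SSO. A sketch, assuming a hypothetical hostname argocd.dome-marketplace.org for the ingress created above:
argocd login argocd.dome-marketplace.org --sso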
In order to deploy a new application, follow the steps:
- Fork the repository and create a new branch.
- (OPTIONAL) Add a new namespace to the namespaces folder.
- Add either the helm-chart (see External-DNS as an example) or the plain manifests (see ArgoCD as an example) to a properly named folder under /ionos.
- Add your application to the /applications folder (a minimal sketch of such a manifest follows after this list).
- Create a PR and wait for it to be merged. The application will be automatically deployed afterwards.
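The sketch below assumes a hypothetical application my-app with its manifests in a hypothetical ionos/my-app folder; the repoURL is a placeholder for this repository's URL:
# applications/my-app.yaml - commit this file; ArgoCD picks it up after the merge
cat > applications/my-app.yaml <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/my-org/dome-gitops # placeholder: use this repository's URL
    targetRevision: main
    path: ionos/my-app
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated: {}
EOF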
For a detailed guide on how to deploy a new application, you can refer to the Integration Guide
Teams that wish to integrate their applications into DOME need access to the cluster in order to create the necessary secrets and monitor their application's resources. To grant this, a service account must be created for each team. The provided service account will have write permissions on secrets and sealed secrets, and read-only access to all other Kubernetes resources, limited to the application namespace.
To generate the service account and necessary roles, execute the following script:
Windows PowerShell
.\scripts\GenerateAccount.ps1 -templatePath .\scripts\templates -outputPath .\accounts -namespace <namespace> -server <cluster server url>
Shell
# chmod +x ./scripts/GenerateAccount.sh
./scripts/GenerateAccount.sh ./scripts/templates ./accounts <namespace> <server url>
Once executed, the script will create the resources defined in scripts/templates on the cluster. Additionally, the manifest files of the created resources will be available in the directory accounts/<namespace>. Specifically, the file at accounts/<namespace>/config/kube-config.yaml will contain the Kubernetes configuration which must be provided to the team to allow them to connect to the cluster.
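A team can verify its access using the generated config, for example:
# read access to the resources in the team's namespace
KUBECONFIG=accounts/<namespace>/config/kube-config.yaml kubectl get pods -n <namespace>
# write access is limited to secrets and sealed secrets
KUBECONFIG=accounts/<namespace>/config/kube-config.yaml kubectl apply -f mysealedsecret.yaml -n <namespace>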
In order to reduce the resource-usage and the number of deployments to maintain, the cluster supports Blue-Green Deployments. To integrate seamlessly with ArgoCD, the extension Argo Rollouts is used. The standard ArgoCD installation is extended via the argo-rollouts.yaml, the configmap-cmd.yaml and the dashboard-extension.yaml (to integrate with the ArgoCD dashboard).
Blue-Green Deployments on the cluster will be done through two mechanisms:
- the rollout-injecting-webhook automatically creates a Rollout for each deployment in enabled namespaces (currently "marketplace", see the rollout-webhook deployment) - read the doc for more information
- explicitly creating Rollout-Specs (see the official documentation)
⚠️ Be aware of how Blue-Green Rollouts work and their limitations. Since they create a second instance of the application, this is only suitable for stateless applications (as a Deployment-Resource should be). Stateful applications can lead to bad results like deadlocks. If the application takes care of data-migrations, configure it to not migrate before promotion and connect the new revision to another database. To disable the Rollout-Injection, annotate the deployment with wistefan/rollout-injecting-webhook: ignore
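A sketch of the two day-to-day commands, assuming a hypothetical deployment my-app in the marketplace namespace (promotion requires the kubectl argo rollouts plugin):
# opt a single deployment out of the automatic Rollout-Injection
kubectl annotate deployment my-app wistefan/rollout-injecting-webhook=ignore -n marketplace
# promote the new revision once it has been verified
kubectl argo rollouts promote my-app -n marketplace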