DOME-Marketplace GitOps

This repository contains the deployments and descriptions for the DOME-Marketplace. The development instance of the DOME-Marketplace runs on a managed Kubernetes cluster, provided by Ionos.

A description of the deployed architecture and of the main flows inside the system can be found here. The demo scenario, showcasing some of the current features, can be found under demo.

Setup

The GitOps approach aims for a maximum of automation and allows the full setup to be reproduced. For more information about GitOps, see:

  • RedHat Pros and Cons - https://www.redhat.com/architect/gitops-implementation-patterns
  • ArgoCD - https://argo-cd.readthedocs.io/en/stable/
  • FluxCD - https://fluxcd.io/

Preparation

⚠️ All documentation and tooling uses Linux (more specifically Ubuntu) and is only tested there. If you use another system, please find proper replacements that fit your needs.

In order to set up the DOME-Marketplace, it's recommended to install the following tools before starting:

  • ionosctl-cli to interact with the Ionos-APIs
  • jq as a JSON processor to ease working with the client outputs
  • kubectl for debugging and inspecting the resources in the cluster
  • kubeseal for sealing secrets using asymmetric cryptography

Cluster creation

⚠️ The cluster is created on the Ionos-Cloud, therefore you should have your account data available.

  1. In order to create the cluster, log in to Ionos:
    ionosctl login 
  2. Create a Datacenter as the logical bracket around the nodes in your cluster:
    export DOME_DATACENTER_ID=$(ionosctl datacenter create --name DOME-Production -o json | jq -r '.items[0].id')
    # wait for the datacenter to be "AVAILABLE"
    watch ionosctl datacenter get -i $DOME_DATACENTER_ID
    # the concrete IDs of the current production setup are recorded after each step for reference
    export DOME_DATACENTER_ID=db088b6a-4c61-4a47-8fe9-a4f1c5f916cc
  3. Create the Kubernetes cluster and wait for it to be "ACTIVE":
    export DOME_K8S_CLUSTER_ID=$(ionosctl k8s cluster create --name DOME-Production-K8S -o json | jq -r '.items[0].id')
    watch ionosctl k8s cluster get -i $DOME_K8S_CLUSTER_ID
    export DOME_K8S_CLUSTER_ID=5821f7a6-e4ee-48c8-9404-741fc0ae281d
  4. Create the initial nodepool inside your cluster and datacenter:
    export DOME_K8S_DEFAULT_NODEPOOL_ID=$(ionosctl k8s nodepool create --cluster-id $DOME_K8S_CLUSTER_ID --name default-pool --node-count 2 --ram 32768 --storage-size 40 --datacenter-id $DOME_DATACENTER_ID --cpu-family "INTEL_SKYLAKE"  -o json | jq -r '.items[0].id')
    # wait for the pool to be available
    watch ionosctl k8s nodepool get --nodepool-id $DOME_K8S_DEFAULT_NODEPOOL_ID --cluster-id $DOME_K8S_CLUSTER_ID
    export DOME_K8S_DEFAULT_NODEPOOL_ID=24ca1894-6b36-4e3f-9332-4bb00c3b9891 
  5. Following the recommendations from the Ionos-FAQ, we also dedicate a specific nodepool to the ingress-controller:
    export DOME_K8S_INGRESS_NODEPOOL_ID=$(ionosctl k8s nodepool create --cluster-id $DOME_K8S_CLUSTER_ID --name ingress-pool --node-count 1 --datacenter-id $DOME_DATACENTER_ID --cpu-family "INTEL_SKYLAKE" --labels nodepool=ingress -o json | jq -r '.items[0].id')
    # wait for the pool to be available
    watch ionosctl k8s nodepool get --nodepool-id $DOME_K8S_INGRESS_NODEPOOL_ID --cluster-id $DOME_K8S_CLUSTER_ID
    export DOME_K8S_INGRESS_NODEPOOL_ID=8668accc-b521-4c57-a283-33907af85726
  6. Retrieve the kubeconfig to access the cluster:
    ionosctl k8s kubeconfig get --cluster-id $DOME_K8S_CLUSTER_ID > dome-k8s-config.json
    # Exporting the file path to $KUBECONFIG will make it the default config for kubectl. 
    # If you work with multiple clusters, either extend your existing config or use the file inline with the --kubeconfig flag.
    export KUBECONFIG=$(pwd)/dome-k8s-config.json

GitOps Setup

💡 Even though the cluster creation was done on Ionos, the following steps apply to all Kubernetes installations (tested version is 1.26.7). It's not required to use Ionos for that.

In order to provide GitOps capabilities, we use ArgoCD. Setting up the tool requires two manual steps to deploy ArgoCD, as described in the manual:

⚠️ The following steps expect kubectl to be already connected to the cluster, as described in cluster creation, step 6.

  1. Create a namespace for argocd. For easier configuration, we use Argo's default argocd:
    kubectl create namespace argocd
  2. Deploy argocd with extensions enabled:
    kubectl apply -k ./extension/ -n argocd

From now on, every deployment should be managed by ArgoCD through Applications.
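
As a hedged sketch of what such an Application resource looks like (the name, repository URL and path here are placeholders, not this repository's actual manifests):

    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: my-component                # placeholder name
      namespace: argocd
    spec:
      project: default
      source:
        repoURL: https://github.com/example-org/example-gitops   # placeholder repository
        targetRevision: main
        path: ionos/my-component        # folder containing the manifests or wrapper chart
      destination:
        server: https://kubernetes.default.svc
        namespace: my-component
      syncPolicy:
        automated: {}                   # keep the cluster in sync with git automatically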

Namespaces

To properly separate concerns, the first application to be deployed will be the namespaces. It will create all namespaces defined in the ionos/namespaces folder.

Apply the application via:

    kubectl apply -f applications/namespaces.yaml -n argocd

Sealed Secrets

Using GitOps means every deployed resource is represented in a git-repository. While this is not a problem for most resources, secrets need to be handled differently. We use the bitnami/sealed-secrets project for that. It uses asymmetric cryptography to create secrets that can only be decrypted inside the cluster. The sealed-secrets controller will be the first application deployed using ArgoCD.

Since we want to use the Helm-Charts and keep the values inside our git-repository, we run into the problem that ArgoCD only supports values-files inside the same repository as the chart (as of now, there is an open PR to add that functionality -> PR#8322). To work around that shortcoming, we use "wrapper charts". A wrapper chart consists of a Chart.yaml with a dependency on the actual chart and a values.yaml with our specific overrides. See the sealed-secrets folder as an example.
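
A minimal sketch of such a wrapper chart's Chart.yaml, assuming the upstream sealed-secrets chart (the wrapper name and the pinned version are placeholders):

    apiVersion: v2
    name: sealed-secrets-wrapper
    version: 0.0.1
    dependencies:
      - name: sealed-secrets
        version: 2.7.0                                       # placeholder chart version
        repository: https://bitnami-labs.github.io/sealed-secrets

The values.yaml next to it then holds only the overrides, nested under the dependency's name (a top-level sealed-secrets: key).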

Apply the application via:

    kubectl apply -f applications/sealed-secrets.yaml -n argocd
    # wait for it to be SYNCED and Healthy
    watch kubectl get applications -n argocd

Once it's deployed, secrets can be "sealed" via:

    kubeseal -f mysecret.yaml -w mysealedsecret.yaml --controller-namespace sealed-secrets  --controller-name sealed-secrets
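
If you do not have a plain secret manifest yet, one way to produce it locally (mysecret and its contents are placeholders):

    # --dry-run=client renders the manifest without touching the cluster
    kubectl create secret generic mysecret --from-literal=password=changeme --dry-run=client -o yaml > mysecret.yaml
    # after sealing, delete the plain file and commit only mysealedsecret.yaml
    rm mysecret.yaml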

Ingress controller

In order to access applications inside the cluster, an Ingress-Controller is required. We use the NGINX-Ingress-Controller here.

The configuration expects a nodepool labeled with ingress in order to save IP addresses. If you followed the cluster creation, such a pool already exists.
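
A hedged sketch of the corresponding override in the wrapper chart's values.yaml, assuming the ingress-nginx chart (key names follow that chart's conventions):

    ingress-nginx:
      controller:
        nodeSelector:
          nodepool: ingress    # schedule the controller only on the dedicated ingress pool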

Apply the application via:

    kubectl apply -f applications/ingress.yaml -n argocd
    # wait for it to be SYNCED and Healthy
    watch kubectl get applications -n argocd

External-DNS

In order to automatically create DNS entries for Ingress-Resources, External-DNS is used.

💡 The dome-marketplace.org|io|eu|com domains are currently using nameservers provided by AWS Route53. If you manage the domains somewhere else, follow the recommendations in the External-DNS documentation.

External-DNS watches the ingress objects and creates A-Records for them. To do so, it needs the ability to access the AWS APIs.
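
As a rough sketch of the relevant configuration, assuming the kubernetes-sigs external-dns chart (the domain filter, owner id and secret reference are illustrative, not the repository's actual values):

    external-dns:
      provider: aws
      sources:
        - ingress
      domainFilters:
        - dome-marketplace.org          # restrict management to the relevant hosted zone(s)
      txtOwnerId: dome-marketplace      # marks the records owned by this instance
      env:
        - name: AWS_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef:
              name: aws-access-key      # sealed secret holding the credentials
              key: keyId
        - name: AWS_SECRET_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: aws-access-key
              key: key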

Execute the instructions from /docs/EXTERNAL_DNS_IONOS.md.

Cert-Manager

In addition to the DNS entries, we also need valid certificates for the ingress. The certificates will be provided by Let's Encrypt. To automate creation and renewal of the certificates, Cert-Manager is used.

  1. In order to follow the ACME protocol, Cert-Manager also needs the ability to create proper DNS entries. Therefore, we have to provide the AWS account used by External-DNS, too:
    1. Put key and key-id into the following template:
    apiVersion: v1
    kind: Secret
    metadata:
      name: aws-access-key
      namespace: cert-manager
    stringData:
      key: "THE_KEY"
      keyId: "THE_KEY_ID"
    2. Seal the secret and commit the sealed secret. ⚠️ Never put the plain secret into git.
  2. Update the issuer with information about the hosted zone managing your domain and commit it (see the sketch after this list).
  3. Apply the application:
    kubectl apply -f applications/cert-manager.yaml -n argocd
    # wait for it to be SYNCED and Healthy
    watch kubectl get applications -n argocd
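
A hedged sketch of what such an issuer could look like, using Cert-Manager's ACME DNS01 solver for Route53 (the contact address, region and secret names are placeholders; the actual issuer lives in the repository):

    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt-prod
    spec:
      acme:
        email: admin@example.org                     # placeholder contact address
        server: https://acme-v02.api.letsencrypt.org/directory
        privateKeySecretRef:
          name: letsencrypt-account-key              # stores the ACME account key
        solvers:
          - dns01:
              route53:
                region: eu-central-1                 # placeholder region
                accessKeyID: THE_KEY_ID              # the key id from the secret template above
                secretAccessKeySecretRef:
                  name: aws-access-key
                  key: key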

Update ArgoCD

ArgoCD provides a nice GUI and a command-line tool to support the deployments. In order for them to work properly, an ingress and an auth-mechanism need to be configured.

Ingress

Since ArgoCD is already running, we also use it to extend itself, by just providing an ingress-resource pointing towards its server. That way, we automatically get a proper URL through the previously configured External-DNS and Cert-Manager.
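
A hedged sketch of such an ingress-resource (the host and issuer name are illustrative; argocd-server is the service name of a standard ArgoCD installation):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: argocd-server
      namespace: argocd
      annotations:
        cert-manager.io/cluster-issuer: letsencrypt-prod   # triggers certificate creation
    spec:
      ingressClassName: nginx                              # the controller installed above
      rules:
        - host: argocd.dome-marketplace.org                # placeholder host
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: argocd-server
                    port:
                      number: 80
      tls:
        - hosts:
            - argocd.dome-marketplace.org
          secretName: argocd-server-tls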

Auth

To seamlessly use ArgoCD, we use GitHub's OAuth, so the users of ArgoCD are managed together with those accessing the repo.

  1. Register ArgoCD in GitHub, following the documentation.
  2. Put the secret into:
    apiVersion: v1
    kind: Secret
    metadata:
      labels:
        app.kubernetes.io/part-of: argocd
      name: github-secret
      namespace: argocd
    stringData:
      clientSecret: THE_CLIENT_SECRET
  3. Seal and commit it.
  4. Configure the organizations to be allowed in the configmap (see the sketch after this list).
  5. Configure user-roles in the rbac-configmap.
  6. Apply the application:
    kubectl apply -f applications/argocd.yaml -n argocd
    # wait for it to be SYNCED and Healthy
    watch kubectl get applications -n argocd
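
As a hedged illustration of step 4, the GitHub connector in ArgoCD's argocd-cm configmap could look roughly like this (the external URL, client id and organization name are placeholders; the client secret is referenced from the sealed secret above):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: argocd-cm
      namespace: argocd
      labels:
        app.kubernetes.io/part-of: argocd
    data:
      url: https://argocd.dome-marketplace.org            # placeholder external URL
      dex.config: |
        connectors:
          - type: github
            id: github
            name: GitHub
            config:
              clientID: THE_CLIENT_ID
              # ArgoCD resolves $<secret-name>:<key> from secrets labeled as part of argocd
              clientSecret: $github-secret:clientSecret
              orgs:
                - name: example-org                       # placeholder organization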

Log in to our ArgoCD.

Deploy a new application

In order to deploy a new application, follow the steps:

  1. Fork the repository and create a new branch.
  2. (OPTIONAL) Add a new namespace to the namespaces
  3. Add either the helm-chart (see External-DNS as an example) or the plain manifests (see ArgoCD as an example) to a properly named folder under /ionos
  4. Add your application to the /applications folder.
  5. Create a PR and wait for it to be merged. The application will be automatically deployed afterwards.

Blue-Green Deployments

In order to reduce the resource-usage and the number of deployments to maintain, the cluster supports Blue-Green Deployments. To integrate seamlessly with ArgoCD, the extension Argo Rollouts is used. The standard ArgoCD installation is extended via the argo-rollouts.yaml, the configmap-cmd.yaml and the dashboard-extension.yaml (to integrate with the ArgoCD dashboard).

Blue-Green Deployments on the cluster will be done through two mechanisms:

⚠️ Be aware of how Blue-Green Rollouts work and their limitations. Since they create a second instance of the application, this is only suitable for stateless applications (as a Deployment-Resource should be). Stateful applications can lead to bad results like deadlocks. If the application takes care of data migrations, configure it not to migrate before promotion and connect the new revision to another database. To disable the Rollout-Injection, annotate the deployment with wistefan/rollout-injecting-webhook: ignore
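
For reference, a minimal Argo Rollouts blue-green strategy looks roughly like this (app, service and image names are placeholders; on this cluster the injection is handled by the webhook mentioned above):

    apiVersion: argoproj.io/v1alpha1
    kind: Rollout
    metadata:
      name: my-app
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
            - name: my-app
              image: example/my-app:latest     # placeholder image
      strategy:
        blueGreen:
          activeService: my-app-active         # service serving live traffic
          previewService: my-app-preview       # service for testing the new revision
          autoPromotionEnabled: false          # wait for manual promotion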