A Canary Deployment consists of deploying a new version of a component and sending part of the traffic to it. If any failure is detected, traffic is routed back to the original version.
This project demonstrates how to do that dynamically with Camunda 8, without stopping the server. It demonstrates the feature on two different artifacts:
- on a service task: deploy a new version of a service task and send 20% of the traffic to it
- on a process: deploy a new version of a process and send 15% of the traffic to it
Neither case requires stopping the server or changing the application.
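To make the traffic split concrete, here is a minimal sketch (not the project's actual code; the class and method names are illustrative) of a percentage-based canary decision:

```java
import java.util.concurrent.ThreadLocalRandom;

// Minimal sketch of a percentage-based canary decision (illustrative names).
public class CanarySplit {

    // Returns true when this request should be routed to the new version.
    static boolean routeToCanary(int canaryPercentage) {
        return ThreadLocalRandom.current().nextInt(100) < canaryPercentage;
    }

    public static void main(String[] args) {
        // Send 20% of the traffic to the new service task / process version.
        String target = routeToCanary(20) ? "new version" : "original version";
        System.out.println("Routing to the " + target);
    }
}
```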
This project builds an application that contains multiple functions, each one demonstrating a different step.
A Kubernetes folder (k8) contains all the Kubernetes manifests needed to start the components. They all use the same image, but with different configurations: each deployment in the k8 folder starts one pod running a single function, as sketched below.
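For illustration, one such deployment could look like this; the environment variable name and its value are assumptions, not the project's actual configuration:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: canary-loadbalancer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: canary-loadbalancer
  template:
    metadata:
      labels:
        app: canary-loadbalancer
    spec:
      containers:
        - name: canarydeployment
          image: ghcr.io/camunda-community-hub/canarydeployment:latest
          env:
            # Assumed variable: selects which function this pod runs.
            - name: CANARY_FUNCTION
              value: "loadbalancer"
```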
The load balancer function offers the same API as the Zeebe REST gateway, but rules can be configured to load-balance traffic between different versions of the same process, by percentage.
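As a hedged sketch of what such a rule could look like (not the project's actual rule format), the load balancer could pick the process version before forwarding the create-instance command with the official Zeebe Java client:

```java
import io.camunda.zeebe.client.ZeebeClient;
import java.util.concurrent.ThreadLocalRandom;

// Hypothetical rule structure: names and fields are assumptions for illustration.
record CanaryRule(String bpmnProcessId, int stableVersion, int canaryVersion, int canaryPercentage) {}

public class VersionRouter {
    public static void main(String[] args) {
        CanaryRule rule = new CanaryRule("order-process", 1, 2, 15);

        // Send canaryPercentage% of the traffic to the canary version.
        int version = ThreadLocalRandom.current().nextInt(100) < rule.canaryPercentage()
                ? rule.canaryVersion()
                : rule.stableVersion();

        try (ZeebeClient client = ZeebeClient.newClientBuilder()
                .gatewayAddress("localhost:26500") // assumed local gateway address
                .usePlaintext()
                .build()) {
            // Create the instance on the selected version of the process.
            client.newCreateInstanceCommand()
                    .bpmnProcessId(rule.bpmnProcessId())
                    .version(version)
                    .send()
                    .join();
        }
    }
}
```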
The Ruby application simulates a customer application that creates process instances.
The workers function defines the different workers.
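As a reference for what such a worker looks like, here is a minimal sketch using the official Zeebe Java client (the job type and gateway address are assumptions, not necessarily the project's values):

```java
import io.camunda.zeebe.client.ZeebeClient;

public class WorkerExample {
    public static void main(String[] args) {
        try (ZeebeClient client = ZeebeClient.newClientBuilder()
                .gatewayAddress("localhost:26500") // assumed local gateway address
                .usePlaintext()
                .build()) {
            // Open a worker; "payment-service" is an illustrative job type.
            client.newWorker()
                    .jobType("payment-service")
                    .handler((jobClient, job) -> {
                        // ... business logic ...
                        jobClient.newCompleteCommand(job.getKey()).send().join();
                    })
                    .open();

            // Keep the worker running.
            Thread.sleep(Long.MAX_VALUE);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```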
Rebuild the image via:
```shell
mvn clean install
docker build -t pierre-yves-monnet/canarydeployment:1.0.1 .
docker tag pierre-yves-monnet/canarydeployment:1.0.1 ghcr.io/camunda-community-hub/canarydeployment:1.0.1
docker push ghcr.io/camunda-community-hub/canarydeployment:1.0.1
docker tag pierre-yves-monnet/canarydeployment:1.0.1 ghcr.io/camunda-community-hub/canarydeployment:latest
docker push ghcr.io/camunda-community-hub/canarydeployment:latest
```
The Docker image is built using the Dockerfile present at the root level.
Push the image to ghcr.io/camunda-community-hub/canarydeployment.
This section explains how to demonstrate a canary deployment step by step.
For all scenarios, a Camunda 8.6 platform is up and running. The values.yaml used is:
```yaml
global:
  identity:
    auth:
      enabled: false
identity:
  enabled: false
identityPostgresql:
  enabled: false
prometheusServiceMonitor:
  enabled: true
```
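Assuming the platform is installed from Camunda's public Helm repository, the values file can be applied like this (release and namespace names may differ in your setup):

```shell
helm repo add camunda https://helm.camunda.io
helm repo update
helm install camunda camunda/camunda-platform -f values.yaml
```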
A Grafana page is started and accessible:
```shell
kubectl get svc -n default
NAME                           TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes                     ClusterIP      34.118.224.1     <none>        443/TCP        53d
metrics-grafana                ClusterIP      34.118.239.223   <none>        80/TCP         53d
metrics-grafana-loadbalancer   LoadBalancer   34.118.226.41    34.23.97.79   80:32264/TCP   53d
```
Check the external IP and open a browser on it (http://34.23.97.79).
Access the procedure README.md