Description: In this lab, participants will learn how to use Deployments in Kubernetes, and will practise rolling out new versions of an application as well as rolling back.
Duration: ±45m
At the end of this lab, each participant will be able to roll out and roll back a new version of an application with a Deployment object, both imperatively and declaratively.
- ✅ Look at your application using your landing zone, eg: https://sokube-k8s-training.hidora.com/YOURNAMESPACE/.
- ✅ Is it still running?
- ✅ Can you diagnose what happened?
kubectl describe pod my-simple-todo-pod
- ✅ Is the application (POD) coming back?
Our application is still in early development and some areas are still buggy. In particular, we know our application sometimes slows down unexpectedly until it becomes totally unusable. Kubernetes has a built-in mechanism to decide on POD status (and restart non-responsive applications): liveness probes.
From the documentation:
The kubelet uses liveness probes to know when to restart a container. For example, liveness probes could catch a deadlock, where an application is running, but unable to make progress. Restarting a container in such a state can help to make the application more available despite bugs.
In a 🔥 hurry 🔥 and while the issue is investigated, our application developers have introduced a new HTTP REST endpoint (available at /health) in the application to allow for an external probe.
- ✅ Update the image of your POD to sokubedocker/simple-todo:1.1, using the declarative method (remember we used a YAML in LAB-K8S-02). Solution here.
- ✅ Try the application's new REST service with your landing zone's URL, eg: https://sokube-k8s-training.hidora.com/YOURNAMESPACE/health. What does it answer?
Let's use what we've learned about liveness probes and implement one leveraging this new REST endpoint.
- ✅ Complement the previous POD declaration with the liveness probe specs below and apply it. Solution here. Try your new application.
- The check is done periodically every 5s
- The HTTP url to use is /health on port 8080
- A request taking more than 1s means our application can be considered non-responsive, and must be restarted
- We want to avoid deciding liveness while the application is starting, so we'll give the application a grace period of 5s before starting the probe
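The constraints above map directly onto the standard `livenessProbe` fields. Here is a minimal sketch of the relevant POD spec excerpt — the container name `simple-todo` is an assumption, keep the one from your own YAML:

```yaml
# Excerpt only — merge into the container definition of your existing POD YAML.
spec:
  containers:
    - name: simple-todo                # assumption: use your own container name
      image: sokubedocker/simple-todo:1.1
      ports:
        - containerPort: 8080
      livenessProbe:
        httpGet:
          path: /health                # the new REST endpoint
          port: 8080
        periodSeconds: 5               # check periodically every 5s
        timeoutSeconds: 1              # a request taking more than 1s fails the probe
        initialDelaySeconds: 5         # grace period of 5s before the first probe
```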
Is our liveness probe solving the problem? The instructor will remotely make your application slow down suddenly and unexpectedly.
- ✅ Watch your pod:
watch kubectl get pod simple-todo-pod --namespace=YOURNAMESPACE
- ✅ Is it restarting?
- ✅ Check your application with your landing zone URL, eg: https://sokube-k8s-training.hidora.com/YOURNAMESPACE/. Is it available?
Deployments are the controlled way to express the desired state of your applications and let the Kubernetes system handle the changes needed to bring the actual state in line with it.
First, let's cleanup our environment:
- ✅ Kill the previously created POD.
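The cleanup can be done with `kubectl delete`, using the pod name from the earlier steps (adapt the name if yours differs):

```shell
# Delete the POD created in the previous exercises
kubectl delete pod my-simple-todo-pod
```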
✅ Open https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment and read the example deployment specification.
✅ Create a deployment inside your namespace that will match the following specifications (solution here):
- name is my-todo-deployment
- 1 replica of a POD that has the label app:simple-todo
- POD's specification remains the same:
- image to use for the POD's container is sokubedocker/simple-todo:1.1
- port is 8080
- labels must have the key:value pair app:simple-todo
- liveness probe must be defined
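Putting the requirements above together, the manifest could look like the sketch below. The container name and the probe values mirror the earlier POD spec; the container name itself is an assumption:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-todo-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: simple-todo
  template:
    metadata:
      labels:
        app: simple-todo
    spec:
      containers:
        - name: simple-todo            # assumption: pick your own container name
          image: sokubedocker/simple-todo:1.1
          ports:
            - containerPort: 8080
          livenessProbe:               # same probe as defined for the standalone POD
            httpGet:
              path: /health
              port: 8080
            periodSeconds: 5
            timeoutSeconds: 1
            initialDelaySeconds: 5
```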
✅ Use the kubectl create command to create your deployment.
kubectl create -f <filename>
✅ List the deployments in your namespace
kubectl get deployment
Look at the Ready column:
- ✅ How many PODs should the deployment have?
- ✅ How many are ready?
Go to your landing zone URL, eg: https://sokube-k8s-training.hidora.com/YOURNAMESPACE/.
- ✅ Look at the footer section. What has changed in the hostname?
Development is going full speed and our team has come up with a much-awaited feature: task completion. They've just released version 2.0. We'll use our deployment to perform an upgrade to this version.
- ✅ Modify the previous deployment declaration and update the image to sokubedocker/simple-todo:2.0 (solution here)
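Declaratively, the change is made by editing the manifest and re-applying it; the imperative equivalent is `kubectl set image`. The file name and the container name `simple-todo` below are assumptions — use your own:

```shell
# Declarative: re-apply the edited manifest
kubectl apply -f my-todo-deployment.yaml

# Imperative alternative (container name assumed to be "simple-todo")
kubectl set image deployment/my-todo-deployment simple-todo=sokubedocker/simple-todo:2.0
```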
- ✅ Check the status of our deployment object:
kubectl rollout status deployment my-todo-deployment
- ✅ Check the history of our deployment object (note: our target cluster has limited the number of revisions to 2):
kubectl rollout history deployment my-todo-deployment
Go to your landing zone URL, eg: https://sokube-k8s-training.hidora.com/YOURNAMESPACE/, and play a bit with the new tasks completion feature. Imagine that this version contains a major issue and we need to revert to the previous 1.1 version.
- ✅ Perform a deployment rollback:
kubectl rollout undo deployment my-todo-deployment
One of the roles of the Deployment object (and the controller that handles it) is to maintain a desired number of PODs (called replicas).
In the meantime, our todo application is gaining attention and more and more users are starting to use it. Since our application is still affected by the unresponsiveness issue, a couple of customers have noticed out-of-service responses from our application (while it was restarting). We want to make sure we permanently have 2 instances of the application running.
- ✅ Modify the replicas field in the deployment YAML so that 2 instances of our application in version 1.1 run permanently (solution here)
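Only the `replicas` field needs to change; the Deployment controller then starts a second POD to match the desired state:

```yaml
# Excerpt of the deployment spec — only this field changes
spec:
  replicas: 2    # was 1; the controller creates an additional POD
```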
Look at the footer section and perform a couple of browser refreshes...
- ✅ What is changing in the hostname?
- ✅ Try to modify the todo list, and perform some refreshes. What is the main architectural problem of our solution? (tip: stateless applications vs stateful applications)
- ✅ Scale the application down to 1 replica, keeping version 2.0, while we revisit the architecture to address our application's statefulness problem
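Scaling down can be done imperatively, or by setting `replicas` back to 1 in the YAML and re-applying it:

```shell
# Imperative: set the desired replica count on the deployment
kubectl scale deployment my-todo-deployment --replicas=1
```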