Workloads in Kubernetes are higher-level objects that manage Pods or other higher-level objects.
In all cases a Pod Template is included and acts as the base tier of management. Pod Templates are used by controllers to create new Pods.
You can create a simple Pod object using either the declarative or the imperative mode:
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
kubectl run init-demo --generator=run-pod/v1 --image=nginx
(on recent kubectl versions the --generator flag has been removed, so kubectl run init-demo --image=nginx is enough)
A ReplicaSet’s purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often used to guarantee the availability of a specified number of identical Pods.
Three main fields are required:
- replicas: how many Pods the ReplicaSet should be maintaining
- selector: specifies how to identify the Pods that will be managed by the ReplicaSet (this selector lives outside the Pod Template); it is usually the same as the labels set within the template
- template: a Pod Template specifying the data of the new Pods the ReplicaSet should create to meet the replicas criteria
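Putting the three fields together, a minimal ReplicaSet manifest could look like the following sketch (the name, labels, and image are illustrative):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: rs-example          # illustrative name
spec:
  replicas: 3               # how many Pods to maintain
  selector:
    matchLabels:
      app: nginx            # must match the template labels below
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```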
Deployments are a declarative method of managing Pods via ReplicaSets: Deployments manage ReplicaSets, which in turn manage Pods.
Deployments provide rollback functionality and update control; these updates are tracked through a pod-template-hash label.
The pod-template-hash label ensures that the child ReplicaSets of a Deployment do not overlap. It is generated by hashing the PodTemplate of the ReplicaSet and using the resulting hash as the label value, which is added to the ReplicaSet selector, the Pod template labels, and any existing Pods that the ReplicaSet might have.
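The rollback side of this can be driven with kubectl rollout; a sketch, assuming a Deployment named nginx-deploy (the name and revision number are placeholders):

```shell
# view the recorded revisions of a Deployment
kubectl rollout history deployment/nginx-deploy

# roll back to the previous revision
kubectl rollout undo deployment/nginx-deploy

# roll back to a specific revision
kubectl rollout undo deployment/nginx-deploy --to-revision=2
```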
In addition to the ReplicaSet required fields, two fields control the Deployment's update behavior:
- revisionHistoryLimit: the number of previous iterations of the Deployment to retain (for rollback)
- strategy: describes the method of updating the Pods, based on its type:
  - Recreate: all existing Pods are killed before the new ones are created (also known as a Big Bang deployment)
  - RollingUpdate: cycles through updating the Pods according to two parameters:
    - maxSurge: how many ADDITIONAL replicas we want to spin up while updating
    - maxUnavailable: how many replicas may be UNAVAILABLE during the update process
Example: let's say you're resource-constrained and can only support 3 replicas of the Pod at any one time. You could set maxSurge to 0 and maxUnavailable to 1. This would cycle through updating one Pod at a time without spinning up additional Pods.
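That constraint could be expressed in the Deployment spec like this (a fragment showing only the relevant fields; revisionHistoryLimit value is illustrative):

```yaml
spec:
  replicas: 3
  revisionHistoryLimit: 5      # keep 5 old ReplicaSets for rollback
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 0              # never exceed 3 Pods in total
      maxUnavailable: 1        # update one Pod at a time
```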
A DaemonSet ensures that all (or some) Nodes run a copy of a Pod. If a node is added to the cluster and matches certain criteria (based on node affinity), the DaemonSet will automatically add a Pod to it. DaemonSets are typically used for:
- running a storage daemon on each node, such as ceph or glusterd
- running a log-collection daemon on every node, such as fluentd or filebeat
- running a node-monitoring daemon on every node, such as Prometheus Node Exporter
A DaemonSet spec is almost the same as a ReplicaSet's, minus the replicas field (the number of Pods is driven by the number of matching nodes):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds-example
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    <pod template>
v1.12 and earlier: DaemonSets bypassed the default scheduler and managed the creation, scheduling, and deletion of their Pods on their own. This approach caused confusion:
- inconsistent Pod behavior: no Pending state while a Pod waits to be scheduled by the DaemonSet
- Pod preemption: the DaemonSet controller made scheduling decisions without considering Pod priority and preemption
v1.12 and later: DaemonSets use the default scheduler (kube-scheduler) to schedule their Pods, using node affinity to pin each Pod to its target node. Note that the Pods are still created and deleted by the DaemonSet controller; only the scheduling decision is delegated to kube-scheduler.
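Concretely, instead of setting .spec.nodeName directly, the DaemonSet controller adds a node affinity term like the following to each Pod it creates, and kube-scheduler then binds the Pod to that node (target-host-name is a placeholder):

```yaml
nodeAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
    nodeSelectorTerms:
    - matchFields:
      - key: metadata.name
        operator: In
        values:
        - target-host-name
```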
TODO