- Navid Shaikh (@swordphilic) [email protected]
- Swapnil Kulkarni (@coolsvap) [email protected]

What is Kubernetes (k8s)?

- Service for container cluster management
- Open sourced by Google
- Supports GCE, CoreOS, Azure, vSphere, etc.
- Manages Docker containers as the default implementation
Key components in Kubernetes

- Node
- Pods
- Master
- Services
- Labels
- Replication Controllers
- Volumes
- Secrets
- K8S Web Interface
- K8S CLI
- K8S API
A Node is a worker machine in Kubernetes, previously known as a Minion. A Node may be a VM or physical machine, depending on the cluster. Each node has the services necessary to run [Pods](pods.md) and be managed from the master systems; these services include docker, kubelet and the network proxy. Node Phase is the current lifecycle phase of a node, one of Pending, Running and Terminated.
Unlike [Pods](pods.md) and [Services](services.md), a Node is not inherently created by Kubernetes: it is either created from cloud providers like GCE, or from your physical or virtual machines.
```json
{
  "kind": "Node",
  "apiVersion": "v1",
  "metadata": {
    "name": "10.240.79.157",
    "labels": {
      "name": "my-first-k8s-node"
    }
  }
}
```
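Once a node is registered, you can inspect it with kubectl; a quick sketch (the node name comes from the manifest above):

```sh
# List all nodes known to the cluster
$ kubectl get nodes

# Show detailed state (conditions, capacity, labels) for one node
$ kubectl describe node 10.240.79.157
```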
The node controller is a component in the Kubernetes master which manages Node objects. It performs two major functions: cluster-wide node synchronization and single node life-cycle management.
In Kubernetes, rather than individual application containers, pods are the smallest deployable units that can be created, scheduled, and managed.
A pod (as in a pod of whales or pea pod) corresponds to a colocated group of applications running with a shared context. Within that context, the applications may also have individual cgroup isolations applied. A pod models an application-specific "logical host" in a containerized environment. It may contain one or more applications which are relatively tightly coupled — in a pre-container world, they would have executed on the same physical or virtual host.
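To make the shared context concrete, here is a minimal sketch of a pod that colocates two containers. The pod name and the helper image are hypothetical; the manifest uses the same v1beta3 API version as the examples later in this document:

```yaml
apiVersion: v1beta3
kind: Pod
metadata:
  name: web-with-helper          # hypothetical name
  labels:
    name: web-with-helper
spec:
  containers:
  # main application container
  - name: web
    image: nginx
    ports:
    - containerPort: 80
  # colocated helper container; it shares the pod's network namespace,
  # so it can reach the web container on localhost (image is a placeholder)
  - name: log-helper
    image: example/log-helper
```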
A Kubernetes Service is an abstraction which defines a logical set of Pods and a policy by which to access them - sometimes called a micro-service. The set of Pods targeted by a Service is (usually) determined by a [Label Selector](labels.md#label-selectors) (see below for why you might want a Service without a selector).
```json
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "my-service"
  },
  "spec": {
    "selector": {
      "app": "MyApp"
    },
    "ports": [
      {
        "protocol": "TCP",
        "port": 80,
        "targetPort": 9376
      }
    ]
  }
}
```
Labels are key/value pairs that are attached to objects, such as pods. Labels are intended to be used to specify identifying attributes of objects that are meaningful and relevant to users, but which do not directly imply semantics to the core system. Labels can be used to organize and to select subsets of objects. Labels can be attached to objects at creation time and subsequently added and modified at any time. Each object can have a set of key/value labels defined. Each key must be unique for a given object.
"labels": {
"key1" : "value1",
"key2" : "value2"
}
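Labels are consumed through label selectors, for example on the kubectl command line. A small sketch, reusing the keys from the snippet above (exact kubectl support may vary by version):

```sh
# Add or change a label on an existing pod
$ kubectl label pod nginx key1=value1

# List only the pods whose labels match a selector
$ kubectl get pods -l key1=value1
```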
A replication controller ensures that a specified number of pod "replicas" are running at any one time. If there are too many, it will kill some. If there are too few, it will start more. Unlike in the case where a user directly created pods, a replication controller replaces pods that are deleted or terminated for any reason, such as in the case of node failure or disruptive node maintenance, such as a kernel upgrade. For this reason, we recommend that you use a replication controller even if your application requires only a single pod. Think of it similarly to a process supervisor, only it supervises multiple pods across multiple nodes instead of individual processes on a single node. A replication controller delegates local container restarts to some agent on the node (e.g., Kubelet or Docker).
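As an illustration, a minimal replication controller manifest might look like the sketch below. It reuses the nginx image and the v1beta3 API version from the examples later in this document; the name nginx-rc is hypothetical:

```yaml
apiVersion: v1beta3
kind: ReplicationController
metadata:
  name: nginx-rc                 # hypothetical name
spec:
  # desired number of pod replicas
  replicas: 3
  # pods whose labels match this selector count as replicas
  selector:
    name: nginx
  # template used to create new pods when too few are running
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```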
A Volume is a directory, possibly with some data in it, which is accessible to a Container. Kubernetes Volumes are similar to, but not the same as, Docker Volumes.
A process in a Container sees a filesystem view composed from two sources: a single Docker image and zero or more Volumes. A Docker image is at the root of the file hierarchy. Any Volumes are mounted at points on the Docker image; Volumes do not mount on other Volumes and do not have hard links to other Volumes. Each container in the Pod independently specifies where on its image to mount each Volume. This is specified in each container’s VolumeMounts property.
Types of Volumes
Kubernetes currently supports multiple types of Volumes: emptyDir, gcePersistentDisk, awsElasticBlockStore, gitRepo, secret, nfs, iscsi, glusterfs, persistentVolumeClaim, rbd. The community welcomes additional contributions.
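To show how a volume definition fits together with a container's VolumeMounts property, here is a minimal sketch of a pod using an emptyDir volume. The pod name and mount path are hypothetical; the manifest again uses v1beta3:

```yaml
apiVersion: v1beta3
kind: Pod
metadata:
  name: volume-demo              # hypothetical name
spec:
  containers:
  - name: web
    image: nginx
    # where this container mounts the volume on top of its image
    volumeMounts:
    - name: scratch
      mountPath: /var/cache/scratch
  # volumes are declared at the pod level and referenced by name
  volumes:
  - name: scratch
    # emptyDir starts empty and lives as long as the pod does
    emptyDir: {}
```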
Objects of type secret are intended to hold sensitive information, such as passwords, OAuth tokens, and ssh keys. Putting this information in a secret is safer and more flexible than putting it verbatim in a pod definition or in a docker image.

A secret can be used with a pod in two ways: either as files in a volume mounted on one or more of its containers, or used by kubelet when pulling images for the pod.
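As a rough sketch, a secret holding a password and a pod consuming it through a volume could look like the following. All names are hypothetical, the data values must be base64-encoded, and the exact secret schema may differ slightly in older API versions such as the v1beta3 used elsewhere in this document:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-password              # hypothetical name
data:
  # base64 encoding of "yourpassword"
  password: eW91cnBhc3N3b3Jk
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-demo              # hypothetical name
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    # the secret shows up as files under this directory, one file per key
    - name: db-secret
      mountPath: /etc/secret
      readOnly: true
  volumes:
  - name: db-secret
    secret:
      secretName: db-password
```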
Set up vagrant

```sh
$ sudo yum -y install vagrant
$ sudo service libvirtd start
```
Set up the vagrant directory

The source directory for the Fedora Atomic vagrant image and its Vagrantfile:

```sh
$ mkdir -p ~/vagrant/fedora_atomic
$ cd ~/vagrant/fedora_atomic
```
Get the Fedora Atomic image for vagrant

```sh
# Download the libvirt/KVM image
$ wget https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Atomic-Vagrant-22-20150521.x86_64.vagrant-libvirt.box
```
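The downloaded .box file still has to be registered with vagrant under the name used in the Vagrantfile below; a sketch of that step, assuming the standard vagrant box add behaviour:

```sh
# Register the downloaded box under the name "fedora-atomic"
$ vagrant box add fedora-atomic Fedora-Cloud-Atomic-Vagrant-22-20150521.x86_64.vagrant-libvirt.box
```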
Create the Vagrantfile

```sh
$ cat > ~/vagrant/fedora_atomic/Vagrantfile <<- EOM
Vagrant.configure(2) do |config|
  config.vm.box = "fedora-atomic"
end
EOM
```
Start the vagrant box

```sh
$ sudo vagrant up
$ sudo vagrant ssh
```

vagrant ssh should take you inside the Vagrant box.
Look up the kubernetes, docker and flannel RPM package versions

```sh
$ rpm -q kubernetes docker flannel
kubernetes-0.15.0-8.fc22.x86_64
docker-1.6.0-3.git9d26a07.fc22.x86_64
flannel-0.2.0-7.fc22.x86_64
```
Start the Kubernetes services

```sh
# Needs sudo access
$ sudo -i

# Restart and enable services for the master
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done

# Restart and enable services for the node
for SERVICES in kube-proxy kubelet docker; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done
```
Verify that the cluster is set up

```sh
$ kubectl get nodes
NAME        LABELS    STATUS
127.0.0.1   <none>    Ready
```

Our node (which is the same vagrant box) is now in the Ready state.
To create a pod we need to define its spec. Write the following pod spec in a manifest file named nginx_pod.yaml:

```yaml
apiVersion: v1beta3
kind: Pod
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
```
Create the pod from the above spec; the second command deletes it again when you are done:

```sh
# Create the pod
$ kubectl create -f nginx_pod.yaml

# Delete the pod once you are done with it
$ kubectl delete pod nginx
```
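To check that the pod actually came up before deleting it, something like the following should work:

```sh
# Show the pod and its current status (it should eventually be Running)
$ kubectl get pods

# Inspect events and container state if the pod is stuck
$ kubectl describe pod nginx
```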
WordPress is a two-tier application: a database tier and an application tier. It uses MySQL as its database. The MySQL and WordPress images used are the official Docker images. We'll create a MySQL pod and a WordPress pod, and a service for each pod.
Create pod for MySQL

```yaml
apiVersion: v1beta3
kind: Pod
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  containers:
  - image: mysql
    name: mysql
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: yourpassword
    ports:
    - containerPort: 3306
      name: mysql
```
Create service for MySQL

```yaml
apiVersion: v1beta3
kind: Service
metadata:
  labels:
    name: mysql
  name: mysql
spec:
  ports:
  # the port that this service should serve on
  - port: 3306
  # label keys and values that must match in order to receive traffic for this service
  selector:
    name: mysql
```
Create pod for WordPress

```yaml
apiVersion: v1beta3
kind: Pod
metadata:
  name: wordpress
  labels:
    name: wordpress
spec:
  containers:
  - image: wordpress
    name: wordpress
    env:
    - name: WORDPRESS_DB_PASSWORD
      # must match mysql.yaml password
      value: yourpassword
    ports:
    - containerPort: 80
      name: wordpress
```
Create service for WordPress

```yaml
apiVersion: v1beta3
kind: Service
metadata:
  labels:
    name: wpfrontend
  name: wpfrontend
spec:
  ports:
  # the port that this service should serve on
  - port: 80
  # label keys and values that must match in order to receive traffic for this service
  selector:
    name: wordpress
  type: LoadBalancer
```
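After creating these manifests with kubectl create -f, as before, you can verify that both tiers and their services are up; a rough sketch:

```sh
# Pods for both tiers should reach the Running state
$ kubectl get pods

# Services should expose MySQL on 3306 and WordPress on 80
$ kubectl get services
```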