Upgrading a Kubernetes cluster on AWS is easy with kops. In this exercise we will demonstrate upgrading a Kubernetes cluster using two methods: an in-place rolling upgrade and a blue/green migration. For the in-place method, this exercise will set up a Kubernetes cluster running version 1.6.10 and perform an automatic rolling upgrade to 1.7.4 using kops.
Review the prerequisite steps and make sure the KOPS_STATE_STORE and AWS_AVAILABILITY_ZONES environment variables are set. More details about creating a cluster are at Create Kubernetes cluster using kops.
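For reference, here is a minimal sketch of setting these variables; the S3 bucket name is hypothetical, so substitute your own state store:

# Hypothetical bucket name; replace with your kops state store bucket
export KOPS_STATE_STORE=s3://example-kops-state-store
# Comma-separated list of availability zones in the current region
export AWS_AVAILABILITY_ZONES="$(aws ec2 describe-availability-zones \
  --query 'AvailabilityZones[].ZoneName' --output text | tr '\t' ',')"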
In this chapter, we’ll create a Kubernetes cluster running version 1.6.10 as shown:
kops create cluster \
  --name example.cluster.k8s.local \
  --master-count 3 \
  --master-zones ${AWS_AVAILABILITY_ZONES} \
  --node-count 5 \
  --zones ${AWS_AVAILABILITY_ZONES} \
  --kubernetes-version=1.6.10 \
  --yes
A cluster deployed across multiple availability zones ensures that your pods and services don’t see any downtime during cluster upgrades: kops replaces instances one at a time, so the remaining masters and nodes keep serving requests.
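To see how the nodes are spread across zones, you can list them with the zone label Kubernetes applied in the 1.6/1.7 releases (an optional check, not a required step; newer releases use topology.kubernetes.io/zone instead):

# Show each node together with the availability zone it runs in
kubectl get nodes -L failure-domain.beta.kubernetes.io/zone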
Validate the cluster:
$ kops validate cluster --name example.cluster.k8s.local
Validating cluster example.cluster.k8s.local
INSTANCE GROUPS
NAME                  ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-eu-central-1a  Master  m3.medium    1    1    eu-central-1a
master-eu-central-1b  Master  m3.medium    1    1    eu-central-1b
master-eu-central-1c  Master  c4.large     1    1    eu-central-1c
nodes                 Node    t2.medium    5    5    eu-central-1a,eu-central-1b,eu-central-1c
NODE STATUS
NAME                                             ROLE    READY
ip-172-20-112-170.eu-central-1.compute.internal  master  True
ip-172-20-117-204.eu-central-1.compute.internal  node    True
ip-172-20-54-176.eu-central-1.compute.internal   master  True
ip-172-20-55-115.eu-central-1.compute.internal   node    True
ip-172-20-63-241.eu-central-1.compute.internal   node    True
ip-172-20-71-25.eu-central-1.compute.internal    master  True
ip-172-20-91-30.eu-central-1.compute.internal    node    True
ip-172-20-93-220.eu-central-1.compute.internal   node    True
Your cluster example.cluster.k8s.local is ready
Check the version of different nodes in the cluster:
$ kubectl get nodes
NAME                                             STATUS  ROLES   AGE  VERSION
ip-172-20-112-170.eu-central-1.compute.internal  Ready   master  7m   v1.6.10
ip-172-20-117-204.eu-central-1.compute.internal  Ready   node    6m   v1.6.10
ip-172-20-54-176.eu-central-1.compute.internal   Ready   master  7m   v1.6.10
ip-172-20-55-115.eu-central-1.compute.internal   Ready   node    6m   v1.6.10
ip-172-20-63-241.eu-central-1.compute.internal   Ready   node    6m   v1.6.10
ip-172-20-71-25.eu-central-1.compute.internal    Ready   master  7m   v1.6.10
ip-172-20-91-30.eu-central-1.compute.internal    Ready   node    6m   v1.6.10
ip-172-20-93-220.eu-central-1.compute.internal   Ready   node    6m   v1.6.10
This shows that each node in the cluster is running version 1.6.10.
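As a quick cross-check that every node reports the same kubelet version, the versions can also be printed directly (an optional one-liner, not part of the workshop steps):

# Print only the kubelet version of each node
kubectl get nodes -o jsonpath='{.items[*].status.nodeInfo.kubeletVersion}'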
Let’s upgrade the existing cluster from 1.6.10 to 1.7.4. Edit the cluster configuration:
kops edit cluster example.cluster.k8s.local
This will open up the cluster configuration in a text editor. Set the kubernetesVersion key to the target version, 1.7.4 in our case. The previous value of:
kubernetesVersion: 1.6.10
will be changed to:
kubernetesVersion: 1.7.4
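For orientation, the kubernetesVersion key sits under spec in the kops cluster manifest; the edited file looks roughly like this, with all other fields omitted:

apiVersion: kops/v1alpha2
kind: Cluster
metadata:
  name: example.cluster.k8s.local
spec:
  kubernetesVersion: 1.7.4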
Save the changes and exit the editor. The cluster now needs to pick up the new configuration; this is done by updating the cloud resources and then performing a rolling update of the instances. First, preview the changes kops will make using the following command:
Note: This process can easily take 30-45 minutes. It’s recommended to leave the cluster without making any other changes during that time.
$ kops update cluster example.cluster.k8s.local
I1028 18:18:59.671912 10844 apply_cluster.go:420] Gossip DNS: skipping DNS validation
I1028 18:18:59.699729 10844 executor.go:91] Tasks: 0 done / 81 total; 36 can run
I1028 18:19:00.824806 10844 executor.go:91] Tasks: 36 done / 81 total; 15 can run
I1028 18:19:01.601875 10844 executor.go:91] Tasks: 51 done / 81 total; 22 can run
I1028 18:19:03.340103 10844 executor.go:91] Tasks: 73 done / 81 total; 5 can run
I1028 18:19:04.153174 10844 executor.go:91] Tasks: 78 done / 81 total; 3 can run
I1028 18:19:04.575327 10844 executor.go:91] Tasks: 81 done / 81 total; 0 can run
Will modify resources:
LaunchConfiguration/master-eu-central-1a.masters.cluster.k8s.local
UserData
...
cat > kube_env.yaml << __EOF_KUBE_ENV
Assets:
+ - 7bf3fda43bb8d0a55622ca68dcbfaf3cc7f2dddc@https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/linux/amd64/kubelet
- - a9258e4d2c7d7ed48a7bf2e3c77a798fa51a6787@https://storage.googleapis.com/kubernetes-release/release/v1.6.10/bin/linux/amd64/kubelet
+ - 819010a7a028b165f5e6df37b1bb7713ff6d070f@https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/linux/amd64/kubectl
- - 0afe23fb48276ad8c6385430962cd237367b22f7@https://storage.googleapis.com/kubernetes-release/release/v1.6.10/bin/linux/amd64/kubectl
- 1d9788b0f5420e1a219aad2cb8681823fc515e7c@https://storage.googleapis.com/kubernetes-release/network-plugins/cni-0799f5732f2a11b329d9e3d51b9c8f2e3759f2ff.tar.gz
- c18ca557507c662e3a072c3475da9bd1bc8a503b@https://kubeupv2.s3.amazonaws.com/kops/1.7.1/linux/amd64/utils.tar.gz
...
LaunchConfiguration/master-eu-central-1b.masters.cluster.k8s.local
UserData
...
cat > kube_env.yaml << __EOF_KUBE_ENV
Assets:
+ - 7bf3fda43bb8d0a55622ca68dcbfaf3cc7f2dddc@https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/linux/amd64/kubelet
- - a9258e4d2c7d7ed48a7bf2e3c77a798fa51a6787@https://storage.googleapis.com/kubernetes-release/release/v1.6.10/bin/linux/amd64/kubelet
+ - 819010a7a028b165f5e6df37b1bb7713ff6d070f@https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/linux/amd64/kubectl
- - 0afe23fb48276ad8c6385430962cd237367b22f7@https://storage.googleapis.com/kubernetes-release/release/v1.6.10/bin/linux/amd64/kubectl
- 1d9788b0f5420e1a219aad2cb8681823fc515e7c@https://storage.googleapis.com/kubernetes-release/network-plugins/cni-0799f5732f2a11b329d9e3d51b9c8f2e3759f2ff.tar.gz
- c18ca557507c662e3a072c3475da9bd1bc8a503b@https://kubeupv2.s3.amazonaws.com/kops/1.7.1/linux/amd64/utils.tar.gz
...
LaunchConfiguration/master-eu-central-1c.masters.cluster.k8s.local
UserData
...
cat > kube_env.yaml << __EOF_KUBE_ENV
Assets:
+ - 7bf3fda43bb8d0a55622ca68dcbfaf3cc7f2dddc@https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/linux/amd64/kubelet
- - a9258e4d2c7d7ed48a7bf2e3c77a798fa51a6787@https://storage.googleapis.com/kubernetes-release/release/v1.6.10/bin/linux/amd64/kubelet
+ - 819010a7a028b165f5e6df37b1bb7713ff6d070f@https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/linux/amd64/kubectl
- - 0afe23fb48276ad8c6385430962cd237367b22f7@https://storage.googleapis.com/kubernetes-release/release/v1.6.10/bin/linux/amd64/kubectl
- 1d9788b0f5420e1a219aad2cb8681823fc515e7c@https://storage.googleapis.com/kubernetes-release/network-plugins/cni-0799f5732f2a11b329d9e3d51b9c8f2e3759f2ff.tar.gz
- c18ca557507c662e3a072c3475da9bd1bc8a503b@https://kubeupv2.s3.amazonaws.com/kops/1.7.1/linux/amd64/utils.tar.gz
...
LaunchConfiguration/nodes.cluster.k8s.local
UserData
...
cat > kube_env.yaml << __EOF_KUBE_ENV
Assets:
+ - 7bf3fda43bb8d0a55622ca68dcbfaf3cc7f2dddc@https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/linux/amd64/kubelet
- - a9258e4d2c7d7ed48a7bf2e3c77a798fa51a6787@https://storage.googleapis.com/kubernetes-release/release/v1.6.10/bin/linux/amd64/kubelet
+ - 819010a7a028b165f5e6df37b1bb7713ff6d070f@https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/linux/amd64/kubectl
- - 0afe23fb48276ad8c6385430962cd237367b22f7@https://storage.googleapis.com/kubernetes-release/release/v1.6.10/bin/linux/amd64/kubectl
- 1d9788b0f5420e1a219aad2cb8681823fc515e7c@https://storage.googleapis.com/kubernetes-release/network-plugins/cni-0799f5732f2a11b329d9e3d51b9c8f2e3759f2ff.tar.gz
- c18ca557507c662e3a072c3475da9bd1bc8a503b@https://kubeupv2.s3.amazonaws.com/kops/1.7.1/linux/amd64/utils.tar.gz
...
LoadBalancer/api.cluster.k8s.local
Lifecycle <nil> -> Sync
LoadBalancerAttachment/api-master-eu-central-1a
Lifecycle <nil> -> Sync
LoadBalancerAttachment/api-master-eu-central-1b
Lifecycle <nil> -> Sync
LoadBalancerAttachment/api-master-eu-central-1c
Lifecycle <nil> -> Sync
Must specify --yes to apply changes
Apply changes using the command:
kops update cluster example.cluster.k8s.local --yes
The command shows the following output:
$ kops update cluster example.cluster.k8s.local --yes
I1028 18:22:53.558475 10876 apply_cluster.go:420] Gossip DNS: skipping DNS validation
I1028 18:22:54.487232 10876 executor.go:91] Tasks: 0 done / 81 total; 36 can run
I1028 18:22:55.750674 10876 executor.go:91] Tasks: 36 done / 81 total; 15 can run
I1028 18:22:56.640322 10876 executor.go:91] Tasks: 51 done / 81 total; 22 can run
I1028 18:22:59.756888 10876 executor.go:91] Tasks: 73 done / 81 total; 5 can run
I1028 18:23:01.154703 10876 executor.go:91] Tasks: 78 done / 81 total; 3 can run
I1028 18:23:01.890273 10876 executor.go:91] Tasks: 81 done / 81 total; 0 can run
I1028 18:23:02.196422 10876 update_cluster.go:247] Exporting kubecfg for cluster
kops has set your kubectl context to example.cluster.k8s.local
Cluster changes have been applied to the cloud.
Changes may require instances to restart: kops rolling-update cluster
Determine if any of the nodes will require a restart using the command:
kops rolling-update cluster example.cluster.k8s.local
This command shows the following output:
NAME                  STATUS       NEEDUPDATE  READY  MIN  MAX  NODES
master-eu-central-1a  NeedsUpdate  1           0      1    1    1
master-eu-central-1b  NeedsUpdate  1           0      1    1    1
master-eu-central-1c  NeedsUpdate  1           0      1    1    1
nodes                 NeedsUpdate  5           0      5    5    5
Must specify --yes to rolling-update.
The STATUS column shows that both master and worker nodes need to be updated.
Perform the rolling update using the following command:
kops rolling-update cluster example.cluster.k8s.local --yes
Output from this command is shown:
NAME                  STATUS       NEEDUPDATE  READY  MIN  MAX  NODES
master-eu-central-1a  NeedsUpdate  1           0      1    1    1
master-eu-central-1b  NeedsUpdate  1           0      1    1    1
master-eu-central-1c  NeedsUpdate  1           0      1    1    1
nodes                 NeedsUpdate  5           0      5    5    5
I1028 18:26:37.124152 10908 instancegroups.go:350] Stopping instance "i-0c729296553079aab", node "ip-172-20-54-176.eu-central-1.compute.internal", in AWS ASG "master-eu-central-1a.masters.cluster.k8s.local".
I1028 18:31:37.439446 10908 instancegroups.go:350] Stopping instance "i-002976b15a2968b34", node "ip-172-20-71-25.eu-central-1.compute.internal", in AWS ASG "master-eu-central-1b.masters.cluster.k8s.local".
I1028 18:36:38.700513 10908 instancegroups.go:350] Stopping instance "i-0d4bd1a9668fab3e1", node "ip-172-20-112-170.eu-central-1.compute.internal", in AWS ASG "master-eu-central-1c.masters.cluster.k8s.local".
I1028 18:41:39.938149 10908 instancegroups.go:350] Stopping instance "i-0048aa89472a2c225", node "ip-172-20-93-220.eu-central-1.compute.internal", in AWS ASG "nodes.cluster.k8s.local".
I1028 18:43:41.019527 10908 instancegroups.go:350] Stopping instance "i-03787fa7fa77b9348", node "ip-172-20-117-204.eu-central-1.compute.internal", in AWS ASG "nodes.cluster.k8s.local".
I1028 19:14:50.288739 10908 instancegroups.go:350] Stopping instance "i-084c653bad3b17071", node "ip-172-20-55-115.eu-central-1.compute.internal", in AWS ASG "nodes.cluster.k8s.local".
I1028 19:16:51.339991 10908 instancegroups.go:350] Stopping instance "i-08da4ee3253afa479", node "ip-172-20-63-241.eu-central-1.compute.internal", in AWS ASG "nodes.cluster.k8s.local".
I1028 19:18:52.368412 10908 instancegroups.go:350] Stopping instance "i-0a7975621a65a1997", node "ip-172-20-91-30.eu-central-1.compute.internal", in AWS ASG "nodes.cluster.k8s.local".
I1028 19:20:53.743998 10908 rollingupdate.go:174] Rolling update completed!
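While a rolling update like this is in progress, a second terminal can be used to watch nodes being drained and replaced (an optional convenience, not part of the required steps):

# Watch nodes leave and rejoin the cluster as instances are replaced
kubectl get nodes --watch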
Validate the cluster again:
$ kops validate cluster
Using cluster from kubectl context: example.cluster.k8s.local
Validating cluster example.cluster.k8s.local
INSTANCE GROUPS
NAME                  ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-eu-central-1a  Master  m3.medium    1    1    eu-central-1a
master-eu-central-1b  Master  m3.medium    1    1    eu-central-1b
master-eu-central-1c  Master  c4.large     1    1    eu-central-1c
nodes                 Node    t2.medium    5    5    eu-central-1a,eu-central-1b,eu-central-1c
NODE STATUS
NAME                                            ROLE    READY
ip-172-20-101-20.eu-central-1.compute.internal  master  True
ip-172-20-106-93.eu-central-1.compute.internal  node    True
ip-172-20-109-10.eu-central-1.compute.internal  node    True
ip-172-20-41-77.eu-central-1.compute.internal   node    True
ip-172-20-44-33.eu-central-1.compute.internal   master  True
ip-172-20-75-132.eu-central-1.compute.internal  node    True
ip-172-20-85-128.eu-central-1.compute.internal  master  True
ip-172-20-93-108.eu-central-1.compute.internal  node    True
Your cluster example.cluster.k8s.local is ready
Get the list of nodes from the cluster:
$ kubectl get nodes
NAME                                            STATUS  ROLES   AGE  VERSION
ip-172-20-101-20.eu-central-1.compute.internal  Ready   master  42m  v1.7.4
ip-172-20-106-93.eu-central-1.compute.internal  Ready   node    36m  v1.7.4
ip-172-20-109-10.eu-central-1.compute.internal  Ready   node    37m  v1.7.4
ip-172-20-41-77.eu-central-1.compute.internal   Ready   node    3m   v1.7.4
ip-172-20-44-33.eu-central-1.compute.internal   Ready   master  51m  v1.7.4
ip-172-20-75-132.eu-central-1.compute.internal  Ready   node    5m   v1.7.4
ip-172-20-85-128.eu-central-1.compute.internal  Ready   master  46m  v1.7.4
ip-172-20-93-108.eu-central-1.compute.internal  Ready   node    44s  v1.7.4
Upgrading a cluster using the blue/green method is more conservative and takes the high availability of your application into account. You would set up two Kubernetes clusters, one running version 1.6.10 and a second running 1.7.4, and migrate your pod deployments and services to the new cluster.
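A minimal sketch of that flow, assuming a hypothetical name green.cluster.k8s.local for the new cluster:

# Create a second cluster running the target version
kops create cluster \
  --name green.cluster.k8s.local \
  --master-count 3 \
  --master-zones ${AWS_AVAILABILITY_ZONES} \
  --node-count 5 \
  --zones ${AWS_AVAILABILITY_ZONES} \
  --kubernetes-version=1.7.4 \
  --yes

# Switch kubectl to the new cluster and redeploy your workloads there
kubectl config use-context green.cluster.k8s.local

# After traffic has been shifted and verified, remove the old cluster
kops delete cluster --name example.cluster.k8s.local --yes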
You are now ready to continue with the workshop!