diff --git a/README.md b/README.md
index f42d3173..5cb8ccf1 100644
--- a/README.md
+++ b/README.md
@@ -44,7 +44,7 @@ These are notes from the [Certified Kubernetes Administrator Course](https://kod
- [02-Manual-Scheduling](docs/03-Scheduling/02-Manual-Scheduling.md)
- [03-Practice-Test-Manual-Scheduling](docs/03-Scheduling/03-Practice-Test-Manual-Scheduling.md)
- [04-Labels-and-Selectors](docs/03-Scheduling/04-Labels-and-Selectors.md)
- - [05-Practice-Test-Scheduling](docs/03-Scheduling/05-Practice-Test-Scheduling.md)
+ - [05-Practice-Test-Labels-and-Selectors](docs/03-Scheduling/05-Practice-Test-Labels-and-Selectors.md)
- [06-Taints-and-Tolerations](docs/03-Scheduling/06-Taints-and-Tolerations.md)
- [07-Practice-Test-Taints-and-Tolerations](docs/03-Scheduling/07-Practice-Test-Taints-and-Tolerations.md)
- [08-Node-Selectors](docs/03-Scheduling/08-Node-Selectors.md)
diff --git a/docs/03-Scheduling/03-Practice-Test-Manual-Scheduling.md b/docs/03-Scheduling/03-Practice-Test-Manual-Scheduling.md
index a09e1000..02f8af03 100644
--- a/docs/03-Scheduling/03-Practice-Test-Manual-Scheduling.md
+++ b/docs/03-Scheduling/03-Practice-Test-Manual-Scheduling.md
@@ -3,52 +3,67 @@
Solutions to Practice Test - Manual Scheduling
-- Run, **`kubectl create -f nginx.yaml`**
-
-
+1.
+ A pod definition file nginx.yaml is given. Create a pod using the file.
- ```
- $ kubectl create -f nginx.yaml
- ```
-
+ ```
+ kubectl create -f nginx.yaml
+ ```
+
-- Run the command 'kubectl get pods' and check the status column
+1.
+ What is the status of the created POD?
-
+ ```
+ kubectl get pods
+ ```
- ```
- $ kubectl get pods
- ```
-
+ Examine the `STATUS` column
+
-- Run the command 'kubectl get pods --namespace kube-system'
+1.
+ Why is the POD in a pending state? Inspect the environment for various Kubernetes control plane components.
-
+ ```
+ kubectl get pods --namespace kube-system
+ ```
- ```
- $ kubectl get pods --namespace kube-system
- ```
-
+ There is a key pod missing here - the scheduler! Without it, pods are never assigned to a node and remain `Pending`.
+
-- Set **`nodeName`** property on the pod to node01 node
+1.
+ Manually schedule the pod on node01.
-
+ We will have to delete and recreate the pod, as `nodeName` is immutable - the only property that may be edited on a running container is `image`.
- ```
- $ vi nginx.yaml
- $ kubectl delete -f nginx.yaml
- $ kubectl create -f nginx.yaml
- ```
-
+ ```
+ vi nginx.yaml
+ ```
+
+ Make the following edit
+
+ ```yaml
+ ---
+ apiVersion: v1
+ kind: Pod
+ metadata:
+ name: nginx
+ spec:
+ nodeName: node01 # add this line
+ containers:
+ - image: nginx
+ name: nginx
+ ```
-- Set **`nodeName`** property on the pod to master node
+ ```
+ kubectl delete -f nginx.yaml
+ kubectl create -f nginx.yaml
+ ```
+
-
+1.
+ Now schedule the same pod on the controlplane node.
- ```
- $ vi nginx.yaml
- $ kubectl delete -f nginx.yaml
- $ kubectl create -f nginx.yaml
- ```
+ Repeat the steps as per the previous question, editing `nodeName` to be `controlplane`.
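+
+ Following the same pattern, the edited manifest would look like this (a sketch; only the `nodeName` value changes):
+
+ ```yaml
+ ---
+ apiVersion: v1
+ kind: Pod
+ metadata:
+   name: nginx
+ spec:
+   nodeName: controlplane # changed from node01
+   containers:
+   - image: nginx
+     name: nginx
+ ```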
diff --git a/docs/03-Scheduling/05-Practice-Test-Labels-and-Selectors.md b/docs/03-Scheduling/05-Practice-Test-Labels-and-Selectors.md
new file mode 100644
index 00000000..bdc1ac37
--- /dev/null
+++ b/docs/03-Scheduling/05-Practice-Test-Labels-and-Selectors.md
@@ -0,0 +1,67 @@
+# Practice Test - Labels and Selectors
+ - Take me to [Practice Test](https://kodekloud.com/topic/practice-test-labels-and-selectors/)
+
+Solutions to Practice Test - Labels and Selectors
+1.
+ We have deployed a number of PODs. They are labelled with tier, env and bu. How many PODs exist in the dev environment (env)?
+
+ Here we are filtering pods by the value of a label.
+
+ ```
+ kubectl get pods --selector env=dev
+ ```
+
+ Count the pods (if any)
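+
+ If you prefer to count directly rather than by eye (a convenience, not required by the lab - `--no-headers` suppresses the header row so every output line is a pod):
+
+ ```
+ kubectl get pods --selector env=dev --no-headers | wc -l
+ ```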
+
+
+
+1.
+ How many PODs are in the finance business unit (bu)?
+
+ Similarly ...
+
+ ```
+ kubectl get pods --selector bu=finance
+ ```
+
+ Count the pods (if any)
+
+
+1.
+ How many objects are in the prod environment including PODs, ReplicaSets and any other objects?
+
+ ```
+ kubectl get all --selector env=prod
+ ```
+
+ Count everything (if anything)
+
+
+1.
+ Identify the POD which is part of the prod environment, the finance BU and of frontend tier?
+
+ We can combine label expressions with a comma. Only items with _all_ the given label/value pairs will be returned, i.e. it is an `and` condition.
+
+ ```
+ kubectl get all --selector env=prod,bu=finance,tier=frontend
+ ```
+
+
+1.
+ A ReplicaSet definition file is given replicaset-definition-1.yaml. Try to create the replicaset. There is an issue with the file. Try to fix it.
+
+ ```
+ kubectl create -f replicaset-definition-1.yaml
+ ```
+
+ Note the error message.
+
+ The selector's `matchLabels` must match the POD template's labels - update `replicaset-definition-2.yaml` accordingly.
+
+ The values for labels on lines 9 and 13 should match.
+
+ ```
+ kubectl create -f replicaset-definition-2.yaml
+ ```
+
+
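+ As a sketch (the file contents aren't shown here, so names are assumed; the lab sets the template labels to `frontend`), a consistent selector/labels pairing looks like:
+
+ ```yaml
+ apiVersion: apps/v1
+ kind: ReplicaSet
+ metadata:
+   name: replicaset-2
+ spec:
+   replicas: 2
+   selector:
+     matchLabels:
+       tier: frontend # must match...
+   template:
+     metadata:
+       labels:
+         tier: frontend # ...the labels here
+     spec:
+       containers:
+       - image: nginx
+         name: nginx
+ ```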
diff --git a/docs/03-Scheduling/05-Practice-Test-Scheduling.md b/docs/03-Scheduling/05-Practice-Test-Scheduling.md
deleted file mode 100644
index 9ebca22e..00000000
--- a/docs/03-Scheduling/05-Practice-Test-Scheduling.md
+++ /dev/null
@@ -1,49 +0,0 @@
-# Practice Test - Labels and Selectors
- - Take me to [Practice Test](https://kodekloud.com/topic/practice-test-labels-and-selectors/)
-
-Solutions to Practice Test - Labels and Selectors
-- Run the command 'kubectl get pods --selector env=dev'
-
-
-
- ```
- $ kubectl get pods --selector env=dev
- ```
-
-
-- Run the command 'kubectl get pods --selector bu=finance'
-
-
-
- ```
- $ kubectl get pods --selector bu=finance
- ```
-
-
-- Run the command 'kubectl get all --selector env=prod'
-
-
-
- ```
- $ kubectl get all --selector env=prod
- ```
-
-
-- Run the command 'kubectl get all --selector env=prod,bu=finance,tier=frontend'
-
-
-
- ```
- $ kubectl get all --selector env=prod,bu=finance,tier=frontend
- ```
-
-
-- Set the labels on the pod definition template to frontend
-
-
-
- ```
- $ vi replicaset-definition.yaml
- $ kubectl create -f replicaset-definition.yaml
- ```
-
diff --git a/docs/03-Scheduling/07-Practice-Test-Taints-and-Tolerations.md b/docs/03-Scheduling/07-Practice-Test-Taints-and-Tolerations.md
index 46565231..45714867 100644
--- a/docs/03-Scheduling/07-Practice-Test-Taints-and-Tolerations.md
+++ b/docs/03-Scheduling/07-Practice-Test-Taints-and-Tolerations.md
@@ -1,134 +1,128 @@
# Practice Test - Taints and Tolerations
- Take me to [Practice Test](https://kodekloud.com/topic/practice-test-taints-and-tolerations/)
-
+
Solutions to the Practice Test - Taints and Tolerations
-- Run the command 'kubectl get nodes' and count the number of nodes.
-
-
-
- ```
- $ kubectl get nodes
- ```
-
-
-- Run the command 'kubectl describe node node01' and see the taint property
-
-
-
- ```
- $ kubectl describe node node01
- ```
-
-
-- Run the command 'kubectl taint nodes node01 spray=mortein:NoSchedule'.
-
-
-
- ```
- $ kubectl taint nodes node01 spray=mortein:NoSchedule
- ```
-
-
-- Answer file at /var/answers/mosquito.yaml
-
-
-
- ```
- master $ cat /var/answers/mosquito.yaml
- apiVersion: v1
- kind: Pod
- metadata:
- name: mosquito
- spec:
- containers:
- - image: nginx
- name: mosquito
- ```
- ```
- $ kubectl create -f /var/answers/mosquito.yaml
- ```
-
-
-- Run the command 'kubectl get pods' and see the state
-
-
-
- ```
- $ kubectl get pods
- ```
-
-
-- Why do you think the pod is in a pending state?
-
-
-
- ```
- POD Mosquito cannot tolerate taint Mortein
- ```
-
-
-- Answer file at /var/answers/bee.yaml
-
-
-
- ```
- master $ cat /var/answers/bee.yaml
- apiVersion: v1
- kind: Pod
- metadata:
- name: bee
- spec:
- containers:
- - image: nginx
- name: bee
- tolerations:
- - key: spray
- value: mortein
- effect: NoSchedule
- operator: Equal
- ```
- ```
- $ kubectl create -f /var/answers/bee.yaml
- ```
-
-
-- Notice the 'bee' pod was scheduled on node node01 despite the taint.
-
-- Run the command 'kubectl describe node master' and see the taint property
-
-
-
- ```
- $ kubectl describe node master
- ```
-
-
-- Run the command 'kubectl taint nodes master node-role.kubernetes.io/master:NoSchedule-'.
-
-
-
- ```
- $ kubectl taint nodes master node-role.kubernetes.io/master:NoSchedule-
- ```
-
-
-- Run the command 'kubectl get pods' and see the state
-
-
-
- ```
- $ kubectl get pods
- ```
-
-
-- Run the command 'kubectl get pods -o wide' and look at the Node column
-
-
-
- ```
- $ kubectl get pods -o wide
- ```
-
+1.
+ How many nodes exist on the system?
+
+ ```
+ kubectl get nodes
+ ```
+
+ Count the nodes
+
+
+
+1.
+ Do any taints exist on node01 node?
+
+ ```
+ kubectl describe node node01
+ ```
+
+ Find the `Taints` property in the output.
+
+
+1.
+ Create a taint on node01 with key of spray, value of mortein and effect of NoSchedule
+
+ ```
+ kubectl taint nodes node01 spray=mortein:NoSchedule
+ ```
+
+
+1.
+ Create a new pod with the nginx image and pod name as mosquito.
+
+ ```
+ kubectl run mosquito --image nginx
+ ```
+
+
+1.
+ What is the state of the POD?
+
+ ```
+ kubectl get pods
+ ```
+
+ Check the `STATUS` column
+
+
+1.
+ Why do you think the pod is in a pending state?
+
+ Mosquitoes don't like mortein!
+
+ So the answer is that the pod cannot tolerate the taint on the node.
+
+
+
+1.
+ Create another pod named bee with the nginx image, which has a toleration set to the taint mortein.
+
+ Allegedly bees are immune to mortein!
+
+ 1. Create a YAML skeleton for the pod imperatively
+
+ ```
+ kubectl run bee --image nginx --dry-run=client -o yaml > bee.yaml
+ ```
+ 1. Edit the file to add the toleration
+ ```
+ vi bee.yaml
+ ```
+ 1. Add the toleration. This goes at the same indentation level as `containers` as it is a POD setting.
+ ```yaml
+ tolerations:
+ - key: spray
+ value: mortein
+ effect: NoSchedule
+ operator: Equal
+ ```
+ 1. Save and exit, then create the pod
+ ```
+ kubectl create -f bee.yaml
+ ```
+
+
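+ For reference, the complete `bee.yaml` after the edit looks like this (matching the lab's original answer file; a skeleton generated with `kubectl run` may carry extra fields such as labels, which are harmless here):
+
+ ```yaml
+ apiVersion: v1
+ kind: Pod
+ metadata:
+   name: bee
+ spec:
+   containers:
+   - image: nginx
+     name: bee
+   tolerations:
+   - key: spray
+     value: mortein
+     effect: NoSchedule
+     operator: Equal
+ ```
+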
+1. Information only.
+
+1.
+ Do you see any taints on controlplane node?
+
+ ```
+ kubectl describe node controlplane
+ ```
+
+ Examine the `Taints` property.
+
+
+1.
+ Remove the taint on controlplane, which currently has the taint effect of NoSchedule.
+
+ ```
+ kubectl taint nodes controlplane node-role.kubernetes.io/control-plane:NoSchedule-
+ ```
+
+
+1.
+ What is the state of the pod mosquito now?
+
+ ```
+ kubectl get pods
+ ```
+
+
+1.
+ Which node is the POD mosquito on now?
+
+ ```
+ kubectl get pods -o wide
+ ```
+
+ This also explains why the `mosquito` pod could not be scheduled anywhere earlier: it could not tolerate the `controlplane` taint either, which we have now removed.
+
diff --git a/docs/03-Scheduling/10-Practice-Test-Node-Affinity.md b/docs/03-Scheduling/10-Practice-Test-Node-Affinity.md
index f0f4aa5a..20d4b76f 100644
--- a/docs/03-Scheduling/10-Practice-Test-Node-Affinity.md
+++ b/docs/03-Scheduling/10-Practice-Test-Node-Affinity.md
@@ -1,111 +1,121 @@
# Practice Test - Node Affinity
- Take me to [Practice Test](https://kodekloud.com/topic/practice-test-node-affinity-2/)
-
+
Solutions to practice test - node affinity
-- Run the command 'kubectl describe node node01' and count the number of labels under **`Labels Section`**.
-
-
-
- ```
- $ kubectl describe node node01
- ```
-
-
-- Run the command 'kubectl describe node node01' and see the label section
-
-
-
- ```
- $ kubectl describe node node01
- ```
-
-
-- Run the command 'kubectl label node node01 color=blue'.
-
-
-
- ```
- $ kubectl label node node01 color=blue
- ```
-
-
-- Run the below commands
-
-
-
- ```
- $ kubectl create deployment blue --image=nginx
- $ kubectl scale deployment blue --replicas=6
- ```
-
-
-- Check if master and node01 have any taints on them that will prevent the pods to be scheduled on them. If there are no taints, the pods can be scheduled on either node.
-
-
-
- ```
- $ kubectl describe nodes|grep -i taints
- $ kubectl get pods -o wide
- ```
-
-
-- Answer file at /var/answers/blue-deployment.yaml
-
-
-
- ```
- $ kubectl edit deployment blue
- ```
-
-
- Add the below under the template.spec section
-
-
-
- ```
- affinity:
- nodeAffinity:
- requiredDuringSchedulingIgnoredDuringExecution:
- nodeSelectorTerms:
- - matchExpressions:
- - key: color
- operator: In
- values:
- - blue
- ```
-
-
- - Run the command 'kubectl get pods -o wide' and see the Node column
-
-
-
- ```
- $ kubectl get pods -o wide
- ```
-
-
- - Answer file at /var/answers/red-deployment.yaml
- Add the below under the template.spec section
-
-
-
- ```
- affinity:
- nodeAffinity:
- requiredDuringSchedulingIgnoredDuringExecution:
- nodeSelectorTerms:
- - matchExpressions:
- - key: node-role.kubernetes.io/master
- operator: Exists
- ```
- ```
- $ kubectl create -f red-deployment.yaml
- ```
- ```
- $ kubectl get pods -o wide
- ```
-
-
-
-
+1.
+ How many Labels exist on node node01?
+
+ ```
+ kubectl describe node node01
+ ```
+
+ Look under `Labels` section
+
+ --- OR ---
+
+ ```
+ kubectl get node node01 --show-labels
+ ```
+
+
+
+1.
+ What is the value set to the label key beta.kubernetes.io/arch on node01?
+
+ Find the answer in the output from Q1.
+
+
+1.
+ Apply a label color=blue to node node01
+
+ ```
+ kubectl label node node01 color=blue
+ ```
+
+
+1.
+ Create a new deployment named blue with the nginx image and 3 replicas.
+
+ ```
+ kubectl create deployment blue --image=nginx --replicas=3
+ ```
+
+
+1.
+ Which nodes can the pods for the blue deployment be placed on?
+
+
+ Check whether controlplane and node01 have any taints that would prevent the pods from being scheduled on them. If there are no taints, the pods can be scheduled on either node.
+
+ ```
+ kubectl describe nodes controlplane | grep -i taints
+ kubectl describe nodes node01 | grep -i taints
+ ```
+
+
+1.
+ Set Node Affinity to the deployment to place the pods on node01 only.
+
+ Now we edit the deployment we created earlier in place. Remember that we labelled `node01` with `color=blue`? Now we are going to create an affinity to that label, which will "attract" the pods of the deployment to it.
+
+ 1.
+ ```
+ kubectl edit deployment blue
+ ```
+ 1. Add the YAML below under the template.spec section, i.e. at the same level as `containers`, as it is a POD setting. The affinity is considered only at the scheduling stage; however, this edit will cause the deployment to roll out again.
+
+ ```yaml
+ affinity:
+ nodeAffinity:
+ requiredDuringSchedulingIgnoredDuringExecution:
+ nodeSelectorTerms:
+ - matchExpressions:
+ - key: color
+ operator: In
+ values:
+ - blue
+ ```
+
+
+1.
+ Which nodes are the pods placed on now?
+
+ ```
+ kubectl get pods -o wide
+ ```
+
+
+1.
+ Create a new deployment named red with the nginx image and 2 replicas, and ensure it gets placed on the controlplane node only.
+
+ 1. Create a YAML template for the deployment
+
+ ```
+ kubectl create deployment red --image nginx --replicas 2 --dry-run=client -o yaml > red.yaml
+ ```
+ 1. Edit the file
+ ```
+ vi red.yaml
+ ```
+ 1. Add the affinity using the label stated in the question, placing it as before in the `blue` deployment
+ ```yaml
+ affinity:
+ nodeAffinity:
+ requiredDuringSchedulingIgnoredDuringExecution:
+ nodeSelectorTerms:
+ - matchExpressions:
+ - key: node-role.kubernetes.io/master
+ operator: Exists
+ ```
+ 1. Save, exit and create the deployment
+ ```
+ kubectl create -f red.yaml
+ ```
+ 1. Check the result
+ ```
+ kubectl get pods -o wide
+ ```
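+
+ Note: on newer clusters the controlplane node carries the label `node-role.kubernetes.io/control-plane` rather than `node-role.kubernetes.io/master` (the label was renamed in recent Kubernetes releases). Check the node's labels with `kubectl describe node controlplane` and use whichever key is present, e.g.:
+
+ ```yaml
+ affinity:
+   nodeAffinity:
+     requiredDuringSchedulingIgnoredDuringExecution:
+       nodeSelectorTerms:
+       - matchExpressions:
+         - key: node-role.kubernetes.io/control-plane
+           operator: Exists
+ ```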
+
+
+
+