diff --git a/content/en/docs/scaling-vms/_index.md b/content/en/docs/scaling-vms/_index.md
index b2b7f89..cbe3cfe 100644
--- a/content/en/docs/scaling-vms/_index.md
+++ b/content/en/docs/scaling-vms/_index.md
@@ -3,16 +3,16 @@ title: "Scaling VMs"
weight: 5
labfoldernumber: "05"
description: >
-  Managing multiple instances of a Virtual Machine
+  Managing multiple instances of a virtual machine
---

-Similar to regular container Pods there are cases when we want to have multiple instances of the same VM running. In this section
+Similar to regular pods, there are cases when we want to have multiple instances of the same VM running. In this section
we will have a look how KubeVirt supports this.

-## Lab Goals
+## Lab goals

-* Know how to use VirtualMachine Pools
+* Know how to use virtual machine pools
* Know the difference for ephemeral VMs and VMs requiring persistent storage
-* Create and use a pool
+* Create and use a virtual machine pool

diff --git a/content/en/docs/scaling-vms/vm-images.md b/content/en/docs/scaling-vms/vm-images.md
index bf33707..94586d8 100644
--- a/content/en/docs/scaling-vms/vm-images.md
+++ b/content/en/docs/scaling-vms/vm-images.md
@@ -3,43 +3,43 @@ title: "VM disk images"
weight: 51
labfoldernumber: "05"
description: >
-  Creating disk images for scaling the virtual machines.
+  Creating disk images for scaling virtual machines
---

-If we want to run VMs at scale it makes sense to manage a set of base images used for the VMs. It is not very convenient
-for each VM to spin up and install requirements itself. There are several ways we can distribute VM images in our cluster.
+If we want to run VMs at scale, it makes sense to manage a set of base images to use for these VMs. It is not very convenient
+to spin up each single VM and have it install its requirements itself. There are several ways we can distribute VM images in our cluster.

-* Distribute images as ephemeral container disks using a container registry.
+* Distribute images as ephemeral container disks using a container registry
  * Be aware of the non-persistent root disk
-  * Depending on the disk size this approach may not be the best choice.
-* Create a namespace (e.g. `vm-images`) with pre-provisioned pvcs containing base disk images.
-  * Each VM would then use CDI to clone the pvc from the `vm-images` namespace to the local namespace.
+  * Depending on the disk size, this approach may not be the best choice
+* Create a Namespace (e.g., `vm-images`) with pre-provisioned PVCs containing base disk images
+  * Each VM would then use CDI to clone the PVC from the `vm-images` Namespace to the local Namespace

-At the end of this section we will have two pvcs containing base disks in our namespace:
+At the end of this section, we will have two PVCs containing base disks in our Namespace:

* `fedora-cloud-base`: Original Fedora Cloud
* `fedora-cloud-nginx-base`: Fedora Cloud with nginx installed


-## Creating a Fedora cloud image with nginx
+## Creating a Fedora Cloud image with nginx

-In the previous section we created a VM using cloud-init to install nginx and start the webserver. This is not very
-useful as every VM in the pool would do the same initializing process and install nginx. Eventually there would be
-different versions of nginx in the vm pool depending on when the VM was started and when the installation took place.
+In the previous section, we created a VM using cloud-init to install nginx and start the webserver.
+If we created a pool based on this, each VM would go through the same initialization process and install nginx.
+This is obviously not very efficient. Additionally, there would eventually be different versions of nginx in the VM pool
+depending on when the VM was started and when the installation took place.

-Let us a more generic process and create a base image with nginx already available.
+In order to optimize this, let's create a base image which has nginx already installed instead of installing it during the first boot.

{{% alert title="Note" color="info" %}}
-Normally we would to this in a central namespace like `vm-images`. In this lab you will use your own namespace ``.
+Normally we would do this in a central Namespace like `vm-images`. In this lab you will use your own Namespace ``.
{{% /alert %}}


### {{% task %}} Create the fedora-cloud-base disk

-First we need to create our base disk for the Fedora Cloud 40. We will use a `DataVolume` and CDI to provision a PVC
-containing a disk bases on the container disk `{{% param "fedoraCloudCDI" %}}`.
+First we need to create our base disk for Fedora Cloud 40. We will use a `DataVolume` and CDI to provision a PVC
+containing a disk based on the container disk `{{% param "fedoraCloudCDI" %}}`.
+
+Create the following file `dv_fedora-cloud-base.yaml` in the folder `{{% param "labsfoldername" %}}/{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}`:

-Create the following file `dv_fedora-cloud-base.yaml` in the folder `{{% param "labsfoldername" %}}/{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}`:
```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
@@ -57,30 +57,33 @@ spec:
      storage: 6Gi
```

-Create the DataVolume with the following command
+Create the DataVolume with the following command:
+
```bash
kubectl apply -f {{% param "labsfoldername" %}}/{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}/dv_fedora-cloud-base.yaml --namespace=$USER
```

-This will download the container disk `{{% param "fedoraCloudCDI" %}}` and store it in a pvc named `fedora-cloud-base`.
+This will download the container disk `{{% param "fedoraCloudCDI" %}}` and store it in a PVC named `fedora-cloud-base`:

```bash
kubectl get datavolume --namespace=$USER
```

-Will result in something like:
+This will result in something like this:
+
```bash
NAME                PHASE             PROGRESS   RESTARTS   AGE
fedora-cloud-base   ImportScheduled   N/A
```

-and when the import process is completed
+Note the `Succeeded` phase when the import process is completed:
+
```bash
NAME                PHASE       PROGRESS   RESTARTS   AGE
fedora-cloud-base   Succeeded   100.0%                105s
```

-with the following command, you can verify the existence of the PVC, containing the imported images.
+With the following command, you can verify the existence of the PVC which contains the imported image:

```bash
kubectl get pvc --namespace=$USER
@@ -95,14 +98,15 @@ fedora-cloud-base   Bound    pvc-4c617a10-24f5-427c-8d11-da45723593e9   6Gi


### {{% task %}} Create the provisioner VM

-Next we will create a VM which installs our packages and create the final provisioned PVC. This VM will:
+Next we will create a VM which installs our packages and creates the final provisioned PVC. This VM will:

* Clone the Fedora base disk `fedora-cloud-base` to our provisioned disk `fedora-cloud-nginx-base`
-* Start VM and install nginx using cloud-init
-* Remove cloud-init config to rerun cloud-init in further VMs cloning this disk
+* Start a VM and install nginx using cloud-init
+* Remove the cloud-init configuration to make it possible for further VMs cloning this disk to rerun cloud-init
* Shutdown the VM

Create the file `vm_{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-fedora-nginx-provisioner.yaml` in the folder `{{% param "labsfoldername" %}}/{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}` with the following content:
+
```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
@@ -218,22 +222,24 @@ spec:
```

Create the VM with the following command:
+
```bash
kubectl apply -f {{% param "labsfoldername" %}}/{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}/vm_{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-fedora-nginx-provisioner.yaml --namespace=$USER
```

There are the following important details in this VM manifest:

-* `runStrategy: "RerunOnFailure"`: This tells KubeVirt to run the VM like a Kubernetes job. The VM will retry as long as the guest is not shutdown gracefully.
-* `cloudInitNoCloud`: These are the instructions to provision our disk. Please not the deletion of the cloud-init data to ensure it is rerun whenever we start a VM based on this disk. Further we shutdown the VM gracefully at the end of the script.
+* `runStrategy: "RerunOnFailure"`: This tells KubeVirt to run the VM like a Kubernetes Job. The VM will retry as long as the guest is not shut down gracefully.
+* `cloudInitNoCloud`: These are the instructions to provision our disk. Please note the deletion of the cloud-init data to ensure it is rerun whenever we start a VM based on this disk. Furthermore, we shut down the VM gracefully at the end of the script.
* `dataVolumeTemplate`: This creates a new PVC for the provisioned disk containing nginx.

-As mentioned, the VM has been scheduled due to the `runStrategy: "RerunOnFailure"` and there fore the VMI should be running, use the following command to verify that:
+As mentioned, the VM has been scheduled due to the `runStrategy: "RerunOnFailure"`, therefore the VMI should be running. Use the following command to verify that:

```bash
kubectl get vmi --namespace=$USER
```

-or
+
+or:

```bash
kubectl get pod --namespace=$USER
@@ -241,14 +247,14 @@ kubectl get pod --namespace=$USER

{{% alert title="Note" color="info" %}}
Take your time to closely inspect the cloud-init provisioning. This is a more complex version of a `cloudInitNoCloud`
-configuration combining two available `userData` formats. To achieve this we used the `#cloud-config-archive` as the
+configuration combining two available `userData` formats. To achieve this, we used the `#cloud-config-archive` as the
parent type. This allowed us to use multiple items with different types. The first type is the regular `#cloud-config`
format. For the second item we used a shell script `#!/bin/sh`.

-As specified above we delete the data in `/var/lib/cloud/instances`. As this is a base image we want to run cloud-init again.
+As specified above, we delete the data in `/var/lib/cloud/instances`. As this is a base image, we want to run cloud-init again.
{{% /alert %}}

-After the provisioning was successfully the VM will terminate itself due to the `shutdown now` statement.
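+To make that structure easier to picture, here is a condensed, illustrative sketch of such a `cloudInitNoCloud` user data
+section. It is not the full provisioning script from the manifest above, just the shape of a `#cloud-config-archive`
+combining a `#cloud-config` item with a shell script item:
+
+```yaml
+cloudInitNoCloud:
+  userData: |
+    #cloud-config-archive
+    # first item: a regular cloud-config (e.g. package installation)
+    - type: "text/cloud-config"
+      content: |
+        packages:
+          - nginx
+    # second item: a shell script executed by cloud-init
+    - type: "text/x-shellscript"
+      content: |
+        #!/bin/sh
+        # remove the instance data so cloud-init runs again on clones of this disk
+        rm -rf /var/lib/cloud/instances
+        # shut the VM down gracefully so RerunOnFailure does not restart it
+        shutdown now
+```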
+After the provisioning was successful, the VM will terminate itself due to the `shutdown now` statement. ```bash kubectl get vm --namespace=$USER @@ -260,7 +266,8 @@ NAME AGE STATUS READY lab06-fedora-nginx-provisioner 8m52s Stopped False ``` -After the was shutdown we will see a `fedora-cloud-nginx-base` pvc in our namespace: +After the VM has been shut down, we will see a `fedora-cloud-nginx-base` PVC in our Namespace: + ```bash kubectl get pvc ``` @@ -271,14 +278,15 @@ fedora-cloud-base Bound pvc-c1541b25-2414-41b2-84a6-99872a19d7c4 6G fedora-cloud-nginx-base Bound pvc-27ba0e54-ff7d-4782-bd23-0823f5f3010f 6Gi RWO longhorn 9m59s ``` -The VM is still present in the namespace. As we do not need it anymore we can delete the VM. In this case we have to be +The VM is still present in the Namespace. As we do not need it anymore, we can delete the VM. In this case, we have to be careful as the fedora-cloud-nginx-base belongs to the VM and would be removed when we just delete the VM. We have to use `--cascade=orphan` to not delete our provisioned disk. -Delete the VM without deleting the newly created pvc. +Delete the VM without deleting the newly created PVC: + ```bash kubectl delete vm {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-fedora-nginx-provisioner --cascade=orphan ``` -Now we have our immutable custom VM image based on Fedora Cloud and nginx installed. We can create as many VMs as we +Now we have our immutable custom VM image based on Fedora Cloud with nginx installed. We can now create as many VMs as we want using that custom image. diff --git a/content/en/docs/scaling-vms/vm-pools.md b/content/en/docs/scaling-vms/vm-pools.md index 3ceaa83..1d8b5d2 100644 --- a/content/en/docs/scaling-vms/vm-pools.md +++ b/content/en/docs/scaling-vms/vm-pools.md @@ -3,30 +3,31 @@ title: "VirtualMachine Pools" weight: 52 labfoldernumber: "05" description: > - Using VirtualMachine Pools + Using VirtualMachinePools --- A VirtualMachinePool tries to ensure that a specified number of virtual machines are always in a ready state. -However, the VirtualMachinePool does not maintain any state or provide guarantees about the maximum number of VMs +However, the virtual machine pool does not maintain any state or provide guarantees about the maximum number of VMs running at any given time. For instance, the pool may initiate new replicas if it detects that some VMs have entered an unknown state, even if those VMs might still be running. -## Using a VirtualMachinePool +## Using a virtual machine pool -Using the custom resource `VirtualMachinePool` we can specify a template for our VM. A VirtualMachinePool consists of a -vm specification just like a regular `VirtualMachine`. This specification resides in `spec.virtualMachineTemplate.spec`. -Beside the VM specification the pool requires some additional metadata like labels to keep track of the VMs in the pool. +Using the custom resource VirtualMachinePool, we can specify a template for our VM. A VirtualMachinePool consists of a +VM specification just like a regular VirtualMachine. This specification resides in `spec.virtualMachineTemplate.spec`. +Besides the VM specification, the pool requires some additional metadata like labels to keep track of the VMs in the pool. This metadata resides in `spec.virtualMachineTemplate.metadata`. The amount of VMs we want the pool to manage is specified as `spec.replicas`. This number defaults to `1` if it is left empty. 
If you change the number of replicas in-flight, the controller will react to it and change the VMs running in the pool. -The pool controller needs to keep track of VMs running in this pool. This is done by specifying a `spec.selector`. This +The pool controller needs to keep track of the VMs running in its pool. This is done by specifying a `spec.selector`. This selector must match the labels in `spec.virtualMachineTemplate.metadata.labels`. -A basic `VirtualMachinePool` template looks like this: +A basic VirtualMachinePool template looks like this: + ```yaml apiVersion: pool.kubevirt.io/v1alpha1 kind: VirtualMachinePool @@ -45,13 +46,14 @@ spec: ``` {{% alert title="Note" color="info" %}} -Be aware that if `spec.selector` does not match `spec.virtualMachineTemplate.metadata.labels` the controller will do nothing -except logging an error. Further, it is your responsibility to not create two `VirtualMachinePool`s conflicting with each other. +Be aware that if `spec.selector` does not match `spec.virtualMachineTemplate.metadata.labels`, the controller will do nothing +except log an error. Further, it is your responsibility to not create two VirtualMachinePools conflicting with each other. {{% /alert %}} -To avoid conflicts a common practice is to use the label `kubevirt.io/vmpool` and simply set it to the `metadata.name` of the `VirtualMachinePool`. +To avoid conflicts, a common practice is to use the label `kubevirt.io/vmpool` and simply set it to the `metadata.name` of the `VirtualMachinePool`. As an example this could look like this: + ```yaml apiVersion: pool.kubevirt.io/v1alpha1 kind: VirtualMachinePool @@ -75,18 +77,19 @@ spec: ``` -## {{% task %}} Preparation for our Virtual Machine +## {{% task %}} Preparation for our virtual machine At the beginning of this lab we created a custom disk based on Fedora Cloud with nginx installed. We will use this image -for our VirtualMachinePool. We still have to use cloud-init to configure our login-credentials. +for our VirtualMachinePool. We still have to use cloud-init to configure our login credentials. -Since we have done this in a previous lab you can practice creating a cloud-init. The script should: +Since we have done this in a previous lab, you can practice using a cloud-init. The script should: * Set a password and configure it to not expire * Set the timezone to `Europe/Zurich` -{{% details title="Task Hint" %}} +{{% details title="Task hint" %}} Create a file `cloudinit-userdata.yaml` in the folder `{{% param "labsfoldername" %}}/{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}` with the following content: + ```yaml #cloud-config password: kubevirt @@ -95,6 +98,7 @@ timezone: Europe/Zurich ``` Create the secret: + ```bash kubectl create secret generic {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-cloudinit --from-file=userdata={{% param "labsfoldername" %}}/{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}/cloudinit-userdata.yaml --namespace=$USER ``` @@ -106,6 +110,7 @@ kubectl create secret generic {{% param "labsubfolderprefix" %}}{{% param "labfo Now we have all our prerequisites in place and are ready to create our virtual machine pool. 
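+If you want to double-check this prerequisite before building the pool, you can inspect the secret (the name below
+assumes you created it as shown in the hint above):
+
+```bash
+kubectl get secret {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-cloudinit --namespace=$USER
+# optionally decode the userdata key to confirm its content
+kubectl get secret {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-cloudinit --namespace=$USER -o jsonpath='{.data.userdata}' | base64 -d
+```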
Create a file `vmpool_{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-webserver.yaml` in the folder `{{% param "labsfoldername" %}}/{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}` and start with the following boilerplate config:
+
```yaml
apiVersion: pool.kubevirt.io/v1alpha1
kind: VirtualMachinePool
@@ -129,11 +134,11 @@ spec:
```

Now edit the section `spec.virtualMachineTemplate.spec` to specify your virtual machine. You can have a look at the cloud-init
-vm from the previous lab. Make sure the vm has the following characteristics:
+VM from the previous lab. Make sure the VM has the following characteristics:

-* Use a dataVolumeTemplate to clone `fedora-cloud-nginx-base` pvc to the `{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-webserver-disk` pvc.
-* Mount this pvc as the `datavolumedisk`
-* Use a `cloudInitNoCloud` named `cloudinitdisk` and reference the created secret to initialize our credentials.
+* Use a dataVolumeTemplate to clone the `fedora-cloud-nginx-base` PVC to the `{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-webserver-disk` PVC
+* Mount this PVC as the `datavolumedisk`
+* Use a `cloudInitNoCloud` named `cloudinitdisk` and reference the created secret to initialize our credentials


{{% onlyWhen tolerations %}}
@@ -144,7 +149,8 @@ Don't forget the `tolerations` from the setup chapter to make sure the VM will b
{{% /onlyWhen %}}

{{% details title="Task Hint" %}}
-Your VirtualMachinePool should look like this (Make sure you replace `` to your username):
+Your VirtualMachinePool should look like this (make sure you replace `` with your username):
+
```yaml
apiVersion: pool.kubevirt.io/v1alpha1
kind: VirtualMachinePool
@@ -217,6 +223,7 @@ spec:
{{% /details %}}

Create the VirtualMachinePool with:
+
```bash
kubectl apply -f {{% param "labsfoldername" %}}/{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}/vmpool_{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-webserver.yaml --namespace=$USER
```
@@ -224,28 +231,30 @@ kubectl apply -f {{% param "labsfoldername" %}}/{{% param "labsubfolderprefix" %
virtualmachinepool.pool.kubevirt.io/{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-webserver created
```

-This will also automatically create two VMs and two VMIs
+This will also automatically create two VMs and two VMIs:

```bash
kubectl get vm --namespace=$USER
```

-or
+and:

```bash
kubectl get vmi --namespace=$USER
```

-As we used `spec.virtualMachineTemplate.spec.dataVolumeTemplates` the VirtualMachinePool will create a disk for each
-instance in the pool. As we have configured our replicas to be `2` there should be two disks created. Each disk as its
-sequential id as postfix of the disk.
+As we used `spec.virtualMachineTemplate.spec.dataVolumeTemplates`, the VirtualMachinePool will create a disk for each
+instance in the pool. As we have configured two replicas, there should also be two disks, each with its
+sequential id as a suffix of the disk name.
+
+Investigate the availability of our PVC for the webserver instances:

-Investigate the availability of our pvc for the webserver instances
```bash
kubectl get pvc --namespace=$USER
```

-We see the two disk images to be present in our namespace. This means that each our instance is a completely unique and
+We can see that the two disk images are present in our Namespace. This means that each of our instances is a completely unique and
independent stateful instance using its own disk.
+
```
NAME                                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-webserver-disk-0   Bound    pvc-95931127-195a-4814-82d0-11d604cdceae   6Gi        RWO            longhorn                               3m42s
@@ -258,6 +267,7 @@ NAME                                 STATUS   VOLUME                                     CA

Create a service to access our webservers from within the webshell. Create a file `svc_{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-webserver.yaml` in the folder `{{% param "labsfoldername" %}}/{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}` with the following content:
+
```yaml
apiVersion: v1
kind: Service
@@ -274,52 +284,55 @@ spec:
```

Apply the service with:
+
```bash
kubectl apply -f {{% param "labsfoldername" %}}/{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}/svc_{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-webserver.yaml --namespace=$USER
```
+
```
service/{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-webserver created
```

-From within your webshell try to access the service using (make sure you replace `$USER` with your username):
+From within your webshell, try to access the service using the command below. Make sure you replace `$USER` with your username:
+
```bash
curl -s {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-webserver.$USER.svc.cluster.local
```
+
```
Hello from {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-webserver-0
GMT time:   Monday, 02-Sep-2024 14:05:04 GMT
Local time: Monday, 02-Sep-2024 14:05:04 UTC
```

-If you issue the request multiple times and watch for the greeting webserver. Do you see that both webservers respond in a
-loadbalanced way? This is the default behaviour of kubernetes service.
+Issue the request multiple times and watch which webserver greets you. Do you see that both webservers respond in a
+load-balanced way? This is the default behaviour of Kubernetes services.


## Unique Secrets and ConfigMaps

We have seen that the VirtualMachinePool created unique disks for our webserver. However, the referenced secret in the
`cloudInitNoCloud` section is the same and all instances access und use the same secret. If we had used machine specific
-settings in this config this would have been a problem.
+settings in this config, this would be a problem.

This is the default behaviour, but it can be changed using `AppendPostfixToSecretReferences` and
`AppendPostfixToConfigMapReferences` in the VirtualMachinePool `spec` section. When these booleans are set to true, the VirtualMachinePool ensures that
-references to Secrets or ConfigMaps have the sequential id as postfix. It is your responsibility to pre-generate the
-secrets with the postfixes.
+references to Secrets or ConfigMaps have the sequential id as a suffix. It is your responsibility to pre-generate the
+secrets with the appropriate suffixes.


## Scaling the VirtualMachinePool

-As the VirtualMachinePool implements the kubernetes standard `scale` subresource you could scale the VirtualMachinePool using
-the `kubectl scale` command.
+As the VirtualMachinePool implements the Kubernetes standard `scale` subresource, you can scale the VirtualMachinePool using
+the `kubectl scale` command:

```bash
kubectl scale vmpool {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-webserver --replicas 1 --namespace=$USER
```


-## Horizontal Pod Autoscaler
+## Horizontal pod autoscaler

-The Horizontal Pod Autoscaler (HPA)[^1] can be used to manage the replica count depending on resource usage.
+The HorizontalPodAutoscaler (HPA)[^1] resource can be used to manage the replica count depending on resource usage. ```yaml apiVersion: autoscaling/v1 @@ -342,8 +355,9 @@ This will ensure that the VirtualMachinePool is automatically scaled depending o ## {{% task %}} Scale down the VirtualMachinePool Scale down the VM pool with: + ```bash kubectl scale vmpool {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-webserver --replicas 0 --namespace=$USER ``` -[^1]: [Horizontal Pod Autoscaling](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/) +[^1]: [Horizontal pod autoscaling](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/) diff --git a/content/en/docs/scaling-vms/vm-replica-sets.md b/content/en/docs/scaling-vms/vm-replica-sets.md index df659a8..4383e0d 100644 --- a/content/en/docs/scaling-vms/vm-replica-sets.md +++ b/content/en/docs/scaling-vms/vm-replica-sets.md @@ -1,33 +1,34 @@ --- -title: "VirtualMachine ReplicaSets" +title: "Virtual machine replica sets" weight: 53 labfoldernumber: "05" description: > - Using VirtualMachine ReplicaSets + Using virtual machine replica sets --- -Just like a `VirtualMachinePool` a `VirtualMachineInstanceReplicaSet` tries to ensure that a specified number of virtual machines -are always in a ready state. The `VirtualMachineInstanceReplicaSet` is very similar to a Kubernetes ReplicaSet[^1]. +Just like a VirtualMachinePool, a VirtualMachineInstanceReplicaSet resource tries to ensure that a specified number of virtual machines +are always in a ready state. The `VirtualMachineInstanceReplicaSet` is very similar to the Kubernetes ReplicaSet[^1]. -However, the `VirtualMachineInstanceReplicaSet` does not maintain any state or provide guarantees about the maximum number of VMs -running at any given time. For instance, the `VirtualMachineInstanceReplicaSet` may initiate new replicas if it detects that some VMs have entered +However, the VirtualMachineInstanceReplicaSet does not maintain any state or provide guarantees about the maximum number of VMs +running at any given time. For instance, the VirtualMachineInstanceReplicaSet may initiate new replicas if it detects that some VMs have entered an unknown state, even if those VMs might still be running. -## Using a VirtualMachineReplicaSet +## Using a VirtualMachineInstanceReplicaSet -Using the custom resource `VirtualMachineInstanceReplicaSet` we can specify a template for our VM. A `VirtualMachineInstanceReplicaSet` consists of a -vm specification just like a regular `VirtualMachine`. This specification resides in `spec.template`. -Beside the VM specification the replica set requires some additional metadata like labels to keep track of the VMs in the replica set. +Using the custom resource VirtualMachineInstanceReplicaSet, we can specify a template for our VM. A VirtualMachineInstanceReplicaSet consists of a +VM specification just like a regular VirtualMachine. This specification resides in `spec.template`. +Besides the VM specification, the replica set requires some additional metadata like labels to keep track of the VMs in the replica set. This metadata resides in `spec.template.metadata`. -The amount of VMs we want the replica set to manage is specified as `spec.replicas`. This number defaults to `1` if it is left empty. +The amount of VMs we want the replica set to manage is specified as `spec.replicas`. This number defaults to 1 if it is left empty. 
If you change the number of replicas in-flight, the controller will react to it and change the VMs running in the replica set.

-The replica set controller needs to keep track of VMs running in this replica set. This is done by specifying a `spec.selector`. This
+The replica set controller needs to keep track of the VMs running in this replica set. This is done by specifying a `spec.selector`. This
selector must match the labels in `spec.template.metadata.labels`.

A basic `VirtualMachineInstanceReplicaSet` template looks like this:
+
```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceReplicaSet
@@ -46,11 +47,12 @@ spec:
```

{{% alert title="Note" color="info" %}}
-Be aware that if `spec.selector` does not match `spec.template.metadata.labels` the controller will do nothing
-except logging an error. Further, it is your responsibility to not create two `VirtualMachineInstanceReplicaSet` conflicting with each other.
+Be aware that if `spec.selector` does not match `spec.template.metadata.labels`, the controller will do nothing
+except log an error. Further, it is your responsibility to not create two `VirtualMachineInstanceReplicaSet`s conflicting with each other.
{{% /alert %}}

-As an example this could look like this:
+As an example, this could look like this:
+
```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceReplicaSet
@@ -72,9 +74,9 @@ spec:
```


### When to use VirtualMachineInstanceReplicaSet

-You should use `VirtualMachineInstanceReplicaSet` whenever you want multiple exactly identical instances not requiring
-persistent disk state. In other words you should only use replica sets if your VM is ephemeral and every used disk is
-read only. If the VM writes data this should be only allowed in a tmpfs.
+You should use VirtualMachineInstanceReplicaSet whenever you want multiple exactly identical instances not requiring
+persistent disk state. In other words, you should only use replica sets if your VM is ephemeral and every used disk is
+read-only. If the VM writes data, this should only be allowed in a tmpfs.

{{% alert title="Warning" color="warning" %}}
You should expect data corruption if the VM writes data to a storage not being a tmpfs or an ephemeral type.
@@ -82,19 +84,19 @@ You should expect data corruption if the VM writes data to a storage not being a

Volume types which can safely be used with replica sets are:

-* cloudInitNoCloud
-* ephemeral
-* containerDisk
-* emptyDisk
-* configMap
-* secret
+* `cloudInitNoCloud`
+* `ephemeral` (see the sketch before the next task)
+* `containerDisk`
+* `emptyDisk`
+* `configMap`
+* `secret`
* any other type, if the VM instance writes internally to a tmpfs

{{% alert title="Note" color="info" %}}
-This is the most important difference to a `VirtualMachinePool`. If you want to manage multiple unique instances using
-persistent storage you have to use a `VirtualMachinePool`. If you want to manage identical ephemeral instances which do
+This is the most important difference to a VirtualMachinePool. If you want to manage multiple unique instances using
+persistent storage, you have to use a VirtualMachinePool. If you want to manage identical ephemeral instances which do
not require persistent storage or different data sources (startup scripts, configmaps, secrets) you should use a
-`VirtualMachineInstanceReplicaSet`.
+VirtualMachineInstanceReplicaSet.
{{% /alert %}}

@@ -107,6 +109,7 @@ container disks are ephemeral this fits this use case very well.
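+As an illustration of the `ephemeral` volume type from the list above: it is backed by an existing PVC, but all writes
+go to a local copy-on-write layer which is discarded when the instance stops, which is what makes it safe for replica sets.
+A minimal sketch, reusing the base disk PVC created earlier in this lab:
+
+```yaml
+volumes:
+  - name: ephemeraldisk
+    ephemeral:
+      # read-only backing store; writes land in a temporary local overlay
+      persistentVolumeClaim:
+        claimName: fedora-cloud-nginx-base
+```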
### {{% task %}} Create a VirtualMachineInstanceReplicaSet

Create a file `vmirs_{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-cirros.yaml` in the folder `{{% param "labsfoldername" %}}/{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}` and start with the following boilerplate config:
+
```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceReplicaSet
@@ -136,7 +139,8 @@ Enhance the `spec.template.spec` block to start a VM matching these criteria:
* Request `100m` of cpu
* Limit `300m` of cpu

-Use this empty `cloudInitNoCloud` block to prevent cirros from trying to instantiate using a remote url:
+Use this empty `cloudInitNoCloud` block to prevent cirros from trying to instantiate using a remote URL:
+
```yaml
- name: cloudinitdisk
  cloudInitNoCloud:
@@ -154,6 +158,7 @@ Don't forget the `tolerations` from the setup chapter to make sure the VM will b

{{% details title="Task Hint" %}}
Your VirtualMachineInstanceReplicaSet should look like this:
+
```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceReplicaSet
@@ -211,6 +216,7 @@ spec:

```bash
kubectl apply -f {{% param "labsfoldername" %}}/{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}/vmirs_{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-cirros.yaml --namespace=$USER
```
+
```
virtualmachineinstancereplicaset.kubevirt.io/{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-cirros-replicaset created
```


### {{% task %}} Access the VirtualMachineInstanceReplicaSet

-There is not much the cirros disk image provides beside entering the VMs using the console.
+There is not much the cirros disk image provides besides entering the VMs using the console.

Check the availability of the `VirtualMachineInstanceReplicaSet` with:
+
```bash
kubectl get vmirs --namespace=$USER
```
+
```
NAME                     DESIRED   CURRENT   READY   AGE
{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-cirros-replicaset   2         2         2       1m
```

List the created VirtualMachineInstances using:
+
```bash
kubectl get vmi --namespace=$USER
```
+
```
NAME                                AGE    PHASE     IP             NODENAME            READY
{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-cirros-replicasetnc5p5   11m    Running   10.244.3.96    training-worker-0   True
@@ -240,7 +250,8 @@ NAME                                AGE    PHASE     IP             NODENAME
{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-cirros-replicasetsn4f5   8m6s   Running   10.244.3.249   training-worker-0   True
```

-You can access the console using the name of the vmi with `virtctl`:
+You can access the console using the name of the VMI with `virtctl`:
+
```bash
virtctl console lab06-cirros-replicasetnc5p5 --namespace=$USER
```


## Scaling the VirtualMachineInstanceReplicaSet

-As the VirtualMachineInstanceReplicaSet implements the kubernetes standard `scale` subresource you could scale the VirtualMachineInstanceReplicaSet using
-the `kubectl scale` command.
+As the VirtualMachineInstanceReplicaSet implements the Kubernetes standard `scale` subresource, you can scale the VirtualMachineInstanceReplicaSet using
+the `kubectl scale` command:

```bash
kubectl scale vmirs {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-cirros-replicaset --replicas 1 --namespace=$USER
```


-## Horizontal Pod Autoscaler
+## Horizontal pod autoscaler

-The Horizontal Pod Autoscaler (HPA)[^1] can be used to manage the replica count depending on resource usage.
+The HorizontalPodAutoscaler (HPA)[^1] resource can be used to manage the replica count depending on resource usage: ```yaml apiVersion: autoscaling/v1 @@ -278,9 +288,11 @@ spec: This will ensure that the VirtualMachineInstanceReplicaSet is automatically scaled depending on the CPU utilization. You can check the consumption of your pods with: + ```bash kubectl top pod --namespace=$USER ``` + ``` NAME CPU(cores) MEMORY(bytes) user2-webshell-f8b44dfdc-92qjj 6m 188Mi @@ -307,67 +319,78 @@ spec: targetCPUUtilizationPercentage: 75 ``` -Create the Horizontal Pod Autoscaler in the cluster: +Create the HorizontalPodAutoscaler in the cluster: + ```bash kubectl apply -f {{% param "labsfoldername" %}}/{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}/hpa_{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-cirros.yaml --namespace=$USER ``` + ``` horizontalpodautoscaler.autoscaling/{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-cirros-replicaset created ``` -Check the status of the Horizontal Pod Autoscaler with: +Check the status of the HorizontalPodAutoscaler with: + ```bash kubectl get hpa --namespace=$USER ``` + ``` NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-cirros-replicaset VirtualMachineInstanceReplicaSet/{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-cirros-replicaset cpu: 2%/75% 1 2 1 7m44s ``` -Open a second termional in the webshell and connect to the console of one of your vm instances: +Open a second terminal in the webshell and connect to the console of one of your VM instances: + ```bash kubectl get vmi --namespace=$USER ``` + ``` NAME AGE PHASE IP NODENAME READY {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-cirros-replicasetck6rw 9m47s Running 10.244.3.171 training-worker-0 True ``` -Pick the vmi and open the console: +Pick the VMI and open the console: + ```bash virtctl console {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-cirros-replicaset --namespace=$USER ``` Start to generate some load. Issue the following command in your webshell: + ```bash load() { dd if=/dev/zero of=/dev/null & }; load; read; killall dd ``` In the other webshell check the following commands regularly: + ```bash kubectl top pod --namespace=$USER ``` + ```bash kubectl get hpa --namespace=$USER ``` -After a short delay the Horizontal Pod Autoscaler kicks in and scales your replica set to `2`. +After a short delay the HorizontalPodAutoscaler kicks in and scales your replica set to `2`: + ``` NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-cirros-replicaset VirtualMachineInstanceReplicaSet/{{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-cirros-replicaset cpu: 283%/75% 1 2 2 11m ``` -And you'll see a second VMI will be started: +And you will see that a second VMI is started: ```bash kubectl get vmi --namespace=$USER ``` -After the Horizontal Pod Autoscaler scaled up your instances head over to the console where you generated the load. -Hit `enter` in the console to stop the load generation. By default, the Horizontal Pod Autoscaler tries to stabilize +After the horizontal pod autoscaler scaled up your instances, head over to the console where you generated the load. +Hit `enter` in the console to stop the load generation. By default, the horizontal pod autoscaler tries to stabilize the replica set by using a `stabilizationWindowSeconds` of 300 seconds. 
This means that it will keep the replica set stable -for at least 300 seconds before issuing a scale down. For more information about the configuration head over to -the [Horizontal Pod Autoscaler documentation](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/). +for at least 300 seconds before issuing a scale down. For more information about the configuration, head over to +the [Horizontal pod autoscaler documentation](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/). ## End of lab @@ -375,20 +398,22 @@ the [Horizontal Pod Autoscaler documentation](https://kubernetes.io/docs/tasks/r {{% alert title="Cleanup resources" color="warning" %}} {{% param "end-of-lab-text" %}} Delete your `VirtualMachinePool`: + ```bash kubectl delete vmpool {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-webserver --namespace=$USER ``` Delete your `VirtualMachineInstanceReplicaSet`: + ```bash kubectl delete vmirs {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-cirros-replicaset --namespace=$USER ``` Delete the horizontal pod autoscaler + ```bash kubectl delete hpa {{% param "labsubfolderprefix" %}}{{% param "labfoldernumber" %}}-cirros-replicaset --namespace=$USER ``` {{% /alert %}} - [^1]: [Kubernetes ReplicaSet](https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/)