Merge pull request #3 from dialvare/main
4.12 content update + variables
hgeaydem authored Feb 27, 2023
2 parents d2e60d7 + a7145d1 commit 8fd7b08
Showing 55 changed files with 633 additions and 607 deletions.
66 changes: 31 additions & 35 deletions files/lab/workshop/content/cloning.md
In this lab we're going to clone a workload and see that it's identical to the source. We will:

- Download and customise a Fedora 34 image
- Launch it as a virtual machine via OpenShift Virtualization
- Install a basic application inside the VM
- Clone the VM
- Test the clone to make sure it's identical to the source

Before we begin, we need to set up our Fedora 34 cloud image. Let's first connect to our bastion host so we can process and serve the image from there. First, ssh to the bastion node (the password is *%cnvlab-bastion-password%*):

```execute-1
ssh %bastion-username%@%bastion-host%
```

It should show the following:

~~~bash
-rw-r--r--. 1 root root 5368709120 Apr 23 2021 %cloud-image-name-fedora%
~~~

Now we need to customise this image. We're going to do the following:

* Permit root login over ssh
* Reset the root password to a secure one
Now we're ready to customise the downloaded image. First we enable ssh logins for root:

```execute-1
virt-customize -a /var/www/html/%cloud-image-name-fedora% --run-command 'sed -i s/^#PermitRootLogin.*/PermitRootLogin\ yes/ /etc/ssh/sshd_config && touch /.autorelabel'
```

Then remove cloud-init (as we don't need it during this lab) and **set the root password** to something secure:

```copy-and-edit
virt-customize -a /var/www/html/%cloud-image-name-fedora% --uninstall=cloud-init --root-password password:<set plaintext secure password here> --ssh-inject root:file:/root/.ssh/id_rsa.pub
```

~~~bash
system:serviceaccount:workbook:cnv
~~~

> **NOTE**: Make sure that you've disconnected from the bastion machine before proceeding.

Now that we've prepared our Fedora 34 VM and placed it in an accessible location on our bastion host (for reference it's at: http://%bastion-host%:81/%cloud-image-name-fedora%), let's build a PVC for this image, allowing us to build a VM from it afterwards, **one that will become our "original" or "source" virtual machine for cloning purposes**. First, make sure you're in the default project:

```execute-1
oc project default
```

And make sure the claim is `Bound` and has a matching volume:

~~~bash
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
fc34-original   Bound    pvc-ebc89d26-13f1-45e4-82d1-be665982df36   40Gi       RWX            ocs-storagecluster-ceph-rbd   27s
~~~
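
For reference, the (mostly elided) definition above asks CDI to import the image from the bastion's web server into a PVC. A minimal sketch of such a definition follows; the name, size, storage class, and endpoint come from the lab text, while every other field is an assumption and the lab's real manifest may differ:

~~~yaml
# Sketch only: a PVC annotated so CDI imports the Fedora image over HTTP.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fc34-original
  annotations:
    cdi.kubevirt.io/storage.import.endpoint: "http://%bastion-host%:81/%cloud-image-name-fedora%"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 40Gi
  storageClassName: ocs-storagecluster-ceph-rbd
~~~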

As before, we can watch the process and see the pods. You'll need to be quick with the next two commands, as it's only a 10GB image:

```execute-1
oc get pod/importer-fc34-original
```

~~~bash
importer-fc34-original   1/1     Running   0          21s
~~~

Follow the importer logs:

```copy
oc logs importer-fc34-original -f
```

~~~bash
Name:          importer-fc34-original
Namespace:     default
Priority:      0
Node:          ocp4-worker2.aio.example.com/192.168.123.105
Start Time:    Fri, 17 Feb 2023 14:32:03 +0000
Labels:        app=containerized-data-importer
               cdi.kubevirt.io=importer
               cdi.kubevirt.io/storage.import.importPvcName=fc34-original
Volumes:
~~~

### Fedora 34 Virtual Machine

Now it's time to launch a Fedora VM based on the image we just created; this will become the original VM that we'll clone in a later step. Again, we are just using the same pieces we've been using throughout the labs. For review, we are using the `fc34-original` PVC we just prepared (created with CDI importing the Fedora image, stored on ODF/OCS), and we are utilising the standard bridged networking on the workers via the `tuning-bridge-fixed` construct - the same as we've been using for the other two virtual machines we created previously:

```execute-1
cat << EOF | oc apply -f -
```

You should see the following, noting that your IP address may be different and that the node name may vary:

~~~bash
NAME AGE PHASE IP NODENAME READY
fc34-original   81s   Running   192.168.123.65   ocp4-worker1.aio.example.com   True
~~~

> **NOTE:** The IP address for the Fedora 34 virtual machine may be missing in your output above, as it takes a while for the `qemu-guest-agent` to report the data through to OpenShift. We also requested an SELinux relabel for the VM, which takes some more time. You'll need to wait for the IP address to show before you can move on to the next steps.
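
For reference, the (largely elided) VM definition above combines those pieces roughly as follows. This is a sketch under assumptions, not the lab's exact manifest; in particular the disk and interface names are invented here:

~~~yaml
# Sketch only: a VM wiring the imported PVC to a virtio disk and the
# bridged network via the tuning-bridge-fixed attachment.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: fc34-original
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          disks:
            - name: disk0
              disk:
                bus: virtio
          interfaces:
            - name: nic0
              bridge: {}
      networks:
        - name: nic0
          multus:
            networkName: tuning-bridge-fixed
      volumes:
        - name: disk0
          persistentVolumeClaim:
            claimName: fc34-original
~~~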
When you've got an IP address, we should be able to SSH to it from our terminal:
```copy
ssh root@192.168.123.65
```

> **NOTE:** It will ask for the root password you set when we customised the downloaded image.

The following tasks should be performed from the VM's shell:

It should show the service as "active (running)":
~~~bash
● nginx.service - Nginx Podman container
Loaded: loaded (/etc/systemd/system/nginx.service; enabled; vendor preset: disabled)
     Active: active (running) since Tue 2023-02-22 01:30:48 UTC; 8s ago
Main PID: 9898 (podman)
Tasks: 11 (limit: 2345)
~~~
```copy
curl http://192.168.123.65
```

~~~bash
$ curl http://192.168.123.65
Server address: 192.168.123.65:80
Server name: fedora
Date: 22/Feb/2023:16:25:27 +0000
URI: /
Request ID: f0cdd44266c0495fa8aab6048736978b
~~~

> **NOTE**: We've purposely installed a plain-text server-side application here, as we cannot route to the 192.168.123.0/24 network via the internet. In a later step we'll change this to a visual one and expose it via the OpenShift ingress service, so it'll be available as a route.
~~~bash
datavolume.cdi.kubevirt.io/fc34-clone created
~~~

Usually a clone goes through a number of stages, and you can watch the progress through the `CloneScheduled` and `CloneInProgress` phases. However, in our case we're using OpenShift Container Storage, which makes an instant clone of a volume within the storage platform and doesn't require this process.
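
For reference, the clone we requested is simply a DataVolume whose source is the original PVC. In sketch form it looks roughly like this; everything beyond the names, size, and storage class is an assumption, and the lab's real manifest may differ:

~~~yaml
# Sketch only: a PVC-sourced DataVolume clone. CDI copies fc34-original
# into fc34-clone (with ODF/OCS this happens as an instant clone).
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: fc34-clone
spec:
  source:
    pvc:
      namespace: default
      name: fc34-original
  pvc:
    accessModes:
      - ReadWriteMany
    resources:
      requests:
        storage: 40Gi
    storageClassName: ocs-storagecluster-ceph-rbd
~~~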

You'll be able to view the status of the new PVC:

```execute-1
oc get pvc/fc34-clone
```

This should show the new claim:

~~~bash
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
fc34-clone Bound pvc-3c3943f9-0a5c-4ce9-a913-782f8754f418 40Gi RWX ocs-storagecluster-ceph-rbd 83s
~~~
~~~bash
NAME         AGE   PHASE     IP               NODENAME                       READY
fc34-clone 88s Running 192.168.123.66 ocp4-worker2.aio.example.com True
~~~

This machine will also be visible from the OpenShift Virtualization tab. Switch to the **full OpenShift console** (%cnvlab-console-url%). In case you are not logged in, use the *kubeadmin* user and the password: %cnvlab-kubeadmin-password%. Once logged in, navigate to "**Virtualization**" → "**Virtual Machines**" → "**fc34-clone**" → "**Console**". There, you can login using the "**root**" user with the password you've set earlier:

<img src="img/fc31-clone-console-new.png"/>

### Test the clone

Like before, we should be able to connect directly to the VM on port 80 via `curl` and view our simple NGINX-based application responding. Let's try it! Remember to use the IP address from **your** environment, as the example below may be different:

~~~copy
curl http://192.168.123.66
~~~

Which should show something similar to the following if our clone was successful:

~~~bash
Server address: 192.168.123.66:80
Server name: fedora
Date: 22/Feb/2023:15:58:20 +0000
URI: /
Request ID: 30d16f4250df0d0d82ec2af2ebb60728
~~~

Our VM was cloned! At least, the backend storage volume was cloned and we created a new virtual machine from it. Now you're probably thinking "wow, that was a lot of work just to clone a VM", and you'd be right! There's a much simpler workflow via the UI, one that copies over all of the same configuration without us having to define a new VM ourselves. Let's first delete our clone, and then we'll move on to re-cloning the original via the UI:

```execute-1
oc delete vm/fc34-clone pvc/fc34-clone
```

This should delete both objects at the same time:

~~~bash
virtualmachine.kubevirt.io "fc34-clone" deleted
persistentvolumeclaim "fc34-clone" deleted
~~~

Now, if we navigate to the OpenShift Console again, and ensure that we're in the list of **Virtual Machines** by selecting "**Virtualization**", we should see our "*fc34-original*" VM as stopped:

<img src="img/vm-stopped-new.png"/>

Select "*fc34-original*" and then from the "**Actions**" drop-down on the right hand side, select "**Clone**". This will bring up a new window where we can confirm our requirements:

<img src="img/clone-vm-new.png"/>

We'll leave the defaults here, but make sure to select "**Start virtual machine on clone**" as this will ensure that our freshly cloned VM is automatically started for us. When you're ready, select the blue "**Clone Virtual Machine**" button at the bottom; this will create an identical virtual machine for us, just with a new name, "*fc34-original-clone*".

As soon as this happens, a new virtual machine will be created and started for you. You can see this in "**Virtualization**" → "**Virtual Machines**" or via the CLI:


Let's check our VMI list:

```execute-1
oc get vmi
```

Here our running VM is showing with our new IP address, in the example case it's "*192.168.123.66*":

~~~bash
NAME AGE PHASE IP NODENAME READY
fc34-original-clone   89s   Running   192.168.123.66   ocp4-worker3.aio.example.com   True
~~~

Like before, we should be able to confirm that it really is our clone:

~~~copy
curl http://192.168.123.66
~~~

Which should show something similar to this:

~~~bash
Server address: 192.168.123.66:80
Server name: fedora
Date: 22/Feb/2023:17:25:27 +0000
URI: /
Request ID: a966b369edd1941e931d5caddcb099df
~~~