diff --git a/files/lab/deploy/parksmap-application-template.yaml b/files/lab/deploy/parksmap-application-template.yaml index dfc4b6e..4c3c2fe 100644 --- a/files/lab/deploy/parksmap-application-template.yaml +++ b/files/lab/deploy/parksmap-application-template.yaml @@ -361,7 +361,7 @@ objects: role: frontend spec: containers: - - image: quay.io/erkanercan/parksmap:latest + - image: quay.io/openshiftroadshow/parksmap:latest imagePullPolicy: Always name: "${PM_APPLICATION_NAME}" ports: @@ -433,4 +433,4 @@ objects: name: "${PM_APPLICATION_NAME}" weight: 100 port: - targetPort: 8080-tcp \ No newline at end of file + targetPort: 8080-tcp diff --git a/files/lab/workshop/content/cloning.md b/files/lab/workshop/content/cloning.md index 6692403..76bce24 100644 --- a/files/lab/workshop/content/cloning.md +++ b/files/lab/workshop/content/cloning.md @@ -336,7 +336,7 @@ cat >> /etc/systemd/system/nginx.service << EOF Description=Nginx Podman container Wants=syslog.service [Service] -ExecStart=/usr/bin/podman run --net=host docker.io/nginxdemos/hello:plain-text +ExecStart=/usr/bin/podman run --net=host quay.io/roxenham/nginxdemos:plain-text ExecStop=/usr/bin/podman stop --all [Install] WantedBy=multi-user.target @@ -393,12 +393,12 @@ Let's quickly verify that this works as expected. You should be able to navigate ```copy -curl http://192.168.123.69 +curl http://192.168.123.65 ``` ~~~bash -$ curl http://192.168.123.69 -Server address: 192.168.123.69:80 +$ curl http://192.168.123.65 +Server address: 192.168.123.65:80 Server name: fedora Date: 25/Nov/2021:15:09:21 +0000 URI: / @@ -568,35 +568,35 @@ fc34-clone 84s Running True fc34-original 76m Stopped False ~~~ -This machine should also get an IP address after a few minutes - it won't be the same as the original VM as the clone was given a new MAC address: +This machine should also get an IP address after a few minutes - it won't be the same as the original VM as the clone was given a new MAC address, you may need to be patient here until it shows you the IP address of the new VM: ```execute-1 oc get vmi ``` -In our example, this IP is "*192.168.123.70*": +In our example, this IP is "*192.168.123.66*": ~~~bash NAME AGE PHASE IP NODENAME READY -fc34-clone 88s Running 192.168.123.70 ocp4-worker2.aio.example.com True +fc34-clone 88s Running 192.168.123.66 ocp4-worker2.aio.example.com True ~~~ -> **Note** Give the command 2-3 minutes to report the IP. - -This machine will also be visible from the OpenShift Virtualization console. You can login using "**root/redhat**" if you want to try: +This machine will also be visible from the OpenShift Virtualization console, which you can navigate to using the top "**Console**" button, or by using your dedicated tab if you've created one. You can login using "**root/redhat**", by going into the "**Workloads**" --> "**Virtualization**" --> "**fc34-clone**" --> "**Console**", if you want to try: ### Test the clone -Like before, we should be able to just directly connect to the VM on port 80 via `curl` and view our simple NGINX based application responding. Let's try it! Remember to use to the IP address from yoir environment: +Like before, we should be able to just directly connect to the VM on port 80 via `curl` and view our simple NGINX based application responding. Let's try it! 
Remember to use to the IP address from **your** environment as the example below may be different: ~~~copy -$ curl http://192.168.123.70 +$ curl http://192.168.123.66 ~~~ +Which should show similar to the following, if our clone was successful: + ~~~bash -Server address: 192.168.123.70:80 +Server address: 192.168.123.66:80 Server name: fedora Date: 25/Nov/2021:15:58:20 +0000 URI: / @@ -652,17 +652,19 @@ Here our running VM is showing with our new IP address, in the example case it's ~~~bash NAME AGE PHASE IP NODENAME READY -fc34-original-clone 89s Running 192.168.123.71 ocp4-worker3.aio.example.com True +fc34-original-clone 89s Running 192.168.123.66 ocp4-worker3.aio.example.com True ~~~ Like before, we should be able to confirm that it really is our clone: ~~~bash -$ curl http://192.168.123.71 +$ curl http://192.168.123.66 ~~~ +Which should show something similar to this: + ~~~bash -Server address: 192.168.123.71:80 +Server address: 192.168.123.66:80 Server name: fedora Date: 25/Nov/2021:16:26:05 +0000 URI: / @@ -682,4 +684,4 @@ virtualmachine.kubevirt.io "fc34-original" deleted virtualmachine.kubevirt.io "fc34-original-clone" deleted ~~~ -Choose "Masquerade Networking" to continue with the lab. +Choose "**Masquerade Networking**" to continue with the lab. diff --git a/files/lab/workshop/content/deploy-application-components.md b/files/lab/workshop/content/deploy-application-components.md index 0a38c27..2dc5bf4 100644 --- a/files/lab/workshop/content/deploy-application-components.md +++ b/files/lab/workshop/content/deploy-application-components.md @@ -1,10 +1,8 @@ -In this lab, we will use OpenShift Web Console to deploy the frontend and backend components of the ParksMap application. -Parksmap application consists of one frontend web application, two backend applications and 2 databases. +In this lab, we will use the OpenShift Web Console to deploy the frontend and backend components of the ParksMap application, which comprises of one frontend web application, two backend applications and 2 databases: - ParksMap frontend web application, also called `parksmap`, and uses OpenShift's service discovery mechanism to discover the backend services deployed and shows their data on the map. -- Nationalparks backend application queries for national parks information (including their -coordinates) that is stored in a MongoDB database. +- NationalParks backend application queries for national parks information (including their coordinates) that are stored in a MongoDB database. - MLBParks backend application queries Major League Baseball stadiums in the US that are stored in an another MongoDB database. @@ -17,8 +15,7 @@ Parksmap frontend and backend components are shown in the diagram below: ### 1. Creating the Project -As a first step, we need to create a project where Parksmap application will be deployed. -You can create the project with the following command: +As a first step, we need to create a project where ParksMap application will be deployed. You can create the project with the following command: ```execute oc new-project %parksmap-project-namespace% @@ -26,22 +23,24 @@ oc new-project %parksmap-project-namespace% ### 2. Grant Service Account View Permissions -The parksmap frontend application continously monitors the **routes** of the backend applications. This requires granting additional permissions to access OpenShift API to learn about other **Pods**, **Services**, and **Route** within the **Project**. 
+The ParksMap frontend application continuously monitors the **routes** of the backend applications. This requires granting additional permissions to access the OpenShift API to learn about other **Pods**, **Services**, and **Routes** within the **Project**. 

```execute
oc policy add-role-to-user view -z default
```

-The *oc policy* command above is giving a defined _role_ (*view*) to the default user so that applications in current project can access OpenShift API.
+You should see the following output:

-### 3. Login to OpenShift Web Console
+~~~bash
+clusterrole.rbac.authorization.k8s.io/view added: "default"
+~~~

-We will use OpenShift Web Console to deploy Parksmap Web Application components.
+The *oc policy* command above is giving a defined _role_ (*view*) to the default service account so that applications in the current project can access the OpenShift API.

-Please go to the [Web Console](http://console-openshift-console.%cluster_subdomain%/k8s/cluster/projects) outside the lab environment login as the kubeadmin user with the credentials you retrived previously.
+### 3. Navigate to the OpenShift Web Console

-> **NOTE:** As mentioned, since we require the kubeadmin user for these labs all steps need to be completed in the web console outside the lab environment.
+Select the blue "**Console**" button at the top of the window to follow the steps below in the OpenShift web console as part of this lab guide.

### 4. Search for the Application Template

@@ -51,7 +50,7 @@ If you are in the in the Administrator perspective, switch to Developer perspect

![parksmap-developer-persepctive](img/parksmap-developer-persepctive.png)

-From the menu, select the `+Add` panel. Find the parksmap project and select it:
+From the menu, select the `+Add` panel. Find the **parksmap-demo** project and select it (if you're not asked to choose a project, it's probably because you've already selected one; simply go to the "**Project**" drop down at the top and select "**All Projects**" to continue):

![parksmap-choose-project](img/parksmap-choose-project.png)

@@ -63,13 +62,9 @@ You will see a screen where you have multiple options to deploy applications to
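+Before deploying anything, you can optionally double-check from the terminal that the *view* role we granted earlier took effect; this is just a sketch of a sanity check against the project we created a moment ago:
+
+```copy
+# Optional: list the role bindings in the project and look for the "view" role
+oc get rolebindings -n %parksmap-project-namespace% | grep view
+```
+
+You should see a binding that ties the *view* cluster role to the project's `default` service account.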
-We will be using `Templates` to deploy the application components. A template describes a set of objects that can be parameterized and processed to produce a list of objects for creation by OpenShift Container Platform. - -A template can be processed to create anything you have permission to create within a project, for example services, build configurations, and deployment configurations. A template can also define a set of labels to apply to every object defined in the template. - -You can create a list of objects from a template using the CLI or, if a template has been uploaded to your project or the global template library, using the web console. +We will be using `Templates` to deploy the application components. A template describes a set of objects that can be parameterised and processed to produce a list of objects for creation by OpenShift Container Platform. A template can be processed to create anything you have permission to create within a project, for example services, build configurations, and deployment configurations. A template can also define a set of labels to apply to every object defined in the template. -In the `Search` text box, enter *parksmap* to find the application template. +You can create a list of objects from a template using the CLI or, if a template has been uploaded to your project or the global template library, using the web console. In the `Search` text box, enter *parksmap* to find the application template that we've already pre-loaded for you:
@@ -87,7 +82,7 @@ Then click on the `Parksmap` template to open the popup menu and then click on t
-This will open a dialog that will allow you to configure the template. This template allows you to configure the following parameters: +This will open a dialog that will *allow* you to configure the template. This template allows you to configure the following parameters: - Parksmap Web Application Name - Mlbparks Application Name @@ -100,17 +95,12 @@ This will open a dialog that will allow you to configure the template. This temp

-Next click the blue *Create* button without changing default parameters. You will be directed to the *Topology* page, where you should see the visualization for the `parksmap` deployment config in the `workshop` application.
-OpenShift now creates all the Kubernetes resources to deploy the application, including *Deployment*, *Service*, and *Route*.
+Next click the blue *Create* button **without changing default parameters**. You will be directed to the *Topology* page, where you should see the visualization for the `parksmap` deployment config in the `workshop` application. OpenShift now creates all the Kubernetes resources to deploy the application, including *Deployment*, *Service*, and *Route*.

### 6. Check the Application

-These few steps are the only ones you need to run to all 3 application components of `parksmap` on OpenShift.
-
-It will take the `parksmap` application a little while to complete.
-
-Each OpenShift node that is asked to run the images of applications has to pull (download) it, if the node does not already have it cached locally. You can check on the status of the image download and deployment in the *Pod* details page, or from the command line with the `oc get pods` command to check the readiness of pods or you can monitor it from the Developer Console.
+These few steps are the only ones you need to run to deploy all 3 application components of `parksmap` on OpenShift. It will take a little while for the `parksmap` application deployment to complete. Each OpenShift node that is asked to run the images of applications has to pull (download) it, if the node does not already have it cached locally. You can check on the status of the image download and deployment in the *Pod* details page, from the command line with the `oc get pods` command to check the readiness of pods, or you can monitor it from the Developer Console.

Your screen will end up looking something like this:
@@ -119,7 +109,7 @@ Your screen will end up looking something like this:
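+If you'd rather keep an eye on the rollout from the terminal, the equivalent commands are below; the exact pod names and counts will differ in your environment, and it's fine if some pods show as still starting for a short while:
+
+```copy
+# Check the readiness of the parksmap, nationalparks and mlbparks pods
+oc get pods -n %parksmap-project-namespace%
+
+# And the Deployments that manage them
+oc get deployments -n %parksmap-project-namespace%
+```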
-This is the *Topology* page, where you should see the visualization for the `parksmap` ,`nationalparks` and `mlbparks` deployments in the `workshop` application. +This is the *Topology* page, where you should see the visualisation for the `parksmap` ,`nationalparks` and `mlbparks` deployments in the `workshop` application. ### 7. Access the Application @@ -132,11 +122,7 @@ If you click on the `parksmap` entry in the Topology view, you will see some inf

-On the "Resources" tab, you will see that there is a single *Route* which allows external access to the `parksmap` application. While the *Services* panel provide internal abstraction and load balancing information within the OpenShift environment.
-
-The way that external clients are able to access applications running in OpenShift is through the OpenShift routing layer. And the data object behind that is a *Route*.
-
-Also note that there is a decorator icon on the `parksmap` visualization now. If you click that, it will open the URL for your *Route* in a browser:
+On the "**Resources**" tab, you will see that there is a single *Route* which allows external access to the `parksmap` application, while the *Services* panel provides internal abstraction and load balancing information within the OpenShift environment. The way that external clients are able to access applications running in OpenShift is through the OpenShift routing layer, and the data object behind that is a *Route*. Also note that there is a decorator icon on the `parksmap` visualisation now; if you click that, it will open the URL for your *Route* in a browser:

![parksmap-decorator](img/parksmap-decorator.png)

@@ -148,7 +134,7 @@ This application is now available at the URL shown in the Developer Perspective.
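+The same *Route* and *Service* objects can also be inspected from the terminal; the object name below assumes you kept the template's default application name of `parksmap`:
+
+```copy
+# List the Services and Routes in the project
+oc get svc,route -n %parksmap-project-namespace%
+
+# Print just the externally routable hostname of the parksmap route
+oc get route parksmap -n %parksmap-project-namespace% -o jsonpath='{.spec.host}{"\n"}'
+```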
-You can notice that `parksmap` application does not show any parks as we haven't deployed database servers for the backends yet. +You can notice that `parksmap` application does not show any parks as we haven't deployed database servers for the backends yet. We'll do that in the next step, select "**Deploy first DB**" below to continue. diff --git a/files/lab/workshop/content/deploy-database-web.md b/files/lab/workshop/content/deploy-database-web.md index 08c5e2f..f353710 100644 --- a/files/lab/workshop/content/deploy-database-web.md +++ b/files/lab/workshop/content/deploy-database-web.md @@ -1,22 +1,16 @@ -In this section we will deploy and connect a MongoDB database where the -`nationalparks` application will store the location information. - -This time we are going to deploy the MongoDB application in a Virtual Machine -by leveraging OpenShift Virtualization. +In this section we will deploy and connect a MongoDB database where the`nationalparks` application will store the location information. This time we are going to deploy the MongoDB application in a Virtual Machine by leveraging OpenShift Virtualization; that way we're demonstrating the capability for OpenShift to connect multiple types of workloads, regardless of whether they're containerised, or virtualised. ### 1. Search for the Virtual Machine Template -In this module we will create MongoDB from a *Template*, which contains all the necessary Kubernetes resources and configuration to deploy and run MongoDB in a VM which is based on Centos. - -Please go back to the [Web Console](http://console-openshift-console.%cluster_subdomain%/k8s/cluster/projects) +In this module we will create a MongoDB instance from a *Template*, again based on a template that we've already pre-loaded, which contains all the necessary Kubernetes resources and configuration to deploy and run MongoDB in a VM, based on CentOS 8 - as part of a VM template that we created in an earlier step. -If you are in the in the Administrator perspective, switch to Developer perspective and go to the *%parksmap-project-namespace%* project. +For this, make sure that you're still in the web console view, and if you are in the in the *Administrator* perspective, switch to Developer perspective and go to the *%parksmap-project-namespace%* project. - From the left menu, click *+Add*. You will see a screen where you have multiple options to deploy application. -- Then Click *All Services* and in the *Search* text box and enter *mongo* to find the MongoDB VM template. +- Then Click *All Services* and in the *Search* text box and enter *mongo* to find the MongoDB VM template.
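+If the template doesn't show up in your search, you can also confirm from the terminal view that it was pre-loaded into the `openshift` namespace (we'll run a very similar command again later in this section):
+
+```copy
+# Confirm the pre-loaded MongoDB VM template exists
+oc get templates -n openshift | grep -i mongo
+```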
@@ -26,7 +20,7 @@ If you are in the in the Administrator perspective, switch to Developer perspect ### 2. Instantiate the Virtual Machine Template -In order to instantiate the temaplate, first click on the `MongoDB Virtual Machine` template to open the popup menu +In order to instantiate the template, first click on the `MongoDB Virtual Machine` template to open the popup menu and then click on the *Instantiate Template* button as you did when you deployed the parksmap application components. This will open a dialog that will allow you to configure the template. This template allows you to configure the following parameters: @@ -38,7 +32,7 @@ This will open a dialog that will allow you to configure the template. This temp - *Database Admin Password* -Enter *mongodb-nationalparks* in **MongoDB Application Name** field and leave other parameter values as-is. +Enter ***mongodb-nationalparks*** in **MongoDB Application Name** field and leave other parameter values default.
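+For reference, the web console form above is simply processing the same pre-loaded template with one parameter overridden; the command below is the CLI equivalent, and it's the approach we'll use for the second database later in this lab, so there's no need to run it now:
+
+```copy
+# CLI equivalent of the form above - for reference only, we're using the web console here
+oc process mongodb-vm-template \
+  -p MONGODB_APPLICATION_NAME=mongodb-nationalparks \
+  -n openshift | oc create -f -
+```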
@@ -46,41 +40,32 @@ Enter *mongodb-nationalparks* in **MongoDB Application Name** field and leave o

-Next click the blue *Create* button.
+> **NOTE**: Make sure that you update the MongoDB Application Name before clicking "Create".
+
+Next click the blue ***Create*** button.

-You will be directed to the *Topology* page, where you should see the visualization for the `mongodb-nationalparks` virtual machine in the `workshop` application.
-OpenShift creates both *VirtualMachine* and *Service* objects. `nationalparks` backend application will use this *mongodb-nationalparks service* to communicate with MongoDB.
+You will be directed to the *Topology* page, where you should see the visualisation for the `mongodb-nationalparks` virtual machine in the `workshop` application, i.e. we're extending the existing application that we have. OpenShift creates both *VirtualMachine* and *Service* objects. The `nationalparks` backend application will use this *mongodb-nationalparks service* to communicate with MongoDB.

### 3. Verify the Database Service in Virtual Machine

-It will take some time MongoDB VM to start and initialize. You can check the status of VM in the Web Console by clicking on the VM details in the Topology View or execute following command in the terminal
+It will take some time for the MongoDB VM to start and initialise. You can check the status of the VM in the Web Console by clicking on the VM details in the Topology View, or execute the following command in the terminal page:

```execute-1
oc get vm
```

+Which, after a few minutes, should show:
+
~~~bash
NAME                    AGE   STATUS    READY
mongodb-nationalparks   45s   Running   True
~~~

-Once the MongoDB Virtual Machine is in a Running state, open the *Virtual Machine Console* by selecting the mongodb-nationalparks VM icon and choosing "Open Console" from the "Actions" menu:
-
-
- -![parksmap-opensonconsole](img/parksmap-opensonconsole.png) - -
- -Switch to *Serial Console* and wait for the login prompt: - -![parksmap-serialconsole](img/parksmap-serialconsole.png) - -On the login screen, enter the following credentials: +Once the MongoDB Virtual Machine is in a Running state, open the *Virtual Machine Console* by switching to the "**Administrator**" perspective in the top left hand corner, and then navigating to "**Workloads**" --> "**Virtualization**" and selecting "**mongodb-nationalparks**". Once there, select the "**Console**" tab and you should be able to see the virtual machine console. On the login screen, enter the following credentials: ~~~bash - Login: %mongodb-vm-username% - Password: %mongodb-vm-password% +Login: %mongodb-vm-username% +Password: %mongodb-vm-password% ~~~ Check whether *mongod* service is running by executing following: @@ -89,7 +74,7 @@ Check whether *mongod* service is running by executing following: systemctl status mongod ``` -Please verify whether *mongod* service is up and running as shown in the figure below +Please verify whether *mongod* service is up and running as shown in the figure below. If you try and do this too quickly, MongoDB might not have started yet, and you may have to wait a few minutes.
@@ -99,22 +84,17 @@ Please verify whether *mongod* service is up and running as shown in the figure ### 4. Verify Nationalparks Application -Now that we have a database deployed, we can again visit the `nationalparks` web -service to query for data: +Now that we have a database deployed, we can again visit the `nationalparks` web service to query for data: [http://nationalparks-%parksmap-project-namespace%.%cluster_subdomain%/ws/data/all](http://nationalparks-%parksmap-project-namespace%.%cluster_subdomain%/ws/data/all) -And the result? +And the result? It should just show this, an empty dataset: ~~~bash [] ~~~ -Where's the data? Think about the process you went through. You deployed the -application and then deployed the database. Nothing actually loaded anything -*INTO* the database, though. - -The application provides an endpoint to do just that: +Where's the data? Think about the process you went through. You deployed the application and then deployed the database. Nothing actually loaded anything *INTO* the database, though. The application provides an endpoint to do just that, load in some data: [http://nationalparks-%parksmap-project-namespace%.%cluster_subdomain%/ws/data/load](http://nationalparks-%parksmap-project-namespace%.%cluster_subdomain%/ws/data/load) @@ -124,24 +104,16 @@ And the result? Items inserted in database: 2893 ~~~ -If you then go back to `/ws/data/all` you will see tons of JSON data now. - -This data will be visualised on the map if you check your browser now: +If you then go back to `/ws/data/all` you will see tons of JSON data now. This data will be visualised on the map if you check your browser now: [http://parksmap-%parksmap-project-namespace%.%cluster_subdomain%](http://parksmap-%parksmap-project-namespace%.%cluster_subdomain%) -[http://parksmap-%parksmap-project-namespace%.%cluster_subdomain%](http://parksmap-%parksmap-project-namespace%.%cluster_subdomain%) - - You'll notice that the parks suddenly are showing up as below. + You'll notice that the national parks suddenly are showing up as below.
![Parksmap](img/parksmap-nationalparks-ui.png) ### 5. Understand the MongoDB Virtual Machine Template -As you've seen so far, the web console and the templates makes it very easy to deploy things onto -OpenShift. When we deploy the database virtual machine, we pass in some values for configuration. -These values are used to set the username, password, name of the database, etc... - -Let's have a look at the template definition. Execute the following command to find the template: +As you've seen so far, the web console and the templates makes it very easy to deploy things onto OpenShift, regardless of the type of workload. When we deploy the database virtual machine, we pass in some values for configuration. These values are used to set the username, password, name of the database, etc. Let's have a look at the template definition. Switch over to the terminal view and execute the following command to find the template: ```execute-1 oc get templates -n openshift| grep mongodb-vm-template @@ -150,7 +122,7 @@ oc get templates -n openshift| grep mongodb-vm-template This should list the MongoDB Virtual Machine Template we are looking for: ~~~bash -mongodb-vm-template 8 (all set) 2 +mongodb-vm-template 8 (all set) 2 ~~~ Now let's check the template definition: @@ -159,9 +131,9 @@ Now let's check the template definition: oc get template mongodb-vm-template -n openshift -o yaml ``` -There are many details, but let's focus on the `cloudInitNoCloud` section. This is the part where put the instructions to initialize the Virtual Machine. +There are many details, but let's focus on the `cloudInitNoCloud` section. This is the part where put the instructions to initialise the Virtual Machine, because remember, this VM would have started out as a vanilla template, with zero configuration. The cloud-init tool provides us with a way of injecting deploy-time instructions to our VM: ~~~yaml -... +(...) - cloudInitNoCloud: userData: |- @@ -199,13 +171,13 @@ name: cloudinitdisk ... ~~~ -Let's check the VirtualMachine object now +In the above output, you'll notice that there are placeholders for variables, if we check the VirtualMachine object now... ```execute-1  oc get vm mongodb-nationalparks -n %parksmap-project-namespace% -o yaml ``` -When we instantiate the template, OpenShift replaces the parameters with the values provided : +... we will see that when we instantiate the template, OpenShift replaces the parameters with the values provided : ~~~yaml ... @@ -244,14 +216,9 @@ When we instantiate the template, OpenShift replaces the parameters with the val ... ~~~ -OpenShift utilizes `cloud-init` which is a widely adopted project used for early initialization of a VM. Used by cloud providers such as AWS and GCP, `cloud-init` has established itself as the defacto method of providing startup scripts to VMs. - -Cloud-init documentation can be found here: -[https://cloudinit.readthedocs.io/en/latest/](https://cloudinit.readthedocs.io/en/latest/) - -OpenShift Virtualization supports cloud-init's "NoCloud" and "ConfigDrive" datasources which involve injecting startup scripts into a VM instance through the use of an ephemeral disk. VMs with the cloud-init package installed will detect the ephemeral disk and execute custom userdata scripts at boot. +OpenShift utilises `cloud-init` which is a widely adopted project used for early initialisation of a VM. Used by cloud providers such as AWS and GCP, `cloud-init` has established itself as the defacto method of providing startup scripts to VMs. 
Cloud-init documentation can be found here: [https://cloudinit.readthedocs.io/en/latest/](https://cloudinit.readthedocs.io/en/latest/), if you'd like to better understand its capabilities.

-Other than cloud-init, OpenShift Virtualization also supports `SysPrep` which is an automation tool for Windows that automates Windows installation, setup, and custom software provisioning.
+OpenShift Virtualization supports cloud-init's "NoCloud" and "ConfigDrive" datasources which involve injecting startup scripts into a VM instance through the use of an ephemeral disk. VMs with the cloud-init package installed will detect the ephemeral disk and execute custom userdata scripts at boot. Other than cloud-init, OpenShift Virtualization also supports `SysPrep` which is an automation tool for Windows that automates Windows installation, setup, and custom software provisioning.

-You can automate Windows virtual machine setup by uploading answer files in XML format in the Advanced → SysPrep section of the Create virtual machine from template wizard.
+You can automate Windows virtual machine setup by uploading answer files in XML format in the Advanced → SysPrep section of the "Create Virtual Machine" from template wizard, but we won't explore that in this lab. Please select "**Deploy second DB**" to continue.

diff --git a/files/lab/workshop/content/deploy-database-yaml.md b/files/lab/workshop/content/deploy-database-yaml.md
index acec935..4da3b06 100644
--- a/files/lab/workshop/content/deploy-database-yaml.md
+++ b/files/lab/workshop/content/deploy-database-yaml.md
@@ -1,95 +1,70 @@
-In this section we will deploy and connect the MongoDB database where the
-`mlbparks` application which will store the location of Major League Baseball stadiums.
-
-This time we are going to deploy the MongoDB Virtual Machine with using command line.
+In this section we will deploy an additional MongoDB database in a VM, called `mlbparks`, which will store the location of Major League Baseball stadiums; this will provide a secondary data source for our visualisation application (parksmap). This time we are going to deploy the MongoDB Virtual Machine using the command line.

### 1. Creating MongoDB Virtual Machine

-If you are in the in the Administrator perspective
-
-Switch to *%parksmap-project-namespace%* project first by executing following command:
+Make sure that you're in the "Terminal" view on the lab guide and switch to the *%parksmap-project-namespace%* project by executing the following command, ignoring any errors that tell you that you're already in that project:

```execute
oc project %parksmap-project-namespace%
```

-And then run the following command to instantiate the template:
+And then run the following command to instantiate the template, overriding the MongoDB Application name:

```execute
-oc process mongodb-vm-template -p MONGODB_APPLICATION_NAME=mongodb-mlbparks -n openshift|oc create -f -
+oc process mongodb-vm-template \
+  -p MONGODB_APPLICATION_NAME=mongodb-mlbparks \
+  -n openshift | oc create -f -
```

### 2. Verify the Database Service in Virtual Machine

-It will take some time MongoDB VM to start and initialize. You can check the status of VM in the Web Console by clicking on the VM details in the Topology View or execute following command in the terminal
+It will take some time for the MongoDB VM to start and initialise, just like the first time we did it. 
We can watch for the status by asking OpenShift for a list of VM's: ```execute oc get vm ``` -~~~bash -NAME AGE STATUS READY -mongodb-nationalparks 45s Running True -~~~ - -After MongoDB Virtual Machine started, - -Open *Virtual Machine Console* as shown in the figure below - -Switch to *Serial Console* and wait for the login prompt. - -On the login screen, enter the following credentials: +We should now see two VM's running: ~~~bash - Login: %mongodb-vm-username% - Password: %mongodb-vm-password% +NAME AGE STATUS READY +mongodb-mlbparks 45s Running True +mongodb-nationalparks 22m Running True ~~~ -Check whether *mongod* service is running by executing following: - -```execute -systemctl status mongod -``` - -Please verify whether *mongod* service is up and running as shown in the figure below -
- -![MongoDB Service Status](img/parksmap-mlbparks-mongodb-check.png) - -
+Like before, this template is setup to utilise cloud-init to automatically bootstrap the VM with MongoDB and ensure that the service has started, so after a few minutes, the VM should be ready. ### 3. Verify Mlbparks Application -If you go back to Developer Console now, you should able to see all `parksmap application` components including the MongoDB Virtual Machines. - +If you go back to the OpenShift web-console by selecting the "**Console**" button at the top of your screen, and switch back to the *Developer* perspective, you should be able to see all `parksmap application` components including the two MongoDB Virtual Machines: +
![Parksmap Topology View](img/parksmap-topology-full.png) -Now that we have the database deployed for `mlbparks` , we can again visit the mlbparks web -service to query for data: +Now that we have the database deployed for `mlbparks` , we can again visit the mlbparks web service to query for existing data: [http://mlbparks-%parksmap-project-namespace%.%cluster_subdomain%/ws/data/all](http://mlbparks-%parksmap-project-namespace%.%cluster_subdomain%/ws/data/all) -And the result is empty as expected. +And the result is empty as expected, as we've not yet uploaded the data for the MLB Park locations: ~~~bash [] ~~~ -So to load the data go to following end point: +So to load the data, navigate to the following endpoint, which will automatically load in the data for us: [http://mlbparks-%parksmap-project-namespace%.%cluster_subdomain%/ws/data/load](http://mlbparks-%parksmap-project-namespace%.%cluster_subdomain%/ws/data/load) -Now you should see the +Now you should see the following: ~~~bash Items inserted in database: 30 ~~~ -If you check parksmap application in your browser you should be able to see the stadium locations in United States as well: +If you return to your parksmap visualisation application in your browser you should be able to see the stadium locations in United States as well, and be able to switch between MLB Parks, and National Parks: [http://parksmap-%parksmap-project-namespace%.%cluster_subdomain%](http://parksmap-%parksmap-project-namespace%.%cluster_subdomain%) @@ -97,4 +72,4 @@ If you check parksmap application in your browser you should be able to see the ![Parksmap](img/parksmap-full-view.png) - +When you're ready to proceed, select "**Backup and Restore**" below to continue with the next lab section. diff --git a/files/lab/workshop/content/hot-plug.md b/files/lab/workshop/content/hot-plug.md index e19ef56..68bcba0 100644 --- a/files/lab/workshop/content/hot-plug.md +++ b/files/lab/workshop/content/hot-plug.md @@ -1,13 +1,12 @@ ### Background: Hot-plugging virtual disks -It is expected to have **Dynamic Reconfiguration** capabilities for VMs today, such as CPU/Memory/Storage/Network hot-plug/hot-unplug. -Although these capabilities have been around for the traditional virtualization platforms, it is a particularly challenging feature to implement in a **Kubernetes** platform because of the kubernetes principle of **immutable Pods**, where once deployed they are never modified. If something needs to be changed, you never do so directly on the Pod. Instead, you’ll build and deploy a new one that has all your needed changes baked in. +It is expected to have **dynamic reconfiguration** capabilities for VMs today, such as CPU/Memory/Storage/Network hot-plug/hot-unplug. Although these capabilities have been around for the traditional virtualisation platforms, it is a particularly challenging feature to implement in a **Kubernetes** platform because of the Kubernetes principle of **immutable pods**, where once deployed they are never modified. If something needs to be changed, you never do so directly on the Pod. Instead, you’ll build and deploy a new one that has all your needed changes baked in. -OpenShift Virtualization strives to have these dynamic reconfiguration capabilities for VMs although it's a kubernetes-based platform. In the 4.9 release, Hot-plugging virtual disks to a running virtual machine is supported as a Technology Preview feature, so as a VM owner, you are able to attach and detach storage on demand. 
+OpenShift Virtualization strives to have these dynamic reconfiguration capabilities for VMs although it's a Kubernetes-based platform. In the 4.9 release, hot-plugging virtual disks to a running virtual machine is supported as a [Technology Preview](https://access.redhat.com/support/offerings/techpreview) feature, so as a VM owner, you are able to attach and detach storage on demand. ### Exercise: Hot-plugging a virtual disk using the web console -Hot-plug and hot-unplug virtual disks when you want to add or remove them without stopping your virtual machine or virtual machine instance. This capability is helpful when you need to add storage to a running virtual machine without incurring down-time. When you hot-plug a virtual disk, you attach a virtual disk to a virtual machine instance while the virtual machine is running. When you hot-unplug a virtual disk, you detach a virtual disk from a virtual machine instance while the virtual machine is running. Only data volumes and persistent volume claims (PVCs) can be hot-plugged and hot-unplugged. You cannot hot-plug or hot-unplug container disks. +In OpenShift Virtualization it's possible to hot-plug and hot-unplug virtual disks without stopping your virtual machine. This capability is helpful when you need to add storage to a running virtual machine without incurring down-time. When you hot-plug a virtual disk, you attach a virtual disk to a virtual machine instance while the virtual machine is running. When you hot-unplug a virtual disk, you detach a virtual disk from a virtual machine instance while the virtual machine is running. Only data volumes and persistent volume claims (PVCs) can be hot-plugged and hot-unplugged. You cannot hot-plug or hot-unplug *container* disks. -In this exercise, let's attach a new 5G disk to mongodb database vm by using the web console. +In this exercise, let's attach a new 5GB disk to one of our MongoDB database VM's by using the web console: -To verify if the new 5G disk is recognized and ready to use by the guest operating system, let's connect the console of our virtual machine and list block devices. +To verify if the new 5GB disk is recognised and ready to use by the guest operating system, let's connect the console of our virtual machine and list block devices: 1. Click **Workloads** → **Virtualization** from the side menu. @@ -40,18 +39,20 @@ To verify if the new 5G disk is recognized and ready to use by the guest operati 3. Select `mongodb-nationalparks` virtual machine to open its **Overview** screen. -4. Navigate to the "**Console**" tab. You'll be able to login with "**centos/redhat**", noting that you may have to click on the console window for it to capture your input. +4. Navigate to the "**Console**" tab. You'll be able to login with "**redhat/openshift**", noting that you may have to click on the console window for it to capture your input. -5. Once you're in the virtual machine, run the lsblk command to list block devices recognized by the operating system. +5. Once you're in the virtual machine, run the *lsblk* command to list block devices recognised by the operating system. ```copy sudo lsblk ``` +> **NOTE**: This showed up as "**sda**", as the default interface is "**scsi**" - if we'd have chosen "virtio" this would have been a "**vd***" device. + ### Exercise: Expand the VM's disk -OpenShift allows users to easily resize an existing PersistentVolumeClaim (PVC) objects. 
You no longer have to manually interact with the storage backend or delete and recreate PV and PVC objects to increase the size of a volume. Shrinking persistent volumes is not supported. 
-In this exercise, let's resize our hot-plugged 5G disk to 7G by using the web console.
+OpenShift allows users to easily resize existing PersistentVolumeClaim (PVC) objects. You no longer have to manually interact with the storage backend or delete and recreate PV and PVC objects to increase the size of a volume. Shrinking persistent volumes is not supported. In this exercise, let's resize our hot-plugged 5GB disk to 7GB by using the web console.
+
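+As an aside, the same expansion can be sketched out from the terminal with a simple patch to the PVC's requested storage, assuming your storage class supports volume expansion; the PVC name below is just a placeholder for whatever you named the hot-plugged disk:
+
+```copy
+# Find the PVC backing the hot-plugged disk, then request a larger size (placeholder name)
+oc get pvc
+oc patch pvc <your-hotplugged-disk-pvc> --type=merge \
+  -p '{"spec":{"resources":{"requests":{"storage":"7Gi"}}}}'
+```
+
+We'll stick with the web console for this exercise, though.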
@@ -32,7 +31,7 @@ Once you click Add to attach new disk, a new vm disk is automatically provisione
@@ -75,10 +76,9 @@ In this exercise, let's resize our hot-plugged 5G disk to 7G by using the web co

-Currently, an expanded disk's new size isn't automatically recognised by the OS. Instead this must be done at a lower level, which we will explain below. There is an [upstream feature](https://github.com/kubevirt/kubevirt/pull/5981) for this functionality in Kubernetes which is expected to be added ina. future version of OpenShift.
+Currently, an expanded disk's **new** size isn't automatically recognised by the OS. Instead this must be done at a lower level, which we will explain below. There is an [upstream feature](https://github.com/kubevirt/kubevirt/pull/5981) for this functionality in Kubernetes which is expected to be added in a future version of OpenShift. To verify if the guest operating system has recognised the disk expansion, let's connect the console of our virtual machine and list block devices again. 

-To verify if the guest operating system has recognized the disk expansion, let's connect the console of our virtual machine and list block devices again. 
-Size of the hot-plugged disk should still listed as 5G instead of 7G.
+The size of the hot-plugged disk should still be listed as 5GB instead of 7GB.

1. Click **Workloads** → **Virtualization** from the side menu.

@@ -86,7 +86,7 @@ Size of the hot-plugged disk should still listed as 5G instead of 7G.

3. Select `mongodb-nationalparks` virtual machine to open its **Overview** screen.

-4. Navigate to the "**Console**" tab. You'll be able to login with "**centos/redhat**", noting that you may have to click on the console window for it to capture your input.
+4. Navigate to the "**Console**" tab. You'll be able to login with "**redhat/openshift**" (if you're not already logged in), noting that you may have to click on the console window for it to capture your input.

5. Once you're in the virtual machine, run the lsblk command to list block devices recognized by the operating system.
```execute
sudo lsblk
```

As we mentioned, the OS still sees this disk as 5GB. Let's fix this, but first a quick refresher from previous labs. As you'll recall OpenShift Virtualization creates one pod for each running virtual machine. This pod's primary container runs the virt-launcher. The main purpose of the virt-launcher Pod is to provide the cgroups and namespaces which will be used to host the VM process. An instance of `libvirtd` is present in every VM pod. virt-launcher uses libvirtd to manage the life-cycle of the VM process.

-libvirt is an open-source API, daemon and management tool for managing platform virtualization including KVM, `virsh` is the most popular command line interface to interact with libvirt daemon `libvirtd`.
-
-In other words, you can manage KVM VM’s using `virsh` command line interface.
+Libvirt is an open-source API, daemon and management tool for managing platform virtualization including KVM; `virsh` is the most popular command line interface for interacting with the libvirt daemon, `libvirtd`.

-To send the disk size change event we need to a guest operating system, we can execute the `virsh blockresize` command inside the virt-launcher pod of the virtual machine.
+In other words, you can manage KVM VMs using the `virsh` command line interface. To send the disk size change event we need to the guest operating system, we can execute the `virsh blockresize` command inside the virt-launcher pod of the virtual machine.

Let's do it!

@@ -118,15 +116,19 @@ First list the running virtual machine and note it's Id. 
```copy virsh list ``` +Which should show the following: + ~~~bash Id Name State --------------------------------------------------- 1 backup-test_mongodb-nationalparks running ~~~ -Now list the block devices attached to the running virtual machine with `virsh domblklist` command. +Now list the block devices attached to the running virtual machine with `virsh domblklist` command: ```copy virsh domblklist 1 ``` +This should list three volumes, the original root disk, the cloud-init disk, and our recently added hot-plug device: + ~~~bash Target Source ----------------------------------------------------------------------------------------------------------- @@ -134,25 +136,25 @@ virsh domblklist 1 vdb /var/run/kubevirt-ephemeral-disks/cloud-init-data/backup-test/mongodb-nationalparks/noCloud.iso sda /var/run/kubevirt/hotplug-disks/disk-0 ~~~ -The name of the disk we have expanded should be disk-0. You can check the name of the disk on the **Disks** tab of the virtual machine if you are not sure. -Once you identify the disk which is `sda` in our example, then run the `virsh blockresize` command to notify the guest operating system that the disk is expanded to 7 GB. +The name of the disk we have expanded should be disk-0. You can check the name of the disk on the **Disks** tab of the virtual machine if you are not sure. Once you identify the disk (which is `sda` in our example), then run the `virsh blockresize` command to notify the guest operating system that the disk is expanded to 7 GB. ```copy virsh blockresize 1 sda 7g ``` +This should return the following if successful: + ~~~bash Block device 'sda' is resized ~~~ -After executing the `virsh blockresize` command, verify by listing block devices recognized by the operating system again in the virtual machine console. +After executing the `virsh blockresize` command, verify by listing block devices recognised by the operating system again in the virtual machine console (return to the VM list, select the national parks VM, and then the "Console" tab; you should still be logged in): ```execute sudo lsblk ``` ### Exercise: Hot-unplugging a virtual disk using the web console -Hot-unplug virtual disks when you want to remove them without stopping your virtual machine or virtual machine instance. This capability is helpful when you need to remove storage from a running virtual machine without incurring down-time. When you hot-unplug a virtual disk, you detach a virtual disk from a virtual machine instance while the virtual machine is running. Only data volumes and persistent volume claims (PVCs) can be hot-unplugged. +It's possible to hot-**un**plug virtual disks when you want to remove them without stopping your virtual machine or virtual machine instance. This capability is helpful when you need to remove storage from a running virtual machine without incurring down-time. When you hot-unplug a virtual disk, you detach a virtual disk from a virtual machine instance while the virtual machine is running. Only data volumes and persistent volume claims (PVCs) can be hot-unplugged. In this exercise, let's detach the disk that we have hot-plugged in the previous exercise from our MongoDB database VM by using the web console: -In this exercise, let's detach the disk that we have hot-plugged in the previous exercise from our mongodb database vm by using the web console.
@@ -175,8 +177,7 @@ In this exercise, let's detach the disk that we have hot-plugged in the previous
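+As an aside, newer versions of the `virtctl` client can drive the same hot-plug and hot-unplug operations from the command line; treat the commands below as a sketch, since the volume name is a placeholder and the availability of these subcommands depends on the OpenShift Virtualization version in use:
+
+```copy
+# Hot-plug an existing DataVolume/PVC into the running VM (placeholder volume name)
+virtctl addvolume mongodb-nationalparks --volume-name=<your-disk>
+
+# ...and hot-unplug it again
+virtctl removevolume mongodb-nationalparks --volume-name=<your-disk>
+```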
-Once you click **Detach** to hot-unplug the disk it's detached from the running virtual machine and guest operating system automatically recognizes the event. -To verify if the 7G disk removal is recognized by the guest operating system, let's connect the console of our virtual machine and list block devices once again. The disk should no longer be listed by the guest operating system. +Once you click **Detach** to hot-unplug the disk, it's detached from the running virtual machine and guest operating system automatically recognises the event. To verify if the 7G disk removal is recognised by the guest operating system, let's connect the console of our virtual machine and list block devices once again. The disk should no longer be listed by the guest operating system. 1. Click **Workloads** → **Virtualization** from the side menu. @@ -184,12 +185,12 @@ To verify if the 7G disk removal is recognized by the guest operating system, le 3. Select `mongodb-nationalparks` virtual machine to open its **Overview** screen. -4. Navigate to the "**Console**" tab. You'll be able to login with "**centos/redhat**", noting that you may have to click on the console window for it to capture your input. +4. Navigate to the "**Console**" tab. You'll be able to login with "**redhat/openshift**", noting that you may have to click on the console window for it to capture your input. -5. Once you're in the virtual machine, run the lsblk command to list block devices recognized by the operating system. +5. Once you're in the virtual machine, run the lsblk command to list block devices recognised by the operating system. ```execute sudo lsblk ``` -That's it for hot-plugging and expanding virtual disks - we've hot-plugged a new 5GB disk to our mongodb database virtual machine using OpenShift web console, expanded its size to 7GB and finally hot-unplugged it from our virtual machine. +That's it for hot-plugging and expanding virtual disks - we've hot-plugged a new 5GB disk to our MongoDB database virtual machine using OpenShift web console, expanded its size to 7GB and finally hot-unplugged it from our virtual machine. To move onto the next section of the lab, click "**Network Isolation**" below. diff --git a/files/lab/workshop/content/live-migration.md b/files/lab/workshop/content/live-migration.md index fc49e39..14e84d9 100644 --- a/files/lab/workshop/content/live-migration.md +++ b/files/lab/workshop/content/live-migration.md @@ -1,4 +1,4 @@ -Live Migration is the process of moving an instance from one node in a cluster to another without interruption. This process can be manual or automatic. In OpenShift this is controlled by an `evictionStrategy` strategy. If this is set to `LiveMigrate` and the underlying node are placed into maintenance mode, VMs can be moved between them with minimal interruption. +Live Migration is the process of moving an instance from one node in a cluster to another without interruption. This process can be manual or automatic. In OpenShift this is controlled by an `evictionStrategy` strategy. If this is set to `LiveMigrate` and the underlying node are placed into maintenance mode, VMs can be automatically moved between nodes with minimal interruption. Live migration is an administrative function in OpenShift Virtualization. While the action is visible to all users, only admins can initiate a migration. Migration limits and timeouts are managed via the `kubevirt-config` `configmap`. 
For more details about limits see the [documentation](https://docs.openshift.com/container-platform/4.9/virt/live_migration/virt-live-migration-limits.html). @@ -8,6 +8,8 @@ In our lab we currently only have one VM running, use the `vmi` utility to view oc get vmi ``` +You should see something similar to the output below, although you may have a different IP address, and on a different node: + ~~~bash NAME AGE PHASE IP NODENAME READY rhel8-server-ocs 45h Running 192.168.123.64 ocp4-worker3.aio.example.com True @@ -19,10 +21,10 @@ In this example we can see the `rhel8-server-ocs` instance is on `ocp4-worker3.a ```execute-1 -oc describe vmi rhel8-server-ocs | egrep '(eviction|migration)' +oc describe vmi rhel8-server-ocs | egrep '(Eviction|Migration)' ``` -This command should have a similar output as below +This command should have a similar output as below, although trimmed: ~~~yaml Eviction Strategy: LiveMigrate @@ -53,7 +55,7 @@ spec: EOF ``` -Check `VirtualMachineInstanceMigration` object is created: +Check that the `VirtualMachineInstanceMigration` object is created: ~~~bash virtualmachineinstancemigration.kubevirt.io/migration-job created @@ -76,7 +78,7 @@ kind: VirtualMachineInstanceMigration spec: vmiName: rhel8-server-ocs status: - phase: Scheduling <----------- Here you can see it's scheduling + phase: Scheduling <----------- Here you can see it's scheduling ~~~ And then move to `phase: TargetReady` and onto `phase: Succeeded`: @@ -90,10 +92,10 @@ kind: VirtualMachineInstanceMigration spec: vmiName: rhel8-server-ocs status: - phase: Succeeded <----------- Now it has finished the migration + phase: Succeeded <----------- Now it has finished the migration ~~~ -Finally view the `vmi` object and you can see the new underlying host (was *ocp4-worker3*, now it's *ocp4-worker1*); your environment may have different source and destination hosts, depending on where `rhel8-server-ocs` was initially scheduled. Don't forget to `ctrl-c`out of the running watch command. +Finally view the `vmi` object and you can see the new underlying host (was *ocp4-worker3*, now it's *ocp4-worker1*); your environment may have different source and destination hosts, depending on where `rhel8-server-ocs` was initially scheduled. Don't forget to `ctrl-c` out of the running watch command: ```execute-1 oc get vmi @@ -188,13 +190,6 @@ rhel8-server-ocs 45h Running 192.168.123.64 ocp4-worker1.aio.example.com In this environment, we have one virtual machine running on *ocp4-worker1* (yours may vary). Let's take down the node for maintenance and ensure that our workload (VM) stays up and running: > **NOTE**: You may need to modify the below command to specify the worker listed in the output from above. -> -> **NOTE**: You **may** lose your browser based web terminal like this: -> -> ![live-mirate-terminal-closed](img/live-mirate-terminal-closed.png) -> -> If this happens you'll need to wait a few seconds for it to become accessible again. Try reloading the terminal from the hamburger menu in the upper right of the browser. This is because the router and/or workbook pods may be running on the worker you put into maintenance. 
- ```copy cat << EOF | oc apply -f - @@ -208,12 +203,18 @@ spec: EOF ``` -Check the `NodeMaintenance` object is created: +See that the `NodeMaintenance` object is created: ~~~bash nodemaintenance.nodemaintenance.kubevirt.io/worker1-maintenance ~~~ +> **NOTE**: You **may** lose your browser based web terminal like this: +> +> ![live-mirate-terminal-closed](img/live-mirate-terminal-closed.png) +> +> If this happens you'll need to wait a few seconds for it to become accessible again. Try reloading the terminal from the drop down menu in the upper right of the browser. This is because the OpenShift router and/or workbook pods may be running on the worker you put into maintenance. + Assuming you're connected back in, let's check the status of our environment: ```execute-1 @@ -232,7 +233,7 @@ And check the nodes: oc get nodes ``` -Notice that scheculing is disabled for `Worker1`: +Notice that scheduling is disabled for `worker1` (or the worker that you specified maintenance for): ~~~bash NAME STATUS ROLES AGE VERSION @@ -251,7 +252,7 @@ Now check the VMI: oc get vmi ``` -Note that the VM has been automatically live migrated back to an available worker and is not on the `SchedulingDisabled` worker, as per the `EvictionStrategy`, in this case `ocp4-worker3.aio.example.com`. +Note that the VM has been **automatically** live migrated back to an available worker and is not on the `SchedulingDisabled` worker, as per the `EvictionStrategy`, in this case `ocp4-worker3.aio.example.com`. ~~~bash NAME AGE PHASE IP NODENAME READY @@ -286,15 +287,13 @@ It should return the following output: nodemaintenance.nodemaintenance.kubevirt.io "worker1-maintenance" deleted ~~~ -Then check the node again: - -> **NOTE**: Change the below value for your environment. +Then check the same node again: ```copy oc get node/ocp4-worker1.aio.example.com ``` -Note the removal of the `SchedulingDisabled` annotation on the '**STATUS**' column. Also important is that just because this node has become active again doesn't mean the virtual machine returns to it automatically. +Note the removal of the `SchedulingDisabled` annotation on the '**STATUS**' column. Also important is that just because this node has become active again doesn't mean the virtual machine returns to it automatically, i.e. it won't "fail back", it will reside on the new host: ~~~bash NAME STATUS ROLES AGE VERSION @@ -307,6 +306,8 @@ Before proceeding let's remove the `rhel8-server-ocs` virtual machine as well as oc delete vm/rhel8-server-ocs ``` +This should be confirmed with: + ~~~bash virtualmachine.kubevirt.io "rhel8-server-ocs" deleted ~~~ @@ -323,4 +324,4 @@ It should show the removal: persistentvolumeclaim "rhel8-ocs" deleted ~~~ -Choose "Clone a Virtual Machine" to continue with the lab. +Choose "**Clone a Virtual Machine**" to continue with the lab. diff --git a/files/lab/workshop/content/masquerade.md b/files/lab/workshop/content/masquerade.md index 7d4312a..3424384 100644 --- a/files/lab/workshop/content/masquerade.md +++ b/files/lab/workshop/content/masquerade.md @@ -1,10 +1,10 @@ -Up to this point we've provisioned our virtual machines on a single bridged network using the more traditional networking models that you may typically encounter in traditional virtualisation environments. OpenShift 4.x utilises Multus as the default CNI, which permits the user to attach multiple network interfaces from different "delegate CNI's" simultaneously. 
Therefore, one of the models available for OpenShift Virtualization is to provide networking with a combination of attachments, including "pod networking". This mean we can have virtual machines attached to the same networks that the container pods are attached to. This has the added benefit of allowing virtual machines to leverage all of the Kubernetes models for services, load balancers, ingress, network policies, node ports, and a wide variety of other functions. 
+Up to this point we've provisioned our virtual machines on a single bridged network using the more traditional networking models that you may typically encounter in traditional virtualisation environments. OpenShift 4.x utilises Multus as the default CNI, which permits the user to attach multiple network interfaces from different "delegate CNI's" simultaneously. 

-Pod networking is also referred to as "masquerade mode" when it's related to OpenShift Virtualization, and it can be used to hide a virtual machine’s outgoing traffic behind the pod IP address. Masquerade mode uses Network Address Translation (NAT) to connect virtual machines to the Pod network backend through a Linux bridge. Masquerade mode is the recommended binding method for VM's that need to use (or have access to) the default pod network.
+Therefore, one of the models available for OpenShift Virtualization is to provide networking with a combination of attachments, including "pod networking". This means we can have virtual machines attached to the same networks that the container pods are attached to. This has the added benefit of allowing virtual machines to leverage all of the Kubernetes models for services, load balancers, ingress, network policies, node ports, and a wide variety of other functions.

-Utilising pod networking requires the interface to connect using the `masquerade: {}` method and for IPv4 addresses to be allocated via DHCP. We are going to test this with one of the same Fedora images (or PVC's) we used in the previous lab section.
+Pod networking is also referred to as "**masquerade** mode" when it's related to OpenShift Virtualization, and it can be used to hide a virtual machine’s outgoing traffic behind the pod IP address. Masquerade mode uses Network Address Translation (NAT) to connect virtual machines to the Pod network backend through a Linux bridge. Masquerade mode is the recommended binding method for VMs that need to use (or have access to) the default pod network.

-In our virtual machine configuration file we need to instruct the machine to use masquerade mode for the interface (there's no command to execute here, just for your information):
+Utilising pod networking requires the interface to connect using the `masquerade: {}` method and for IPv4 addresses to be allocated via DHCP. We are going to test this with one of the same Fedora images (or PVC's) we used in the previous lab section. In our virtual machine configuration file we need to instruct the machine to use masquerade mode for the interface (there's no command to execute here, just for your information):

~~~
interfaces:
@@ -21,7 +21,7 @@ networks:
  pod: {}
~~~

-So let's go ahead and create a `VirtualMachine` using our existing Fedora 34 image via a PVC we created previously. *Look closely, we are using our cloned PVC so we get the benefits of the installed **NGINX** server, qemu-guest-agent and ssh configuration!*
+So let's go ahead and create a `VirtualMachine` using our **existing** Fedora 34 image via a PVC we created previously. 
*Look closely, we are using our cloned PVC so we get the benefits of the installed **NGINX** server, qemu-guest-agent and ssh configuration!*
 
 ```execute-1
 cat << EOF | oc apply -f -
@@ -84,6 +84,8 @@ This should start a new VM:
 virtualmachine.kubevirt.io/fc34-podnet created
 ~~~
 
+After a few minutes, this VM should be started, and we can check with this command:
+
 ```execute-1
 oc get vmi
 ```
@@ -95,14 +97,14 @@ NAME          AGE   PHASE     IP             NODENAME                     READ
 fc34-podnet   68s   Running   10.129.2.210   ocp4-worker2.aio.example.com   True
 ~~~
 
-We can see the Virtual Machine Instance is created on the pod network, note the IP address in the 10.12x range:
-
-If you recall, all VMs are managed by pods, and the pod manages the networking. So we should see the same IP address on the pod associated with the VM:
+We can see the Virtual Machine Instance is created on the *pod network*, note the IP address in the **10.12x** range. If you recall, all VMs are managed by pods, and the pod manages the networking. So we should see the same IP address on the pod associated with the VM:
 
 ```execute
 oc get pods -o wide
 ```
 
+Which clearly shows that the IP address the VM has matches the IP of the virt-launcher pod, noting that your IP addresses may be different to the example, but should match:
+
 ~~~bash
 NAME                              READY   STATUS    RESTARTS   AGE     IP             NODE                           NOMINATED NODE   READINESS GATES
 virt-launcher-fc34-podnet-cxztw   1/1     Running   0          3m41s   10.129.2.210   ocp4-worker2.aio.example.com   <
@@ -129,20 +131,25 @@ $ oc describe pod/virt-launcher-fc34-podnet-cxztw | grep -A 9 networks-status
 }]
 ~~~
 
-As this lab guide is being hosted within the same cluster, you should be able to ping and connect into this VM directly from the terminal window on this IP, adjust to suit your config:
+As this lab guide is running within a pod itself and being hosted within the same cluster, you should be able to ping and connect into this VM directly from the terminal window on this IP, adjust to suit your config:
 
 ```copy
-ping -c1 10.128.2.27
+ping -c4 10.129.2.210
 ```
 
+Which should return:
+
 ~~~bash
-$ ping -c1 10.129.2.210
+$ ping -c4 10.129.2.210
 PING 10.129.2.210 (10.129.2.210) 56(84) bytes of data.
 64 bytes from 10.129.2.210: icmp_seq=1 ttl=63 time=1.69 ms
+64 bytes from 10.129.2.210: icmp_seq=2 ttl=63 time=1.69 ms
+64 bytes from 10.129.2.210: icmp_seq=3 ttl=63 time=1.69 ms
+64 bytes from 10.129.2.210: icmp_seq=4 ttl=63 time=1.69 ms
 
 --- 10.129.2.210 ping statistics ---
-1 packets transmitted, 1 received, 0% packet loss, time 0ms
+4 packets transmitted, 4 received, 0% packet loss, time 0ms
 rtt min/avg/max/mdev = 1.692/1.692/1.692/0.000 ms
 ~~~
 
@@ -152,7 +159,7 @@ You can also SSH to the machine (password is *%bastion-password%*):
 $ ssh root@10.129.2.210
 ```
 
-Once in, take a look around:
+Once in, take a look around and view the networking configuration that the guest sees:
 
 ~~~bash
 [root@fc34-podnet ~]# ip a s eth0
@@ -165,30 +172,34 @@ Once in, take a look around:
 
 ~~~
 
-When done, don't forget to exit:
+*Wait*, why is this IP address **10.0.2.2** inside of the guest?! Well, in OpenShift Virtualization, every VM has the "same IP" inside the guest, and the hypervisor is bridging (**masquerading**) the pod network into the guest via a tap device. So don't be alarmed when you see the IP address being different here. 
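+
+If you'd like to confirm the same thing from the cluster side, an optional check like the following can be run from the lab terminal once you've exited the SSH session (we do that in the next step). It's just a sketch and assumes the VMI is still named `fc34-podnet`; it only reads standard fields from the VMI object:
+
+```copy
+# The interface binding requested in the VMI spec - masquerade shows up as an (empty) object
+oc get vmi fc34-podnet -o jsonpath='{.spec.domain.devices.interfaces[*]}{"\n"}'
+# The IP address the cluster reports for the VMI - the pod network IP, not 10.0.2.2
+oc get vmi fc34-podnet -o jsonpath='{.status.interfaces[*].ipAddress}{"\n"}'
+```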
-```copy +This becomes even more evident if we curl the IP address of our VM on the pod network, recalling that we installed NGINX on this VM's disk image in an earlier lab step, you'll see that we curl on the pod IP, but it shows the server address as something different. Let's leave our VM first to validate this: + +```execute-1 exit ``` Make sure you ensure you're disconnected before proceeding: ```execute-1 -$ oc whoami +oc whoami ``` +Which should show: + ~~~bash system:serviceaccount:workbook:cnv ~~~ -**Wait a second**... look at the IP address that's been assigned to our VM and the one it recognises... it's different to the one we connected on. Well, this is the masquerading in action - our host is masquerading (on all ports) from the pod network to the network given to our VM (**10.0.2.2** in our case). - -This becomes even more evident if we curl the IP address of our VM on the pod network, recalling that we installed NGINX on this VM's disk image in an earlier lab step, you'll see that we curl on the pod IP, but it shows the server address as something different: +Now if we curl the IP address on the pod network (making sure you change this IP for the one that your VM is using on the pod network, **not** **10.0.2.2**.): ~~~bash $ curl http://10.129.2.210 ~~~ +Which should show the following: + ~~~ Server address: 10.0.2.2:80 Server name: fc34-podnet @@ -197,9 +208,13 @@ URI: / Request ID: ae6332e46227c84fe604b6f5c9ec0822 ~~~ +Note the "server address" being the **10.0.2.2** address. + + + ## Exposing the VM to the outside world -In this step we're going to interface our VM to the outside world using OpenShift/Kubernetes networking constructs, namely services and routes. This makes our VM available via the OpenShift ingress service and you should be able to hit our VM from the internet. As validated in the previous step, our VM has NGINX running on port 80, so let's use the `virtctl` utility to expose the virtual machine instance on that port. +In this step we're going to interface our VM to the outside world using OpenShift/Kubernetes networking constructs, namely services and routes, demonstrating that whilst this is a VM, it's just "another" type of workload as far as OpenShift is concerned, and the same principles should be able to be applied. This step makes our VM available via the OpenShift ingress service and you should be able to hit our VM from the internet. As validated in the previous step, our VM has NGINX running on port 80, so let's use the `virtctl` utility, a CLI tool for managing OpenShift Virtualization above what `oc` provides, to expose the virtual machine instance on that port. First `expose` it on port 80 and create a service (an entrypoint) based on our VM: @@ -263,4 +278,4 @@ We've successfully exposed our VM externally to the internet via the pod network oc delete vm/fc34-podnet ``` -Choose "Templates" to move to the next lab. +Choose "**Templates**" to move to the next lab. diff --git a/files/lab/workshop/content/micro-segmentation.md b/files/lab/workshop/content/micro-segmentation.md index 87693c8..ed2e650 100644 --- a/files/lab/workshop/content/micro-segmentation.md +++ b/files/lab/workshop/content/micro-segmentation.md @@ -1,14 +1,10 @@ # Background: About network policy -Network policies allow you to configure isolation policies for individual pods. Network policies do not require administrative privileges, giving developers more control over the applications in their projects. 
+Network policies allow you to configure isolation policies for individual pods, i.e. limiting the ability for others to access the pod. Network policies do not require administrative privileges, giving developers more control over the applications in their projects. You can use network policies to create logical zones in the SDN that map to your organisation network zones. The benefit of this approach is that the location of running pods becomes irrelevant because network policies allow you to segregate traffic regardless of where it originates. -You can use network policies to create logical zones in the SDN that map to your organization network zones. The benefit of this approach is that the location of running pods becomes irrelevant because network policies allow you to segregate traffic regardless of where it originates. +By default, all Pods in a project are accessible from other Pods and network endpoints. To isolate one or more pods in a project, you can create NetworkPolicy objects in that project to indicate the allowed incoming connections. Project administrators can create and delete NetworkPolicy objects within their own project. If a pod is matched by selectors in one or more NetworkPolicy objects, then the Pod will accept only connections that are allowed by at least one of those NetworkPolicy objects. A Pod that is not selected by any NetworkPolicy objects is fully accessible. -By default, all Pods in a project are accessible from other Pods and network endpoints. To isolate one or more Pods in a project, you can create NetworkPolicy objects in that project to indicate the allowed incoming connections. Project administrators can create and delete NetworkPolicy objects within their own project. - -If a Pod is matched by selectors in one or more NetworkPolicy objects, then the Pod will accept only connections that are allowed by at least one of those NetworkPolicy objects. A Pod that is not selected by any NetworkPolicy objects is fully accessible. - -The following example NetworkPolicy objects demonstrate supporting different scenarios: +The following **example** (don't apply this, we're only showing this as an example) NetworkPolicy objects demonstrate supporting different scenarios: - Deny all traffic: To make a project deny by default, add a NetworkPolicy object that matches all pods but accepts no traffic: @@ -41,18 +37,17 @@ spec: port: 443 ~~~ -# Exercise: Configuring network policy with OpenShift SDN +# Exercise: Configuring network policy -By default, all Pods in a project are accessible from other Pods and network endpoints.
-In this exercise, we'll **restrict access between pods and VMs** as seen from image below:
+By default, all pods in a project are accessible from other Pods and network endpoints. In this exercise, we'll **restrict access between pods and VMs** as seen from image below, utilising the applications that we deployed in a previous step:
-Let's verify that nationalparks could access mongodb-mlbparks and mlbparks could access to mongodb-nationalparks. +Let's verify that we currently have full unrestricted network access within our project, i.e. our "nationalparks" application can access "mongodb-mlbparks" database VM and "mlbparks" application can access to "mongodb-nationalparks" database VM - something that you may not *actually* want in a production environment, as it's an application talking to the wrong database: -1. Click **Workloads** → **Pods** from the side menu. +1. Make sure that you're in the "Administrator perspective", and click **Workloads** → **Pods** from the side menu. -2. Click **nationalparks** pod. +2. Click **nationalparks** pod (not the deploy pod, the one that's running). 3. Click the **Terminal** tab. @@ -72,8 +67,12 @@ sh-4.4$ curl mongodb-nationalparks:27017 It looks like you are trying to access MongoDB over HTTP on the native driver port. ~~~ +The output above demonstrates unrestricted network access back/forth - all applications can contact all other databases. + Now, let's apply following network policy to restrict access to **mongodb-mlbparks** from **nationalparks**. +> **NOTE**: If the formatting breaks in the output below and it doesn't validate properly in the UI, a better place to copy these from might be the original source page, [here](https://github.com/RHFieldProductManagement/ocp4_aio_role_deploy_cnvlab/blob/main/files/lab/workshop/content/micro-segmentation.md). + 1. Click **Networking** → **NetworkPolicies** from the side menu. 2. Click the **CreateNetworkPolicy** button. @@ -136,17 +135,17 @@ spec: - protocol: TCP port: 27017 ~~~ -Finally, Let's verify that nationalparks could access only mongodb-nationalparks and mlbparks could access to mongodb-mlbparks. +Finally, let's verify that the "nationalparks" application can only access the "mongodb-nationalparks" database VM and the "mlbparks" application can only access the "mongodb-mlbparks" database VM: 1. Click **Workloads** → **Pods** from the side menu. -2. Click **nationalparks** pod. +2. Click **nationalparks** pod (not the deploy VM - the one that's running) 3. Click the **Terminal** tab. 4. Run following commands and verify both mongodb services are accessible. - > Note: Curl's timeout is greater than 60 seconds. If you'd like to increase that add a `-m 5` to change it to five seconds (or another value of your choosing). + > Note: curl's timeout is greater than 60 seconds. If you'd like to increase that add a `-m 5` to change it to five seconds (or another value of your choosing). ~~~bash sh-4.4$ curl mongodb-nationalparks:27017 It looks like you are trying to access MongoDB over HTTP on the native driver port. @@ -164,3 +163,5 @@ sh-4.2$ curl mongodb-nationalparks:27017 curl: (7) Failed connect to mongodb-nationalparks:27017; Connection timed out sh-4.2$ ``` + +That's it! We're done! I hope you've enjoyed attending this lab. Any questions, please let us know! :-) diff --git a/files/lab/workshop/content/sample-application-architecture.md b/files/lab/workshop/content/sample-application-architecture.md index 2dd7ce1..c5b929c 100644 --- a/files/lab/workshop/content/sample-application-architecture.md +++ b/files/lab/workshop/content/sample-application-architecture.md @@ -1,48 +1,14 @@ -This lab introduces you to the architecture of the ParksMap application used throughout this workshop, to get a better understanding of the things you'll be doing from a developer perspective. 
ParksMap is a polyglot geo-spatial data visualization application built using the microservices architecture and is composed of a set of services which are developed using different programming languages and frameworks. +So far we've deployed some basic workloads and shown you the main interfaces for OpenShift Virtualization, but now we're going to move into a more realistic "real-world" scenario. This lab section introduces you to the architecture of the **ParksMap** application used throughout this workshop, to get a better understanding of the things you'll be doing from a *developer* perspective. ParksMap is a polyglot geo-spatial data visualisation application built using the microservices architecture and is composed of a set of services which are developed using different programming languages and frameworks, roughly resembling the following architecture: -The main service is a web application which has a server-side component in charge of aggregating the geo-spatial APIs provided by multiple independent backend services and a client-side component in JavaScript that is responsible for visualizing the geo-spatial data on the map. The client-side component which runs in your browser communicates with the server-side via WebSockets protocol in order to update the map in real-time. +The main service is a web application which has a server-side component in charge of aggregating the geo-spatial APIs provided by multiple independent backend services and a client-side component in JavaScript that is responsible for visualising the geo-spatial data on the map. The client-side component which runs in your browser communicates with the server-side via WebSockets protocol in order to update the map in real-time. There will be a set of independent backend services deployed that will provide different mapping and geo-spatial information. The set of available backend services that provide geo-spatial information are: * WorldWide National Parks * Major League Baseball Stadiums in North America -The original source code for this application is available [here](https://github.com/openshift-roadshow/parksmap-web). +The original source code for this application is available [here](https://github.com/openshift-roadshow/parksmap-web). The server-side component of the ParksMap web application acts as a communication gateway to all the available backends. These backends will be dynamically discovered by using service discovery mechanisms provided by OpenShift which will be discussed in more details in the following labs. The backend applications use *MongoDB* to persist map and geo-spatial information. In order to showcase how containers and virtual machines can run together in an OpenShift Environment, you will be deploying MongoDB applications as virtual machines. -The server-side component of the ParksMap web application acts as a communication gateway to all the available backends. These backends will be dynamically discovered by using service discovery mechanisms provided by OpenShift which will be discussed in more details in the following labs. - -The backend applications use MongoDB to persist map and geo-spatial information. In order to showcase how containers and virtual machines can run together in an OpenShift Environment, you will be deploying MongoDB applications as virtual machines. - -### Retrieve kubeadmin password - -For the parksmap labs we're going to be using the `kubeadmin` login for the OpenShift web console. 
This is necessary as we've not enabled the extra permissions to access the VM machine consoles to the lab user. To retrive the `kubeadmin` password do the following: - -From within this lab guide, SSH to the bastion node: - -```execute-1 -ssh %bastion-username%@%bastion-host% -``` - -When you see the prompt, agree to the SSH certificates by typing "yes", and then enter **%bastion-password%** as the password. Then you can execute following command to get the kubeadmin password: - -```execute-1 -echo $(cat %kubeadmin-password-file%) -``` - -Note the password (or copy it) and exit the ssh session: - -```execute-1 -exit -``` - -> **NOTE**: Make sure that you exit from this session before proceeding! - -```execute-1 -oc whoami -``` - -The above output should show "**system:serviceaccount:workbook:cnv**", if it doesn't or it shows "**system:admin**" you've not yet disconnected from the session. - -Once you've done this you can proceed to the lab. \ No newline at end of file +Select "**Deploying Parksmap**" to continue. \ No newline at end of file diff --git a/files/lab/workshop/content/snapshot-restore.md b/files/lab/workshop/content/snapshot-restore.md index 7ebf470..30d8237 100644 --- a/files/lab/workshop/content/snapshot-restore.md +++ b/files/lab/workshop/content/snapshot-restore.md @@ -1,12 +1,8 @@ -### Background: Virtual Machine snapshots +### Background: Virtual Machine Snapshots -A snapshot represents the state and data of a virtual machine (VM) at a specific point in time. You can use a snapshot to restore an existing VM to a previous state (represented by the snapshot) for backup and disaster recovery or to rapidly roll back to a previous development version. +A snapshot represents the state and data of a virtual machine (VM) at a specific point in time. You can use a snapshot to restore an existing VM to a previous state (represented by the snapshot) for backup and disaster recovery, or to rapidly roll back to a previous development version. You can create and delete virtual machine (VM) snapshots for VMs, whether the VMs are powered off (**offline**) or on (**online**). -You can create and delete virtual machine (VM) snapshots for VMs, whether the VMs are powered off (**offline**) or on (**online**). - -When taking a snapshot of a running VM, the controller checks that the **QEMU guest agent** is installed and running. If so, it freezes the VM file system before taking the snapshot, and thaws the file system after the snapshot is taken. - -The snapshot stores a copy of each Container Storage Interface (CSI) volume attached to the VM and a copy of the VM specification and metadata. Snapshots cannot be changed after creation. +In OpenShift Virtualization, when taking a snapshot of a *running* VM, the controller checks that the **QEMU guest agent** is installed and running. If so, it freezes (quiesce) the file system before taking the snapshot, and thaws the file system after the snapshot is taken, allowing for crash-consistent backups. The snapshot stores a copy of each Container Storage Interface (CSI) volume attached to the VM, and a copy of the VM specification and metadata. Snapshots cannot be changed after creation. 
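+
+Because VM snapshots ride on the CSI snapshot support of the underlying storage, a couple of optional, read-only checks can confirm your cluster is ready before you start. This is only a sketch: the exact snapshot classes returned depend on your storage configuration, and the `AgentConnected` condition is the one current KubeVirt versions report for the guest agent on the `mongodb-nationalparks` VM we use below:
+
+```copy
+# List the CSI VolumeSnapshotClasses available in the cluster
+oc get volumesnapshotclass
+# Check whether the guest agent is reported as connected for the running VM
+oc get vmi mongodb-nationalparks -o jsonpath='{.status.conditions[?(@.type=="AgentConnected")].status}{"\n"}'
+```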
With the VM snapshots feature, cluster administrators and application developers can:
 
 - Create a new snapshot
@@ -16,21 +12,19 @@ With the VM snapshots feature, cluster administrators and application developers
 OpenShift Virtualization supports VM snapshots on the following:
 
 - Red Hat OpenShift Container Storage
-- Any other storage provider with the Container Storage Interface (CSI) driver that supports the Kubernetes Volume Snapshot API
+- Any other storage provider with the Container Storage Interface (CSI) driver that supports the *Kubernetes Volume Snapshot API*
 
 ### Exercise: Installing QEMU guest agent
 
-To create snapshots of an online (Running state) VM with the highest integrity, install the QEMU guest agent.
-
-The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM’s file system as much as possible, depending on the system workload. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. The conditions under which the snapshot was taken are reflected in the snapshot indications that are displayed in the web console or CLI.
+To create snapshots of an online (Running state) VM with the highest integrity, we need to install the QEMU guest agent. The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM’s file system. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a "best-effort" snapshot is taken. The conditions under which the snapshot was taken are reflected in the snapshot indications that are displayed in the web console or CLI.
 
-> **NOTE**: The qemu-guest-agent is widely available and available by default in Red Hat virtual machines. It might be already installed and enabled on the virtual machine used in this lab module.
+> **NOTE**: The qemu-guest-agent is widely available and available by default in Red Hat virtual machines. It might be already installed and enabled on the virtual machine used in this lab module, but we'll quickly show you how to install and enable it.
 
-1. Navigate to the OpenShift Web UI so we can access the console of the `mongodb-nationalparks` virtual machine. From the "**Administrator**" view, you'll need to select "**Workloads**" --> "**Virtualization**" --> "**Virtual Machines**" --> "**mongodb-nationalparks**" --> "**Console**". You'll be able to login with "**centos/redhat**", noting that you may have to click on the console window for it to capture your input.
+1. Navigate to the OpenShift Web UI so we can access the console of the `mongodb-nationalparks` virtual machine. From the "**Administrator**" view, you'll need to select "**Workloads**" --> "**Virtualization**" --> "**Virtual Machines**" --> "**mongodb-nationalparks**" --> "**Console**". You'll be able to login with "**redhat/openshift**", noting that you may have to click on the console window for it to capture your input:
 
 > **TIP**: You might find `Serial Console` option is more responsive.
 
-> **NOTE**: If you don't see an VMs make sure to change to the parksmap-dem project via the drop down at the top of the console.
+> **NOTE**: If you don't see any VMs, make sure to change to the **parksmap-demo** project via the drop down at the top of the console.
 
 2. 
Once you're in the virtual machine's console install the QEMU guest agent on the virtual machine ```copy @@ -44,20 +38,20 @@ sudo systemctl enable --now qemu-guest-agent ### Exercise: Creating a virtual machine snapshot in the web console -Virtual machine (VM) snapshots can be created either by using the web console or in the CLI. In this exercise, let's create a snapshot of our mongodb database vm by using the web console. +Virtual machine (VM) snapshots can be created either by using the web console, or in the CLI. In this exercise, let's create a snapshot of our MongoDB database VM by using the web console:
1. Click **Workloads** → **Virtualization** from the side menu.
 
-2. Click the **Virtual Machines** tab.
+2. Click the **Virtual Machines** tab (if it's not already selected).
 
 3. Select `mongodb-nationalparks` virtual machine to open its **Overview** screen.
 
 4. Click the **Snapshots** tab and then click **Take Snapshot**.
 
-5. Fill in the **Snapshot Name** and optional **Description** fields.
+5. Fill in the **Snapshot Name** (call it whatever you like) and optional **Description** fields.
 
 6. Because the VM has a cloud-init disk that cannot be included in the snapshot, select the **"I am aware of this warning and wish to proceed"** checkbox.
 
@@ -67,19 +61,19 @@ Virtual machine (VM) snapshots can be created either by using the web console or
 
-Once you click Save to create snapshot, the vm controller checks that the QEMU guest agent is installed and running. If so, it freezes the VM file system before taking the snapshot, and initiates snapshot creation on actual storage system for each Container Storage Interface (CSI) volume attached to the VM, a copy of the VM specification and metadata is also created.
+Once you click "**Save**" to create the snapshot, the VM controller checks that the QEMU guest agent is installed and running. If so, it freezes the VM file system before taking the snapshot, and initiates snapshot creation on the actual storage system for each Container Storage Interface (CSI) volume attached to the VM; a copy of the VM specification and metadata is also created.
 
-It should take just a few seconds to actually create the snapshot and make it Ready to use. Once the snapshot becomes **Ready** then it can be used to restore the virtual machine to that specific point in time then the snapshot is taken.
+It should take just a few seconds to actually create the snapshot and make it "**Ready**" to use. Once the snapshot becomes **Ready** it can be used to restore the virtual machine to the specific point in time when the snapshot was taken.
 
 ### Exercise: Destroy database
 
-After taking an online snapshot of the database vm, let's destroy the database by forcefully deleting everything under it's data path.
+After taking an online snapshot of the database VM, let's destroy the database by forcefully deleting everything under its data path.
 
-1. Navigate to the OpenShift Web UI so we can access the console of the `mongodb-nationalparks` virtual machine. You'll need to select "**Workloads**" --> "**Virtualization**" --> "**Virtual Machines**" --> "**mongodb-nationalparks**" --> "**Console**". You'll be able to login with "**centos/redhat**", noting that you may have to click on the console window for it to capture your input.
+1. Navigate to the OpenShift Web UI so we can access the console of the `mongodb-nationalparks` virtual machine. You'll need to select "**Workloads**" --> "**Virtualization**" --> "**Virtual Machines**" --> "**mongodb-nationalparks**" --> "**Console**". You'll be able to login with "**redhat/openshift**", noting that you may have to click on the console window for it to capture your input.
 
 2. Once you're in the virtual machine, delete everything under it's data path.
 ```copy
@@ -92,18 +86,17 @@ sudo rm -rf /var/lib/mongo/*
 sudo systemctl start mongod
 ```
 
-Now you can check by refreshing `ParksMap` [web page](http://parksmap-parksmap-demo.apps.hgdmbmc.dynamic.opentlc.com/), it should now **fail** to load national parks locations from the backend service and no longer display them on the map. MLB Parks should be fine (check the USA map for these :) ).
+Now you can check by refreshing the `ParksMap` [web page](http://parksmap-%parksmap-project-namespace%.%cluster_subdomain%) - it should now **fail** to load national parks locations from the backend service and no longer display them on the map. MLB Parks should still be fine (check the USA map for these) as these come from the other MongoDB database VM.
 
 ### Exercise: Restoring a virtual machine from a snapshot in the web console
 
-In this exercise, let's restore our mongodb database vm by using the web console to the snapshot created in the previous exercise.
-You can only restore to a powered off (offline) VM so we will first power off the virtual machine in this exercise.
+In this exercise, let's use the web console to restore our MongoDB database VM to the snapshot created in the previous exercise. You can only restore to a powered off (offline) VM so we will first power off the virtual machine in this exercise.
 
 1. Click **Workloads** → **Virtualization** from the side menu.
-2. Click the **Virtual Machines** tab.
+2. Click the **Virtual Machines** tab (if you're not already on it).
 3. Select `mongodb-nationalparks` virtual machine to open its **Overview** screen.
-4. If the mongodb-nationalparks virtual machine is running click on its name and then click **Actions** → **Stop Virtual Machine** to power it down.
-5. Wait for the machine to display a "Stopped" status
+4. 
If the **mongodb-nationalparks** virtual machine is running, click on its name and then click **Actions** → **Stop Virtual Machine** to power it down.
+5. Wait for the machine to display a "**Stopped**" status.
 6. Click the **Snapshots** tab. The page displays a list of snapshots associated with the virtual machine.
 
 6. There are two ways to restore a snapshot in the console; you can use either here:
@@ -113,16 +106,13 @@ You can only restore to a powered off (offline) VM so we will first power off th
 
-Once you click Restore to restore vm from the snapshot, it initiates snapshot restoration on actual storage system for each Container Storage Interface (CSI) volume attached to the VM and included in the snaphot, VM specification and metadata is also restored.
-It should take just a few seconds to actually restore the snapshot and make the VM ready to be powered on again.
+Once you click Restore to restore the VM from the snapshot, it initiates snapshot restoration on the actual storage system for each Container Storage Interface (CSI) volume attached to the VM and included in the snapshot; the VM specification and metadata are also restored. It should take just a few seconds to actually restore the snapshot and make the VM ready to be powered on again - don't be alarmed if this process happens instantly; we're relying on the storage capabilities of Ceph behind the scenes to do this for us and it's incredibly efficient with snapshots:
 
-After the snapshot was restored successfully and its status become **Ready**, you can then click **Actions** → **Start Virtual Machine** to power it on.
+After the snapshot was restored successfully and its status becomes **Ready**, you can then click **Actions** → **Start Virtual Machine** to power it on. Once the VM is powered on and boots successfully, you can refresh the `ParksMap` [web page](http://parksmap-%parksmap-project-namespace%.%cluster_subdomain%). It should now successfully load national parks locations again from the restored backend service and start displaying them on the map again, but please note it may take a few minutes for the VM to start up and for MongoDB to start serving data again.
 
-Once the VM is powered on and boots successfully, you can refresh `ParksMap` the [web page](http://parksmap-parksmap-demo.apps.hgdmbmc.dynamic.opentlc.com/). It should now successfully load national parks locations again from the restored backend service and start displaying them on the map again.
-
-### Background: Virtual machine snapshot controller and custom resource definitions (CRDs)
+### Background: Virtual machine snapshot controller and custom resource definitions (CRDs)
 
 The VM snapshot feature introduces three new API objects defined as CRDs for managing snapshots:
@@ -136,7 +126,7 @@ The VM snapshot controller binds a `VirtualMachineSnapshotContent` object with t
 
 ### Exercise: Creating an virtual machine snapshot in the CLI
 
-In previous exercises in this module, we created and restored vm snapshot in the OpenShift web console. However, It's also possible to do same operations in the CLI using the CRDs above. Using CLI and Yaml/Json definitions of `VirtualMachineSnapshot` and `VirtualMachineRestore` objects to create and restore snapshot respectively, allows automating all snapshot releated operations.
+In previous exercises in this module, we created and restored a VM snapshot in the OpenShift web console. However, it's also possible to do the same operations in the CLI using the CRDs above. Using the CLI and YAML/JSON definitions of `VirtualMachineSnapshot` and `VirtualMachineRestore` objects to create and restore snapshots respectively allows automating all snapshot related operations.
 
 In this exercise, let's create another snapshot of our mongodb database vm, this time by using the cli.
 
@@ -144,6 +134,8 @@ In this exercise, let's create another snapshot of our mongodb database vm, this
 ```execute
 oc get vmsnapshots
 ```
+Which should show the following (or similar, depending on what you named your original snapshot):
+
 ~~~bash
 NAME                          SOURCEKIND       SOURCENAME              PHASE       READYTOUSE   CREATIONTIME   ERROR
 mongodb-nationalparks-snap0   VirtualMachine   mongodb-nationalparks   Succeeded   true         1h
@@ -164,19 +156,23 @@ spec:
     name: mongodb-nationalparks
 EOF
 ```
+Which should then show:
+
 ~~~bash
 virtualmachinesnapshot.snapshot.kubevirt.io/mongodb-nationalparks-snap1 created
 ~~~
 
-3. **Optional**: As in the previous exercise, the snapshot creation will take a few seconds in the background, and you can use the wait command and monitor the status of the snapshot.
+3. **Optional**: As in the previous exercise, the snapshot creation will take a few seconds in the background, and you can use the wait command and monitor the status of the snapshot, although this may immediately signal that the condition has been met, which is confirmation that the snapshot has been successfully taken. 
```execute oc wait vmsnapshot mongodb-nationalparks-snap1 --for condition=Ready ``` -4. List the existing snapshots in the project again to verify that the new vm snapshot is created successfully. +4. List the existing snapshots in the project again to verify that the new vm snapshot is created successfully: ```execute oc get vmsnapshots ``` +This should now show two snapshots, one you just created via the CLI, and the other the one we created via the UI earlier: + ~~~bash NAME SOURCEKIND SOURCENAME PHASE READYTOUSE CREATIONTIME ERROR mongodb-nationalparks-snap0 VirtualMachine mongodb-nationalparks Succeeded true 1h @@ -188,6 +184,8 @@ mongodb-nationalparks-snap1 VirtualMachine mongodb-nationalparks Succeeded oc describe vmsnapshot mongodb-nationalparks-snap1 ``` +Which should show: + ~~~bash Name: mongodb-nationalparks-snap1 Namespace: backup-test @@ -233,8 +231,7 @@ Events: Normal SuccessfulVirtualMachineSnapshotContentCreate 5m45s snapshot-controller Successfully created VirtualMachineSnapshotContent vmsnapshot-content-7a46dfc9-9904-42e9-a0a3-c02ef43d0f2b ~~~ -6. `VirtualMachineSnapshotContent` objects represent a provisioned resource on the cluster, a vm snapshot in our case. It is created by the VM snapshot controller and contains references to all resources required to restore the VM. The underlying kubernetes StorageClass, PersistentVolume, VolumeSnapshot objects used and created for each attached disk and VM's metadata information is stored in the `VirtualMachineSnapshotContent` object. So it contains all the information needed to restore the VM to that specific point in time that snapshot is taken. -You can see these details by describing the VirtualMachineSnapshotContent bound to our vm snapshot. This value for your environment is provided at the bottom of the previous command. +6. `VirtualMachineSnapshotContent` objects represent a provisioned resource on the cluster, a VM snapshot in our case. It is created by the VM snapshot controller and contains references to all resources required to restore the VM. The underlying Kubernetes StorageClass, PersistentVolume(s), VolumeSnapshot objects used and created for each attached disk, and VM's metadata information is stored in the `VirtualMachineSnapshotContent` object. So it contains all the information needed to restore the VM to that specific point in time that snapshot is taken. You can see these details by describing the `VirtualMachineSnapshotContent` bound to our VM snapshot. This value for your environment is provided at the bottom of the previous command. ```copy oc describe vmsnapshotcontent vmsnapshot-content-7a46dfc9-9904-42e9-a0a3-c02ef43d0f2b ``` @@ -243,7 +240,7 @@ oc describe vmsnapshotcontent vmsnapshot-content-7a46dfc9-9904-42e9-a0a3-c02ef43 To see how to restore the VM in the CLI, let's delete the VM's boot disk completely this time after powering of the VM. -1. Click **Workloads** → **Virtualization** from the side menu. +1. Return to the **web console** briefly and click **Workloads** → **Virtualization** from the side menu. 2. Click the **Virtual Machines** tab. @@ -253,29 +250,30 @@ To see how to restore the VM in the CLI, let's delete the VM's boot disk complet 5. Click the **Disks** tab. The page displays a list of disks attached to the virtual machine. -6. Select the **disk** named `mongodb-nationalparks` which is the boot disk of our database VM, and click the Options menu kebab and select **Delete**. +6. 
Select the **disk** named `mongodb-nationalparks-rootdisk` which is the boot disk of our database VM, and click the Options menu kebab and select **Delete**. 7. In the confirmation pop-up window, select the **Delete DataVolume and PVC** checkbox and click **Detach** to delete the disk completely.
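+
+If you'd like to double-check from the terminal that the disk really has been removed before carrying on, listing what's left in the project is enough. This is an optional check - the exact object names will depend on how the disk was originally created in your environment:
+
+```copy
+# The deleted boot disk's DataVolume and PVC should no longer appear in this list
+oc get dv,pvc | grep mongodb-nationalparks
+```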
-Now you can check by refreshing `ParksMap` [web page](http://parksmap-parksmap-demo.apps.hgdmbmc.dynamic.opentlc.com/), it should **fail** to load national parks locations from the backend service and no longer display them on the map. +Now you can check by refreshing `ParksMap` [web page](http://parksmap-%parksmap-project-namespace%.%cluster_subdomain%), and as before it should **fail** to load national parks locations from the backend service and no longer display them on the map. # Exercise: Restore the VM's boot disk -In this exercise, let's restore our mongodb database vm by using the CLI to the snapshot created in the previous exercise. -You can only restore to a powered off (offline) VM so we will first power off the virtual machine in this exercise. +In this exercise, let's restore our MongoDB database VM (by using the CLI) to the snapshot created in the previous exercise. -1. List the existing vmrestore objects in the project. There should be already a vmrestore object in the project because we initiated one in the previous exercise using the web console. +1. List the existing vmrestore objects in the project. There should be already a `vmrestore` object in the project because we initiated one in the previous exercise using the web console. ```execute oc get vmrestores ``` +Which should show the following, although don't be alarmed if you have multiple here - perhaps you clicked the "Restore" button multiple times in the UI previously: + ~~~bash NAME TARGETKIND TARGETNAME COMPLETE RESTORETIME ERROR mongodb-nationalparks-snap0-restore-pb4mbl VirtualMachine mongodb-nationalparks true 1h ~~~ -2. Create a `VirtualMachineRestore` object that specifies the name of the VM we want to restore and the name of the snapshot to be used as the source. The name of the vm and it's snapshot will be `mongodb-nationalparks` and `mongodb-nationalparks-snap1` in this example respectively. Right after creating the `VirtualMachineRestore` object, the snapshot controller updates the status fields of the VirtualMachineRestore object and replaces the existing VM configuration with the snapshot content. +2. Create a `VirtualMachineRestore` object that specifies the name of the VM we want to restore and the name of the snapshot to be used as the source. The name of the VM and it's snapshot will be `mongodb-nationalparks` and `mongodb-nationalparks-snap1` in this example respectively. Right after creating the `VirtualMachineRestore` object, the snapshot controller updates the status fields of the VirtualMachineRestore object and replaces the existing VM configuration with the snapshot content. ```execute-1 cat << EOF | oc apply -f - apiVersion: snapshot.kubevirt.io/v1alpha1 @@ -290,11 +288,13 @@ spec: virtualMachineSnapshotName: mongodb-nationalparks-snap1 EOF ``` +Which should show that the `virtualmachinerestore` object has been created: + ~~~bash virtualmachinerestore.snapshot.kubevirt.io/mongodb-nationalparks-vmrestore1 created ~~~ -3. **Optional**: As in the previous exercise, the vm restoration will take a little seconds in the background. You can use the wait command and monitor the status of the snapshot. +3. **Optional**: As in the previous exercise, the VM restoration will take a little seconds in the background. You can use the wait command and monitor the status of the snapshot. 
```execute
oc wait vmrestore mongodb-nationalparks-vmrestore1 --for condition=Ready
```
 
@@ -303,6 +303,8 @@ oc wait vmrestore mongodb-nationalparks-vmrestore1 --for condition=Ready
 ~~~execute-1
 oc get vmrestores
 ~~~
+Which should now show our latest one as "**complete=true**":
+
 ~~~bash
 NAME                                         TARGETKIND       TARGETNAME              COMPLETE   RESTORETIME   ERROR
 mongodb-nationalparks-snap0-restore-pb4mbl   VirtualMachine   mongodb-nationalparks   true       1h
@@ -313,6 +315,8 @@ mongodb-nationalparks-vmrestore1   VirtualMachine   mongodb-nationalpa
 ```execute
 oc describe vmsnapshot mongodb-nationalparks-snap1
 ```
+Which should then show:
+
 ~~~bash
 Name:         mongodb-nationalparks-vmrestore1
 Namespace:    backup-test
@@ -347,7 +351,7 @@ Status:
   Restores:
     Data Volume Name:         restore-805d3352-6c72-468d-b8c6-36f083d2d68e-mongodb-nationalparks
     Persistent Volume Claim:  restore-805d3352-6c72-468d-b8c6-36f083d2d68e-mongodb-nationalparks
-    Volume Name:              mongodb-nationalparks
+    Volume Name:              mongodb-nationalparks
     Volume Snapshot Name:     vmsnapshot-7a46dfc9-9904-42e9-a0a3-c02ef43d0f2b-volume-mongodb-nationalparks
 Events:
   Type    Reason                          Age    From                 Message
@@ -363,8 +369,6 @@ oc describe vmsnapshot mongodb-nationalparks-snap1 | grep true
 ```
 
-You can now start your VM back up. From the console go to **Actions** → **Start Virtual Machine** to power the VM on.
-
-Once the VM is powered on and boots successfully, you can refresh `ParksMap` [web page](http://parksmap-parksmap-demo.apps.hgdmbmc.dynamic.opentlc.com/). It should successfully load national parks locations from the backend service and start displaying them on the map again.
+You can now start your VM back up. From the console go to **Actions** → **Start Virtual Machine** to power the VM on, and you will have likely noticed that the root disk has been automatically reattached. Once the VM is powered on and boots successfully, you can refresh the `ParksMap` [web page](http://parksmap-%parksmap-project-namespace%.%cluster_subdomain%); after a few minutes of boot-up time it should successfully load national parks locations from the backend service and start displaying them on the map again.
 
-That's it for taking vm snapshots and performing restores - we've created snapshots of our mongodb database vm using both OpenShift web console and CLI, and restored it after deleting data files and underlying vm disk.
+That's it for taking VM snapshots and performing restores - we've created snapshots of our MongoDB database VM using both the OpenShift web console and CLI, and restored it after both deleting data files on the root disk and removing the underlying VM disk entirely. Select "**Hot-plug**" below to continue.
diff --git a/files/lab/workshop/content/templates.md b/files/lab/workshop/content/templates.md
index 4085119..772d660 100644
--- a/files/lab/workshop/content/templates.md
+++ b/files/lab/workshop/content/templates.md
@@ -4,7 +4,7 @@ Virtual machines consist of a virtual machine definition and one or more disks t
 Every virtual machine template requires a **boot source**, which is a fully configured virtual machine disk image including configured drivers. Each virtual machine template contains a virtual machine definition with a pointer to the boot source. Each boot source has a predefined name and namespace. For some operating systems, a boot source is automatically provided. If it is not provided, then an administrator must prepare a custom boot source.
 
-The namespace `openshift-virtualization-os-images` enables the feature and is installed with the OpenShift Virtualization Operator. 
Once the boot source feature is installed, you can create boot sources, attach them to templates, and create virtual machines from the templates. +The namespace `openshift-virtualization-os-images` houses these templates and is installed with the OpenShift Virtualization Operator. Once the boot source feature is installed, you can create boot sources, attach them to templates, and create virtual machines from the templates. Boot sources are defined by using a persistent volume claim (PVC) that is populated by uploading a local file, cloning an existing PVC, importing from a registry, or by URL. Attach a boot source to a virtual machine template by using the web console. After the boot source is attached to a virtual machine template, you create any number of fully configured ready-to-use virtual machines from the template. @@ -32,7 +32,7 @@ The new PVC will then be set as the boot source of the selected CentOS 8 templat 1. In the OpenShift Virtualization console, click **Workloads** → **Virtualization** from the side menu. 2. Click the **Templates** tab. -3. Identify the `CentOS 8.0+ VM` template to configure its boot source and click **Add source**. +3. Identify the `CentOS 8.0+ VM` template, and under the "**Boot source**" item select "**Add source**" to configure its boot source and click **Add source**. 4. In the **Add boot source to template window**, select **Import via URL (creates PVC)** from the **Boot source type** drop down. 5. Input `http://%bastion-host%:81/rhel8-kvm.img` as the URL of the guest image into the **Import URL** field. @@ -57,18 +57,14 @@ You can also view the import progress by listing the data volumes in the `opensh oc get datavolumes -n openshift-virtualization-os-images ``` +Which should show the following: + ~~~bash NAME PHASE PROGRESS RESTARTS AGE -centos7 Succeeded 100.0% 253d centos8 ImportInProgress 2.00% 7m28s -fedora Succeeded 100.0% 253d -rhcos-490 Succeeded 100.0% 1 22d -rhel7 Succeeded 100.0% 1 253d -rhel8 Succeeded 100.0% 255d -win10 Succeeded 100.0% 253d ~~~ -Once the import progress is reached up to 100% and succeeded, you can verify that a boot source was added to the template: +Once the import progress is reached up to 100% and succeeded (you can keep re-running the previous command to check on progress), you can verify that a boot source was added to the template: 1. In the OpenShift Virtualization console, click **Workloads** → **Virtualization** from the side menu. @@ -80,4 +76,4 @@ You can now use this template to create CentOS 8 virtual machines. ![Templates](img/templates-boot-source-verify.png) -That's it for adding boot sources to a template. We have imported a Centos 8 cloud disk image into a new PVC and attached that onto a CentOS 8 virtual machine template which we will use to create new virtual machines in the next labs. +That's it for adding boot sources to a template. We have imported a CentOS 8 cloud disk image into a new PVC and attached that onto a CentOS 8 virtual machine template which we will use to create new virtual machines in the next labs. Let's continue on by selecting the "**Parksmap Application**" button below. 
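+
+Before you move on, for reference, the "**Import via URL (creates PVC)**" option used above is doing the same job a CDI `DataVolume` manifest would do if created by hand. The sketch below is illustrative only - the name and namespace match the boot source we just added and the URL is the one from the steps above, while the access mode and size are assumptions - so there's no need to apply it if you've already added the boot source through the UI:
+
+```copy
+cat << EOF | oc apply -f -
+apiVersion: cdi.kubevirt.io/v1beta1
+kind: DataVolume
+metadata:
+  name: centos8
+  namespace: openshift-virtualization-os-images
+spec:
+  source:
+    http:
+      url: "http://%bastion-host%:81/rhel8-kvm.img"
+  pvc:
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 10Gi
+EOF
+```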
diff --git a/files/lab/workshop/modules.yaml b/files/lab/workshop/modules.yaml index 50e1229..bbde89c 100644 --- a/files/lab/workshop/modules.yaml +++ b/files/lab/workshop/modules.yaml @@ -66,7 +66,7 @@ modules: exit_sign: Deploy second DB deploy-database-yaml: name: Deploy Database VM using YAML - exit_sign: Network Isolation + exit_sign: Backup and Restore micro-segmentation: name: Network Isolation for Virtual Machines exit_sign: Lab Complete!