
Commit

rdoxenham committed Mar 10, 2022
1 parent 721a8f3 commit f49b1fa
Showing 13 changed files with 264 additions and 350 deletions.
4 changes: 2 additions & 2 deletions files/lab/deploy/parksmap-application-template.yaml
@@ -361,7 +361,7 @@ objects:
role: frontend
spec:
containers:
- - image: quay.io/erkanercan/parksmap:latest
+ - image: quay.io/openshiftroadshow/parksmap:latest
imagePullPolicy: Always
name: "${PM_APPLICATION_NAME}"
ports:
@@ -433,4 +433,4 @@ objects:
name: "${PM_APPLICATION_NAME}"
weight: 100
port:
- targetPort: 8080-tcp
+ targetPort: 8080-tcp
36 changes: 19 additions & 17 deletions files/lab/workshop/content/cloning.md
Expand Up @@ -336,7 +336,7 @@ cat >> /etc/systemd/system/nginx.service << EOF
Description=Nginx Podman container
Wants=syslog.service
[Service]
- ExecStart=/usr/bin/podman run --net=host docker.io/nginxdemos/hello:plain-text
+ ExecStart=/usr/bin/podman run --net=host quay.io/roxenham/nginxdemos:plain-text
ExecStop=/usr/bin/podman stop --all
[Install]
WantedBy=multi-user.target
@@ -393,12 +393,12 @@ Let's quickly verify that this works as expected. You should be able to navigate


```copy
- curl http://192.168.123.69
+ curl http://192.168.123.65
```

~~~bash
- $ curl http://192.168.123.69
- Server address: 192.168.123.69:80
+ $ curl http://192.168.123.65
+ Server address: 192.168.123.65:80
Server name: fedora
Date: 25/Nov/2021:15:09:21 +0000
URI: /
@@ -568,35 +568,35 @@ fc34-clone 84s Running True
fc34-original 76m Stopped False
~~~

- This machine should also get an IP address after a few minutes - it won't be the same as the original VM as the clone was given a new MAC address:
+ This machine should also get an IP address after a few minutes - it won't be the same as the original VM, as the clone was given a new MAC address. You may need to be patient here until it shows you the IP address of the new VM:

```execute-1
oc get vmi
```

- In our example, this IP is "*192.168.123.70*":
+ In our example, this IP is "*192.168.123.66*":

~~~bash
NAME AGE PHASE IP NODENAME READY
- fc34-clone 88s Running 192.168.123.70 ocp4-worker2.aio.example.com True
+ fc34-clone 88s Running 192.168.123.66 ocp4-worker2.aio.example.com True
~~~
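
If you would rather not keep re-running `oc get vmi` while you wait for the address, a minimal sketch of pulling just the IP with a JSONPath query is shown below - this assumes the clone is named `fc34-clone` as above and that the address is reported on the first interface in the VMI status:

~~~bash
# Print only the clone's IP address once KubeVirt has reported it (empty output means "not yet")
oc get vmi fc34-clone -o jsonpath='{.status.interfaces[0].ipAddress}'
~~~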

> **Note** Give the command 2-3 minutes to report the IP.
- This machine will also be visible from the OpenShift Virtualization console. You can login using "**root/redhat**" if you want to try:
+ This machine will also be visible from the OpenShift Virtualization console, which you can navigate to using the top "**Console**" button, or by using your dedicated tab if you've created one. You can log in using "**root/redhat**" by going to "**Workloads**" --> "**Virtualization**" --> "**fc34-clone**" --> "**Console**", if you want to try:

<img src="img/fc34-clone-console.png"/>

### Test the clone

- Like before, we should be able to just directly connect to the VM on port 80 via `curl` and view our simple NGINX based application responding. Let's try it! Remember to use to the IP address from yoir environment:
+ Like before, we should be able to just directly connect to the VM on port 80 via `curl` and view our simple NGINX based application responding. Let's try it! Remember to use the IP address from **your** environment, as the example below may be different:

~~~copy
- $ curl http://192.168.123.70
+ $ curl http://192.168.123.66
~~~

Which should show something similar to the following, if our clone was successful:

~~~bash
- Server address: 192.168.123.70:80
+ Server address: 192.168.123.66:80
Server name: fedora
Date: 25/Nov/2021:15:58:20 +0000
URI: /
@@ -652,17 +652,19 @@ Here our running VM is showing with our new IP address, in the example case it's

~~~bash
NAME AGE PHASE IP NODENAME READY
- fc34-original-clone 89s Running 192.168.123.71 ocp4-worker3.aio.example.com True
+ fc34-original-clone 89s Running 192.168.123.66 ocp4-worker3.aio.example.com True
~~~

Like before, we should be able to confirm that it really is our clone:

~~~bash
- $ curl http://192.168.123.71
+ $ curl http://192.168.123.66
~~~

Which should show something similar to this:

~~~bash
- Server address: 192.168.123.71:80
+ Server address: 192.168.123.66:80
Server name: fedora
Date: 25/Nov/2021:16:26:05 +0000
URI: /
@@ -682,4 +684,4 @@ virtualmachine.kubevirt.io "fc34-original" deleted
virtualmachine.kubevirt.io "fc34-original-clone" deleted
~~~

- Choose "Masquerade Networking" to continue with the lab.
+ Choose "**Masquerade Networking**" to continue with the lab.
54 changes: 20 additions & 34 deletions files/lab/workshop/content/deploy-application-components.md
@@ -1,10 +1,8 @@
- In this lab, we will use OpenShift Web Console to deploy the frontend and backend components of the ParksMap application.
- Parksmap application consists of one frontend web application, two backend applications and 2 databases.
+ In this lab, we will use the OpenShift Web Console to deploy the frontend and backend components of the ParksMap application, which comprises one frontend web application, two backend applications, and two databases:

- ParksMap frontend web application, also called `parksmap`, which uses OpenShift's service discovery mechanism to discover the backend services deployed and show their data on the map (a rough sketch of this discovery appears after this list).

- - Nationalparks backend application queries for national parks information (including their
- coordinates) that is stored in a MongoDB database.
+ - NationalParks backend application queries for national parks information (including their coordinates) that is stored in a MongoDB database.

- MLBParks backend application queries Major League Baseball stadiums in the US that are stored in another MongoDB database.
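
The exact wiring is handled for you by the application, but as a rough, hypothetical sketch of what that service discovery looks like: the frontend asks the OpenShift API for the backend *Services* and *Routes* in its project (which is why it is granted *view* access later in this lab), typically by filtering on a label. The label name below is an assumption for illustration only, not something defined in this lab:

~~~bash
# Hypothetical label selector - the real label used by the parksmap frontend may differ
oc get services -l type=parksmap-backend
oc get routes -l type=parksmap-backend
~~~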

@@ -17,31 +15,32 @@ Parksmap frontend and backend components are shown in the diagram below:

### 1. Creating the Project

- As a first step, we need to create a project where Parksmap application will be deployed.
- You can create the project with the following command:
+ As a first step, we need to create a project where the ParksMap application will be deployed. You can create the project with the following command:

```execute
oc new-project %parksmap-project-namespace%
```
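
If you switch projects at any point during the lab, you can return to this one with a one-liner - `%parksmap-project-namespace%` is the same project name used in the command above:

~~~bash
# Switch the CLI context back to the project created above
oc project %parksmap-project-namespace%
~~~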

### 2. Grant Service Account View Permissions

- The parksmap frontend application continously monitors the **routes** of the backend applications. This requires granting additional permissions to access OpenShift API to learn about other **Pods**, **Services**, and **Route** within the **Project**.
+ The ParksMap frontend application continuously monitors the **routes** of the backend applications. This requires granting additional permissions to access the OpenShift API to learn about other **Pods**, **Services**, and **Routes** within the **Project**.


```execute
oc policy add-role-to-user view -z default
```

- The *oc policy* command above is giving a defined _role_ (*view*) to the default user so that applications in current project can access OpenShift API.
+ You should see the following output:

- ### 3. Login to OpenShift Web Console
+ ~~~bash
+ clusterrole.rbac.authorization.k8s.io/view added: "default"
+ ~~~

- We will use OpenShift Web Console to deploy Parksmap Web Application components.
+ The *oc policy* command above gives a defined _role_ (*view*) to the `default` service account so that applications in the current project can access the OpenShift API.
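
If you want to convince yourself that the permission is now in place, a small optional check (a sketch using `oc auth can-i` with service account impersonation; the project name assumes the one created earlier) is:

~~~bash
# Ask the API whether the project's default service account can now list routes
oc auth can-i list routes --as=system:serviceaccount:%parksmap-project-namespace%:default
# This should answer "yes" once the view role has been granted
~~~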

- Please go to the [Web Console](http://console-openshift-console.%cluster_subdomain%/k8s/cluster/projects) outside the lab environment login as the kubeadmin user with the credentials you retrived previously.
+ ### 3. Navigate to the OpenShift Web Console

- > **NOTE:** As mentioned, since we require the kubeadmin user for these labs all steps need to be completed in the web console outside the lab environment.
+ Select the blue "**Console**" button at the top of the window to follow the steps below in the OpenShift web console as part of this lab guide.

### 4. Search for the Application Template

@@ -51,7 +50,7 @@ If you are in the in the Administrator perspective, switch to Developer perspect

![parksmap-developer-persepctive](img/parksmap-developer-persepctive.png)

- From the menu, select the `+Add` panel. Find the parksmap project and select it:
+ From the menu, select the `+Add` panel. Find the **parksmap-demo** project and select it (if you're not asked to choose a project, it's probably because you've already selected one; simply go to the "**Project**" drop-down at the top and select "**All Projects**" to continue):

![parksmap-choose-project](img/parksmap-choose-project.png)

@@ -63,13 +62,9 @@ You will see a screen where you have multiple options to deploy applications to

<br/>

- We will be using `Templates` to deploy the application components. A template describes a set of objects that can be parameterized and processed to produce a list of objects for creation by OpenShift Container Platform.
-
- A template can be processed to create anything you have permission to create within a project, for example services, build configurations, and deployment configurations. A template can also define a set of labels to apply to every object defined in the template.
-
- You can create a list of objects from a template using the CLI or, if a template has been uploaded to your project or the global template library, using the web console.
+ We will be using `Templates` to deploy the application components. A template describes a set of objects that can be parameterised and processed to produce a list of objects for creation by OpenShift Container Platform. A template can be processed to create anything you have permission to create within a project, for example services, build configurations, and deployment configurations. A template can also define a set of labels to apply to every object defined in the template.

- In the `Search` text box, enter *parksmap* to find the application template.
+ You can create a list of objects from a template using the CLI or, if a template has been uploaded to your project or the global template library, using the web console. In the `Search` text box, enter *parksmap* to find the application template that we've already pre-loaded for you:

<br/>

@@ -87,7 +82,7 @@ Then click on the `Parksmap` template to open the popup menu and then click on t

<br/>

- This will open a dialog that will allow you to configure the template. This template allows you to configure the following parameters:
+ This will open a dialog that will *allow* you to configure the template. This template allows you to configure the following parameters:

- Parksmap Web Application Name
- Mlbparks Application Name
@@ -100,17 +95,12 @@ This will open a dialog that will allow you to configure the template. This temp

<br/>

- Next click the blue *Create* button without changing default parameters. You will be directed to the *Topology* page, where you should see the visualization for the `parksmap` deployment config in the `workshop` application.
- OpenShift now creates all the Kubernetes resources to deploy the application, including *Deployment*, *Service*, and *Route*.
+ Next click the blue *Create* button **without changing default parameters**. You will be directed to the *Topology* page, where you should see the visualization for the `parksmap` deployment config in the `workshop` application. OpenShift now creates all the Kubernetes resources to deploy the application, including *Deployment*, *Service*, and *Route*.
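
If you'd like to confirm from the terminal what the template has created, a quick optional check (resource names depend on the parameters you accepted) is:

~~~bash
# List everything the template created in the project
oc get all -n %parksmap-project-namespace%
~~~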


### 6. Check the Application

- These few steps are the only ones you need to run to all 3 application components of `parksmap` on OpenShift.
-
- It will take the `parksmap` application a little while to complete.
-
- Each OpenShift node that is asked to run the images of applications has to pull (download) it, if the node does not already have it cached locally. You can check on the status of the image download and deployment in the *Pod* details page, or from the command line with the `oc get pods` command to check the readiness of pods or you can monitor it from the Developer Console.
+ These few steps are the only ones you need to run to deploy all 3 application components of `parksmap` on OpenShift. It will take a little while for the `parksmap` application deployment to complete. Each OpenShift node that is asked to run the application images has to pull (download) them if it does not already have them cached locally. You can check on the status of the image download and deployment in the *Pod* details page, from the command line with the `oc get pods` command, or from the Developer Console.
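
For example, from the command line (the `-w` flag watches for changes; press Ctrl+C to stop):

~~~bash
# Watch the pods come up until they all report Ready
oc get pods -n %parksmap-project-namespace% -w
~~~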

Your screen will end up looking something like this:
<br/>
@@ -119,7 +109,7 @@ Your screen will end up looking something like this:

<br/>

- This is the *Topology* page, where you should see the visualization for the `parksmap` ,`nationalparks` and `mlbparks` deployments in the `workshop` application.
+ This is the *Topology* page, where you should see the visualisation for the `parksmap`, `nationalparks` and `mlbparks` deployments in the `workshop` application.


### 7. Access the Application
@@ -132,11 +122,7 @@ If you click on the `parksmap` entry in the Topology view, you will see some inf

<br/>

- On the "Resources" tab, you will see that there is a single *Route* which allows external access to the `parksmap` application. While the *Services* panel provide internal abstraction and load balancing information within the OpenShift environment.
-
- The way that external clients are able to access applications running in OpenShift is through the OpenShift routing layer. And the data object behind that is a *Route*.
-
- Also note that there is a decorator icon on the `parksmap` visualization now. If you click that, it will open the URL for your *Route* in a browser:
+ On the "**Resources**" tab, you will see that there is a single *Route* which allows external access to the `parksmap` application, while the *Services* panel provides internal abstraction and load balancing information within the OpenShift environment. The way that external clients are able to access applications running in OpenShift is through the OpenShift routing layer, and the data object behind that is a *Route*. Also note that there is a decorator icon on the `parksmap` visualisation now. If you click that, it will open the URL for your *Route* in a browser:
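
If you prefer the CLI, the same URL can be read from the *Route* object; the route name below is assumed to match the application name, so check `oc get routes` first if yours differs:

~~~bash
# List the routes in the project, then print just the hostname of the parksmap route
oc get routes
oc get route parksmap -o jsonpath='{.spec.host}'
~~~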

![parksmap-decorator](img/parksmap-decorator.png)

@@ -148,7 +134,7 @@ This application is now available at the URL shown in the Developer Perspective.

<br/>

- You can notice that `parksmap` application does not show any parks as we haven't deployed database servers for the backends yet.
+ You will notice that the `parksmap` application does not show any parks, as we haven't deployed the database servers for the backends yet. We'll do that in the next step; select "**Deploy first DB**" below to continue.


