The lab setup uses the Edge Management Ansible Collection to deploy and configure the environment, so before running the lab deployment you will need to:
- Prepare your devices/VMs and Laptop
- Prepare the Edge Management Ansible Collection prerequisites
- Prepare the demo specific prerequisites
After the deployment you will have the following running services at the Edge Management node that you will use in your demo:
- Ansible Automation Platform Controller: 8080 (HTTP) / 8443 (HTTPS)
- Ansible Automation Platform Event-Driven Ansible Controller: 8082 (HTTP) / 8445 (HTTPS)
- Cockpit: 9090
- Gitea: 3000
Note
As part of this lab we are not deploying any secret manager service integrated with Ansible Automation Platform, so you will find some variables containing passwords in plain text in several Jobs, but it is important to mention that in production that won't be the case.
In order to deploy/prepare the lab you will only need the Edge Management node; the Edge Device won't be used/deployed until you run the demo steps.
Remember that there are two devices/VMs involved in the demo:
- Edge Management node: I've been able to deploy everything on a VM with 4 vCores and 10GB of memory. Storage will depend on the number of RHDE images that you generate; if you want to be sure, give it 150GB (it probably won't use all that space). The Edge Management node will need to have RHEL 9.x installed (this lab has been tested with RHEL 9.3); a "minimal install" is enough. You will need to either have a passwordless sudo user in that system or include the sudo password in the Ansible inventory.
Note
Remember that, as part of this demo, a Terraform script is provided to create the VM, install RHEL and perform the required configuration in it. This is not required to deploy the lab, but it will simplify things in case you want to run this server directly in AWS.
- Edge Device: This will depend on what you install on top, but for the base deployment you can use 1.5 vCores, 3GB of memory and a 50GB disk.
Your laptop will need Ansible installed to run the playbooks contained in the Edge Management Ansible Collection (see next section). You will also need git to clone the repo in the next step and, if using VMs, a virtualization hypervisor (libvirt and Virtual Machine Manager are recommended).
Clone this repo and move your CLI prompt to the ansible directory on the path where the actual demo is located. The demo directory should have an organization similar to the one shown below; you will need to move inside the ansible directory, which contains, among others, the inventory, playbooks and vars used for the demo.
├── terraform
...
├── ansible
│ ├── inventory
│ ├── playbooks
│ │ ├── main.yml
│ ├── templates
...
├── docs
...
└── README.md
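For example, a minimal sketch of the clone-and-move step (the repository URL and directory name are placeholders for your actual clone):

git clone <repo-url>
cd <your cloned demo directory>/ansible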
When you find a reference to a path during this lab deployment guide, it will assume that your CLI is under the ansible directory, so playbooks will in fact be <your cloned demo directory>/ansible/playbooks.
Note
You might find that you don't have the vars/secrets.yml file, since that file is created as part of the prerequisites.
An optional Terraform script is provided to simplify the creation of the Edge Management server in AWS.
It has some prerequisites if you want to use it:
- Install Terraform on your laptop
- Prepare your AWS credentials in ~/.aws/credentials:
[default]
aws_access_key_id = your_access_key_id
aws_secret_access_key = your_secret_access_key
Note
If you are a Red Hatter you could order an AWS Blank environment on demo.redhat.com in order to get a valid AWS access key and secret.
- Prepare the Terraform variables in the file ../terraform/rhel_vm.tfvars
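For orientation only, entries in that file follow standard Terraform variable-assignment syntax. The actual variable names are defined by the shipped rhel_vm.tfvars.x86_64 / rhel_vm.tfvars.aarch64 templates (see the deployment section), so treat the keys below as hypothetical and adjust the provided template instead of writing the file from scratch:

# Hypothetical keys for illustration; use the ones defined in the provided .tfvars templates
aws_region    = "us-east-1"
instance_type = "t3.xlarge"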
You need to install the Ansible Collection on your laptop:
ansible-galaxy collection install luisarizmendi.rh_edge_mgmt --upgrade
Note
Even if you have already installed the collection, it is a good idea to run the command above so the collection playbooks are updated if there has been any change since you downloaded it for the first time.
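You can confirm which version of the collection ended up installed with the standard ansible-galaxy listing command:

# Show the installed collection and its version
ansible-galaxy collection list | grep rh_edge_mgmt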
The collection's setup_rh_edge_mgmt_node and config_rh_edge_mgmt_node roles have some prerequisites. This is the summary (all of them are needed for installing the services):
- Ansible Automation Platform Manifest file
- Red Hat Customer Portal Offline Token
- Red Hat Pull Secret
- Red Hat User and Password
In order to use Automation controller you need to have a valid subscription via a manifest.zip file, which you can download from access.redhat.com. You can find the steps in the Ansible Automation Platform documentation:
- Go to Subscription Allocations and click "New Subscription Allocation"
- Enter a name for the allocation and select Satellite 6.8 as "Type"
- Add the subscription entitlements needed where Ansible Automation Platform is available (open the "Subscriptions" tab and click "Add Subscriptions")
- Go back to the "Details" tab and click "Export Manifest"
Create a files directory under ansible and save your manifest.zip file in that files directory (a different location can be configured with the manifest_file variable).
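If you saved the manifest somewhere else, point the role at it via that variable; a minimal sketch, assuming you add it next to the other role vars in playbooks/main.yml (the path shown is just an example):

### COLLECTION VARS
manifest_file: /home/myuser/Downloads/manifest.zip   # example path; omit to use the default files directory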
Note
If you want to check the contents of the ZIP file you will see a consumer_export.zip file and a signature inside.
If you use the default path you should have the manifest.zip file in this path:
├── terraform
...
├── ansible
│ ├── files
│ │ └── manifest.zip
│ ├── inventory
│ ├── playbooks
...
│ ├── templates
...
├── docs
...
└── README.md
This token is used to authenticate against the customer portal and download software. It is needed to deploy the Ansible Automation Platform server and to download the standard RHEL ISO.
It can be generated here.
Note
Remember that offline tokens expire after 30 days of inactivity. If your offline token is not valid, you won't be able to download the aap.tar.gz file.
Take note of the token, you will use it when creating the Vault Secrets file.
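If you want to double-check that the token is still valid before the deployment, you can exchange it for an access token against Red Hat's SSO endpoint (this is the standard Red Hat API token flow; OFFLINE_TOKEN is a placeholder for your actual token):

curl -s https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token \
  -d grant_type=refresh_token \
  -d client_id=rhsm-api \
  -d refresh_token=$OFFLINE_TOKEN

A JSON response containing an access_token means the offline token is still active.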
This Pull Secret will be needed to pull the container images used by Microshift from Red Hat's container registry. It is needed to deploy the Ansible Automation Platform server.
Get your pull secret from the Red Hat Console
Take note of the pull-secret, you will use it when creating the Vault Secrets file.
Instead of passing your secrets in plain text, it's better to create a vault secret file:
mkdir vars
ansible-vault create vars/secrets.yml
Note
Remember the password that you used to encrypt the file, since it will be needed to access the contents.
You will need to include the information that you gather during the previous steps (Offline Token and Pull Secret) and also your Red Hat Account user name and password:
pull_secret: '<your pull secret>'
offline_token: '<your offline token>'
red_hat_user: <your RHN user>
red_hat_password: <your RHN password>
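If you need to review or update the secrets later, the standard ansible-vault subcommands work on this file (you will be prompted for the same vault password):

ansible-vault view vars/secrets.yml   # print the decrypted contents
ansible-vault edit vars/secrets.yml   # open the decrypted file in your editor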
If you use the default path you should have the secrets.yml file in this path:
├── terraform
...
├── ansible
│ ├── files
...
│ ├── inventory
│ ├── playbooks
...
│ ├── templates
...
│ └── vars
│     └── secrets.yml
├── docs
...
└── README.md
Prepare the Ansible inventory file:
---
all:
  hosts:
    edge_management:
      ansible_host: <your edge manager server ip>
      ansible_port: 22
      ansible_user: <sudoer user - default is admin>
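If that user needs a password for sudo (as mentioned in the Edge Management node requirements), you can add Ansible's standard become-password variable under the same host entry. A sketch; note that for anything beyond a lab you would rather vault this value instead of keeping it in plain text:

all:
  hosts:
    edge_management:
      ansible_host: <your edge manager server ip>
      ansible_port: 22
      ansible_user: <sudoer user - default is admin>
      ansible_become_password: <sudo password>   # only needed if sudo prompts for a password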
Also prepare the variables in the playbooks/main.yml playbook. You may want to change:
- System architecture in system_arch (either x86_64 or aarch64). Remember that aarch64 is under testing.
- Microshift release in microshift_release
- Edge Management server sudo user and passwords in image_builder_admin_name and image_builder_admin_password. This is the user with sudo privileges that you created in the RHEL server where you installed the Image Builder.
- You will also need to include your container repository in apps_registry (see next point).
---
- name: RHDE and AAP Demo
  hosts:
    - edge_management
  tasks:
    - name: Install management node
      ansible.builtin.include_role:
        name: luisarizmendi.rh_edge_mgmt.setup_rh_edge_mgmt_node
      vars:
        ### COLLECTION VARS
        system_arch: "x86_64"
        microshift: true
        microshift_release: 4.15

    - name: Config management node
      ansible.builtin.include_role:
        name: luisarizmendi.rh_edge_mgmt.config_rh_edge_mgmt_node
      vars:
        ### COLLECTION VARS
        system_arch: "x86_64"
        image_builder_admin_name: admin
        image_builder_admin_password: R3dh4t1!
        image_builder_custom_rpm_files: ../templates/custom-rpms
        gitea_admin_repos_template: ../templates/gitea_admin_repos
        gitea_user_repos_template: ../templates/gitea_user_repos
        aap_config_template: ../templates/aap_config.j2
        aap_repo_name: aap

        ### DEMO SPECIFIC VARS
        apps_registry: quay.io/luisarizmendi
Note
If you are using the directory tree of this example you could keep the variables that you find there (gitea_admin_repos_template, aap_config_template, ...).
So far you have prepared the prerequisites of any demo/lab deployed with the Edge Management Ansible Collection, but this demo also has some specific requirements, which are described below.
During the demo there are some optional steps where you will need to push or move tags in certain container images (take a look at minute 25:25 in the video), so you will need to have access to a container image repository (the one that you configured in the apps_registry variable in the playbooks/main.yml playbook).
Note
If you don't want to show those demo steps, you can keep apps_registry: quay.io/luisarizmendi and the applications will be deployed, although you won't be able to alter the container tags in the registry.
You will probably want to use Quay.io, so first check that you can log in:
podman login -u <your-quay-user> quay.io
Once you have access to the registry, copy the container images that we will be using (they are public under my Quay.io user luisarizmendi). You can pull them to your laptop and then push them to your registry, or you can just use skopeo:
skopeo copy docker://quay.io/luisarizmendi/2048:v1 docker://quay.io/<your-quay-user>/2048:v1
skopeo copy docker://quay.io/luisarizmendi/2048:v2 docker://quay.io/<your-quay-user>/2048:v2
skopeo copy docker://quay.io/luisarizmendi/2048:v3 docker://quay.io/<your-quay-user>/2048:v3
skopeo copy docker://quay.io/luisarizmendi/2048:prod docker://quay.io/<your-quay-user>/2048:prod
skopeo copy docker://quay.io/luisarizmendi/simple-http:v1 docker://quay.io/<your-quay-user>/simple-http:v1
skopeo copy docker://quay.io/luisarizmendi/simple-http:v2 docker://quay.io/<your-quay-user>/simple-http:v2
skopeo copy docker://quay.io/luisarizmendi/simple-http:prod docker://quay.io/<your-quay-user>/simple-http:prod
Note
The container images are multi-architecture, so you will be able to use them whether your system is aarch64 or x86_64.
Remember to change the visibility of both the 2048 and simple-http images to "public" in each "Repository Settings".
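You can quickly verify that a copy succeeded, and, once visibility is set to public, that the image is reachable without credentials, using skopeo inspect (shown here for one of the tags):

skopeo inspect docker://quay.io/<your-quay-user>/2048:prod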
The demo makes use of Microshift, which needs the OCP and Fast-Datapath repositories enabled, so your subscription must have them available.
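On any registered RHEL system you can check whether your subscription exposes those repositories; the grep pattern below matches the usual repository IDs, which vary by RHEL release and architecture:

sudo subscription-manager repos --list | grep -iE 'rhocp|fast-datapath'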
Note
The deployment will take long; expect something like 60-70 minutes depending on the number of configured users, the VM/device resources and the network connectivity.
If you want to use the provided Terraform script to create the server in AWS, you will need to move one level up in the directory tree and:
- Use the right Terraform variable file (either x86_64 or aarch64) by copying the specific file to terraform/rhel_vm.tfvars, for example:
cd ..
cp terraform/rhel_vm.tfvars.x86_64 terraform/rhel_vm.tfvars
Note
Take a look at the Terraform variables; you might want to change the region too.
- Run:
./create.sh
First, be sure that you have the latest version of the collection:
ansible-galaxy collection install luisarizmendi.rh_edge_mgmt --upgrade
Note
Even if you have already installed the collection, it is a good idea to run the command above so the collection playbooks are updated if there has been any change since you downloaded it for the first time.
Once you have all the pre-requisites ready, including the Ansible Vault secret file, you need to run the main playbook, providing the Vault password by adding the --ask-vault-pass option:
ansible-playbook -vvi inventory --ask-vault-pass playbooks/main.yml
These pre-flight checks should be performed right after the deployment. You can also use them to double-check that everything is OK before your demo.
- Check the access to the following services (a quick reachability check is sketched after this list):
    - Ansible Automation Platform Controller: https://<edge management IP>:8443
    - Ansible Automation Platform Event-Driven Ansible Controller: https://<edge management IP>:8445
    - Cockpit: https://<edge management IP>:9090
    - Gitea: http://<edge management IP>:3000
- Check the container images in your registry (Quay in our example): go to quay.io, open the 2048 repository and check that the "prod" tag is pointing to "v1". If not, just create a new "prod" tag by pressing the gearwheel on the "v1" label (at the right).
You should also check that the image in the rhde/<environment>/rhde_config/apps/microshift/manifest/2048/app_2048-microshift-2-deploy.yml file on Gitea is v1 and not v3. If this environment was never used it will probably be correctly assigned, but if you already ran the demo the "prod" tag will probably be pointing to "v3".
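A minimal sketch of the service reachability check (the IP is a placeholder for your Edge Management node address; -k skips the self-signed certificate validation):

# Any 2xx or 3xx status code means the service is answering
curl -k -s -o /dev/null -w '%{http_code}\n' https://<edge management IP>:8443
curl -k -s -o /dev/null -w '%{http_code}\n' https://<edge management IP>:8445
curl -k -s -o /dev/null -w '%{http_code}\n' https://<edge management IP>:9090
curl -s -o /dev/null -w '%{http_code}\n' http://<edge management IP>:3000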
Sometimes you won't have the full 120 minutes needed to run the demo. One way to reduce the time is to create the OS images in advance instead of running the builds during the demo.
The demo will need to create at least three OS images (take a look at minute 49:37 in the video):
- The first one used to show how to onboard the device
- The upgraded image without some of the packages required by Greenboot
- The upgraded image but including the required packages
By default, when you "publish" an image, the last one that you created is the one that is used. That behaviour can be changed by replacing the value latest with the version that you want to publish in the device-edge-images/prod-image-deploy.yml file located in Gitea. So you could change that value to 0.0.1, then create the first image with the provided blueprint (just create it, you don't need to publish it until you run the demo), and then create the second and third images using the v2 and v3 blueprints.
Then, during the demo, you can just use the "Publish" task. After showing the onboarding, change the device-edge-images/prod-image-deploy.yml file to version 0.0.2 in order to publish the second image (which was already created in the pre-demo steps) and then do the same with the third image.
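As a sketch of that edit (a hypothetical excerpt; check the actual layout of prod-image-deploy.yml in your Gitea instance, the only grounded detail here is the latest-to-version swap described above):

# device-edge-images/prod-image-deploy.yml (hypothetical excerpt)
# version: latest    <- default: the last image built is the one published
version: 0.0.2       # pin to the pre-built image that should be published next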
Enjoy the lab!