Preparing your workstation

Red Hat subscriptions

  • Some deployments require a Red Hat Customer Portal account with appropriate subscriptions. This is not required for the playbooks themselves.

Note
Red Hat employee subscriptions can be used

AgnosticD with Execution Environment

It is possible to run AgnosticD in containers. This way you only need a container runtime, usually Podman on Linux and Docker on macOS. The tools/execution_environments directory contains build scripts for our supported Execution Environments (EE).
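
For example, a minimal sketch of running a deployment through Podman (the image name, mount paths, playbook path, and variables below are assumptions; adjust them to the EE you built from tools/execution_environments and to the config you deploy):

# Run the playbook inside a prebuilt EE image; paths and names are placeholders
podman run --rm -it \
  -v "${PWD}:/runner/project:Z" \
  -v "${HOME}/secrets:/secrets:ro,Z" \
  <your_ee_image> \
  ansible-playbook /runner/project/ansible/main.yml \
    -e @/secrets/my_secrets.yml \
    -e env_type=<your_config>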

Workstation setup for OpenStack

Please see the OpenStack tutorial.

Workstation setup for AWS

Prerequisites

This guide covers the packages and repositories you need to install the tools. An existing AWS account is also required.

Steps

  • Python

  • Python Boto version 2.41 or greater

  • Git (any version will do)

  • Ansible version 2.1.2 or greater

  • awscli bundle, tested with version 1.11.32

Python and the Python dependencies may be installed via your OS's package manager (e.g. python2-boto on Fedora/CentOS/RHEL) or via pip. A Python virtualenv also works (see the sketch after the note below).

Note
On Fedora, all dependencies are packaged and can be installed via dnf install wget git awscli python3-boto3 ansible ansible-lint yamllint (botocore and Python will be pulled in automatically as dependencies). The lint tools are optional but recommended to check the quality of your code.
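
If you prefer to keep the Python tooling out of the system packages, a minimal virtualenv sketch (the directory name is arbitrary and the unpinned package versions are an assumption; pin them as needed):

# Create and activate a virtualenv, then install the tools listed above
python3 -m venv ~/venv-agnosticd
source ~/venv-agnosticd/bin/activate
pip install --upgrade pip
pip install ansible boto3 awscli
pip install ansible-lint yamllint   # optional lint tools
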
Example script to install required software
# Install basic packages
yum install -y wget python python-boto unzip python2-boto3.noarch tmux git ansible

# Another option to configure python boto is:
git clone https://github.com/boto/boto.git
cd boto
python setup.py install

# Install boto3
pip install boto3

# Install pywinrm if you plan to deploy Windows VMs
# pip install pywinrm

# Enable epel repositories for Ansible
cd /tmp
wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
sudo yum -y install `ls *epel*.rpm`

# Install Ansible and check the installed version (2.2.0.0 or greater required)
yum install -y ansible
ansible --version


# Install the AWS CLI
curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
unzip awscli-bundle.zip
sudo ./awscli-bundle/install -i /usr/local/aws -b /bin/aws
aws --version
macOS installation steps:
# Install Python3
brew install python

# For python2 do
brew install python@2

# Depending on whether you installed python3 or python2, use the pip3 or pip command
pip3 install boto3

# Install pywinrm if you plan to deploy Windows VMs
# pip3 install pywinrm

# Install Ansible
pip3 install ansible

# Install awscli
brew install awscli

Configure the EC2 Credentials

To deploy AgnosticD configs on AWS, you will need an IAM user in your AWS account. If you are new to AWS, follow the AWS documentation on creating an IAM user.
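
If you prefer the command line and already have an administrative profile configured, a minimal sketch of creating that IAM user and an access key (the user name agnosticd is only an example):

aws iam create-user --user-name agnosticd
aws iam create-access-key --user-name agnosticd
# Note the AccessKeyId and SecretAccessKey from the output for the next step.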

  • You will need to place your EC2 credentials in the ~/.aws/credentials file:

mkdir ~/.aws
cat << EOF >>  ~/.aws/credentials
[default]
aws_access_key_id = CHANGE_ME
aws_secret_access_key = CHANGE_ME

EOF
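
Once the file is in place, a quick sanity check (assumes the awscli installed earlier):

aws sts get-caller-identity
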
  • Add the SSH key to the SSH agent (optional). If your operating system has an SSH agent and you are not using your default configured SSH key, you will need to add the private key you use with your EC2 instances to your SSH agent:

    ssh-add <path to key file>
Note
If you use an SSH config that specifies which keys to use for which hosts, this step may not be necessary.
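
For example, a hedged sketch of such an ~/.ssh/config entry (the host pattern, user, and key path are placeholders):

cat >> ~/.ssh/config <<EOF
Host *.example.com
    User ec2-user
    IdentityFile ~/.ssh/ocpworkshop.pem
EOF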

AWS Permissions and Policies

AWS credentials for the account above must be used with the AWS command line tool (detailed below).

  • An AWS IAM account with the following permissions:

    • Policies can be defined for Users, Groups or Roles

    • Navigate to: AWS Dashboard → Identity & Access Management → Select Users or Groups or Roles → Permissions → Inline Policies → Create Policy → Custom Policy

    • Policy Name: openshift (your preference)

    • Policy Document:

      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Sid": "Stmt1459269951000",
                  "Effect": "Allow",
                  "Action": [
                      "cloudformation:*",
                      "iam:*",
                      "route53:*",
                      "elasticloadbalancing:*",
                      "ec2:*",
                      "cloudwatch:*",
                      "autoscaling:*",
                      "s3:*"
                  ],
                  "Resource": [
                      "*"
                  ]
              }
          ]
      }
Note
Finer-grained permissions are possible, and pull requests are welcome.
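
If you prefer the CLI over the console, a hedged sketch of attaching the same policy as an inline user policy (it assumes the JSON above is saved as openshift-policy.json; the user name is only an example):

aws iam put-user-policy \
  --user-name agnosticd \
  --policy-name openshift \
  --policy-document file://openshift-policy.json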

AWS existing resources

  • A Route 53 public hosted zone is required so the scripts can create the DNS entries for the resources they provision. Two DNS entries will be created for workshops:

    • master.guid.domain.tld - a DNS entry pointing to the master

    • *.cloudapps.guid.domain.tld - a wildcard DNS entry pointing to the router/infrastructure node

  • An EC2 SSH keypair should be created in advance and you should save the key file to your system. To do so, follow these steps:

    $ REGION=us-west-1
    $ KEYNAME=ocpworkshop
    $ openssl genrsa -out ~/.ssh/${KEYNAME}.pem 2048
    $ openssl rsa -in ~/.ssh/${KEYNAME}.pem -pubout > ~/.ssh/${KEYNAME}.pub
    $ chmod 400 ~/.ssh/${KEYNAME}.pub
    $ chmod 400 ~/.ssh/${KEYNAME}.pem
    $ touch ~/.ssh/config
    $ chmod 600 ~/.ssh/config

Now, test connecting to your AWS account with your previously created credentials and your key:

$ aws ec2 import-key-pair --key-name ${KEYNAME} --region=$REGION --output=text --public-key-material "`cat ~/.ssh/${KEYNAME}.pub | grep -v PUBLIC`"

Caution
Key pairs are created per region; you will need to specify a different key pair for each region or duplicate the key pair into every region.

To duplicate the key pair into several regions at once:

$ REGIONS="ap-southeast-1 ap-southeast-2 OTHER_REGIONS..."
$ for REGION in `echo ${REGIONS}` ;
  do
    aws ec2 import-key-pair --key-name ${KEYNAME} --region=$REGION --output=text --public-key-material "`cat ~/.ssh/${KEYNAME}.pub | grep -v PUBLIC`"
  done
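
To confirm a key pair was imported into a given region, an optional check:

$ aws ec2 describe-key-pairs --key-names ${KEYNAME} --region ${REGION}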

Workstation setup for Azure

Prerequisites

This guide covers the packages and repositories you need to install the tools. An existing Azure account is also required.

Steps

If you want to deploy on Azure, you will need the Azure CLI.

In a nutshell (tested on Fedora 28) - Azure CLI (system-wide)
# Install the azure-cli system-wide
sudo -i
rpm --import https://packages.microsoft.com/keys/microsoft.asc
cat >> /etc/yum.repos.d/azure-cli.repo <<EOF
[azure-cli]
name=Azure CLI
baseurl=https://packages.microsoft.com/yumrepos/azure-cli
enabled=1
gpgcheck=1
gpgkey=https://packages.microsoft.com/keys/microsoft.asc
EOF

yum check-update
yum install -y azure-cli
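
A quick check that the CLI installed correctly:

az --version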

We recommend installing the Ansible Azure dependencies in a virtualenv.

In a nutshell (tested on Fedora 28) - Azure Ansible module (use a virtualenv)
# /!\ careful this will update ansible as well
# Use a virtualenv for those:
pip install --upgrade pip
pip install --upgrade --force ansible[azure]
Note
--force is used here because of a known issue.

Service principal

It’s better to use a service principal instead of your main credentials. Refer to the official documentation.

In a nutshell
az login
az ad sp create-for-rbac
az login --service-principal -u <user> -p <password-or-cert> --tenant <tenant>
env_secret_vars.yml
azure_service_principal: "service principal client id"
azure_password: "service principal password or cert"
azure_tenant: "tenant ID"
azure_region: "Azure location, e.g. westeurope"
azure_subscription_id: "Subscription id"
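
These variables are typically passed to your deployment as an extra-vars file; a hedged example (the playbook path, env_type, and guid values are placeholders, adjust them to your config):

ansible-playbook ansible/main.yml \
  -e @env_secret_vars.yml \
  -e env_type=<your_config> \
  -e guid=<your_guid>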

Virtualenv

If you want to use a virtualenv, you can try and adapt this:

cd ansible
mkdir ~/virtualenv-aad
virtualenv ~/virtualenv-aad -p python2.7
. ~/virtualenv-aad/bin/activate
export CC=gcc-5
pip install -r requirements.txt
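
A quick, optional check that the virtualenv has what it needs (the grep is just a convenience):

ansible --version
pip list | grep -i azure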