
Provisioner Setup


Provisioner CLI commands for a single-node VM provisioner setup.

Before You Start

Checklist:

  • Are the attached devices visible when you run lsblk?
  • Do both systems in your setup have valid hostnames, and are those hostnames reachable with ping?
  • Are IPs assigned to all NICs (eth0, eth1 and eth2)?
  • Identify the primary node and run the commands below on it.
    NOTE: For a single-node VM, the VM node itself is treated as the primary node.
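
  A minimal sketch of these pre-flight checks, assuming the interface names above and a placeholder peer hostname:

    lsblk                               # attached block devices should be listed
    ping -c 3 <other-node-hostname>     # hostnames should resolve and respond
    ip addr show eth0 eth1 eth2         # each NIC should have an IP assigned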

VM Preparation for Deployment

  1. Set root user password on all nodes:

    sudo passwd root
    
  2. SSH Connectivity Check

    ssh root@node exit
    
  3. Storage Configuration Check: the VM should have exactly 2 attached disks

    lsblk -d|grep -E 'sdb|sdc'|wc -l
    
  4. Install Provisioner API

    Production Environment

    1. Set repository URL
      CORTX_RELEASE_REPO="<URL to Cortx R1 stack release repo>"
      
    2. Install Provisioner API and requisite packages
      yum install -y yum-utils
      yum-config-manager --add-repo "${CORTX_RELEASE_REPO}/3rd_party/"
      yum install --nogpgcheck -y python3 python36-m2crypto salt salt-master salt-minion
      rm -rf /etc/yum.repos.d/*3rd_party*.repo
      yum-config-manager --add-repo "${CORTX_RELEASE_REPO}/cortx_iso/"
      yum install --nogpgcheck -y python36-cortx-prvsnr
      rm -rf /etc/yum.repos.d/*cortx_iso*.repo
      yum clean all
      rm -rf /var/cache/yum/
      
      # pip3 dependencies
      pip3 install jsonschema==3.0.2 requests
      
  5. Verify the provisioner version (0.36.0 or above)

    provisioner --version
    
  6. Create the config.ini file at a convenient location (here, ~/config.ini):
    IMPORTANT NOTE: Verify every detail in this file against your node.
    Verify that the interface names match those on your node (a sketch for listing them follows the samples below).

    Update the required details in ~/config.ini, using the sample config.ini below as a reference.

    vi ~/config.ini
    

    Sample config.ini for single node VM

    [storage]
    type=other
    
    [srvnode-1]
    hostname=host1.localdomain
    network.data.private_ip=None
    network.data.public_interfaces=eth1,eth2
    network.data.private_interfaces=eth3,eth4
    network.mgmt.interfaces=eth0
    bmc.user=None
    bmc.secret=None
    

    Sample config.ini for multi-node VM (3 nodes)

    [storage]
    type=other
    
    [srvnode-1]
    hostname=host1.localdomain
    network.data.private_ip=None
    network.data.public_interfaces=eth1,eth2
    network.data.private_interfaces=eth3,eth4
    network.mgmt.interfaces=eth0
    bmc.user=None
    bmc.secret=None
    
    [srvnode-2]
    hostname=host2.localdomain
    network.data.private_ip=None
    network.data.public_interfaces=eth1,eth2
    network.data.private_interfaces=eth3,eth4
    network.mgmt.interfaces=eth0
    bmc.secret=None
    bmc.user=None
    
    [srvnode-3]
    hostname=host3.localdomain
    network.data.private_ip=None
    network.data.public_interfaces=eth1,eth2
    network.data.private_interfaces=eth3,eth4
    network.mgmt.interfaces=eth0
    bmc.secret=None
    bmc.user=None
    

    NOTE: private_ip, bmc.user and bmc.secret should be None for a VM.
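
    To confirm that the interface names used in config.ini (eth0 through eth4 in the samples) exist on your node, you can list them first; a minimal sketch using iproute2:

    ip -br addr show    # brief view: interface name, state, assigned IPs
    ip link show        # full link details if MAC addresses are needed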

Bootstrap Provisioner

  1. Run the setup_provisioner CLI command:
    provisioner setup_provisioner srvnode-1:$(hostname -f) \
    --logfile --logfile-filename /var/log/seagate/provisioner/setup.log --source rpm --config-path ~/config.ini \
    --dist-type bundle --target-build ${CORTX_RELEASE_REPO} --pypi-repo
    
  2. Update pillar data
    provisioner configure_setup /root/config.ini 1
    
  3. Encrypt all passwords
    salt-call state.apply components.system.config.pillar_encrypt
    
  4. Export pillar data as JSON
    provisioner pillar_export
    

Bootstrap Validation

Once the provisioner setup is done, verify the Salt master setup on the nodes using the following checklist:

salt '*' test.ping  
salt "*" service.stop puppet
salt "*" service.disable puppet
salt '*' pillar.get release  
salt '*' grains.get node_id  
salt '*' grains.get cluster_id  
salt '*' grains.get roles  
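
For reference, salt '*' test.ping should return True for every minion. On a single-node VM the output looks roughly like this (the minion ID srvnode-1 is assumed from config.ini):

salt '*' test.ping
# srvnode-1:
#     True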

System States Deployment

Deploy the system-related components:

provisioner deploy_vm --states system --setup-type single

Prereq/3rd party States Deployment

Deploy all 3rd-party components:

provisioner deploy_vm --states prereq --setup-type single

NOTE :

  1. --target-build should be a link to the base URL of the hosted 3rd_party and cortx_iso repos (see the layout sketch after these notes).

  2. For --target-build, use builds from the URL below based on the OS:
    centos-7.8.2003: <build_url>/centos-7.8.2003/ OR contact the Cortx RE team for the latest URL.

  3. This command will prompt for each node's root password during the initial cluster setup.
    This is a one-time activity required to set up password-less SSH across the nodes.

  4. To set up a cluster of more than 3 nodes, append --name <setup_profile_name> to the auto_deploy_vm command's input parameters.
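
An illustrative sketch of the layout --target-build is expected to point at, inferred from the install steps above (the URL itself is a placeholder):

CORTX_RELEASE_REPO="<build_url>/centos-7.8.2003"
# ${CORTX_RELEASE_REPO}/3rd_party/   <- added as a yum repo for Salt and other dependencies
# ${CORTX_RELEASE_REPO}/cortx_iso/   <- provides python36-cortx-prvsnr and other CORTX rpms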

Deploy VM Manually:

Manual deployment of a VM consists of the following steps from Auto-Deploy, which can be executed individually.
NOTE: Ensure the VM Preparation for Deployment steps have completed successfully before proceeding.

Bootstrap VM(s): Run the setup_provisioner CLI command:

Single Node VM: Bootstrap

If using remotely hosted repos:

provisioner setup_provisioner srvnode-1:$(hostname -f) \
--logfile --logfile-filename /var/log/seagate/provisioner/setup.log --source rpm --config-path ~/config.ini \
--dist-type bundle --target-build ${CORTX_RELEASE_REPO}
provisioner configure_setup ./config.ini 1
provisioner pillar_export

Multi Node VM: Bootstrap

If using remotely hosted repos:

provisioner setup_provisioner --console-formatter full --logfile \
    --logfile-filename /var/log/seagate/provisioner/setup.log --source rpm \
    --config-path ~/config.ini --ha \
    --dist-type bundle \
    --target-build ${CORTX_RELEASE_REPO} \
    srvnode-1:<fqdn:primary_hostname> \
    srvnode-2:<fqdn:secondary_hostname> \
    srvnode-3:<fqdn:secondary_hostname>

Example:

provisioner setup_provisioner \
--logfile --logfile-filename /var/log/seagate/provisioner/setup.log --source rpm --config-path ~/config.ini \
--ha --dist-type bundle --target-build ${CORTX_RELEASE_REPO} \
srvnode-1:host1.localdomain srvnode-2:host2.localdomain srvnode-3:host3.localdomain

NOTE :

  1. --target-build should be a link to the base URL of the hosted 3rd_party and cortx_iso repos.

  2. For --target-build, use builds from the URL below based on the OS:
    centos-7.8.2003: <build_url>/centos-7.8.2003/ OR contact the Cortx RE team for the latest URL.

  3. This command will prompt for each node's root password during the initial cluster setup.
    This is a one-time activity required to set up password-less SSH across the nodes.

  4. To set up a cluster of more than 3 nodes, append --name <setup_profile_name> to the auto_deploy_vm command's input parameters.

VM Teardown

Teardown Provisioner Deployed Environment

Execute the script:

/opt/seagate/cortx/provisioner/cli/destroy-vm --ctrlpath-states --iopath-states --prereq-states --system-states

Teardown Provisioner Bootstrapped Environment

To tear down the bootstrap step:

  1. Unmount gluster volumes
    umount $(mount -l | grep gluster | cut -d ' ' -f3)
    
  2. Stop services
    systemctl stop glustersharedstorage glusterfsd glusterd
    systemctl stop salt-minion salt-master
    
  3. Uninstall the rpms
    yum erase -y cortx-prvsnr cortx-prvsnr-cli      # Cortx Provisioner packages
    yum erase -y glusterfs-fuse glusterfs-server    # Gluster FS packages
    yum erase -y salt-minion salt-master salt-api   # Salt packages
    yum erase -y python36-m2crypto                  # Salt dependency
    yum erase -y python36-cortx-prvsnr              # Cortx Provisioner API packages
    yum autoremove -y
    yum clean all
    rm -rf /var/cache/yum
    # Remove cortx-py-utils
    pip3 uninstall -y cortx-py-utils
    # Cleanup pip packages
    pip3 freeze|xargs pip3 uninstall -y
    
  4. Cleanup bricks and other directories
    # Cortx software dirs
    rm -rf /opt/seagate/cortx
    rm -rf /opt/seagate/cortx_configs
    rm -rf /opt/seagate
    # Bricks cleanup
    test -e /var/lib/seagate && rm -rf /var/lib/seagate
    test -e /srv/glusterfs && rm -rf /srv/glusterfs
    test -e /var/cache/salt && rm -rf /var/cache/salt
    # Cleanup Salt
    rm -rf /var/cache/salt
    rm -rf /etc/salt
    # Cleanup Provisioner profile directory
    rm -rf /root/.provisioner
    
  5. Cleanup SSH
    rm -rf /root/.ssh
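
    Optionally, a minimal post-teardown check sketch (the services and paths are those removed in the steps above):
    systemctl is-active salt-master salt-minion glusterd      # none should report active
    ls -d /opt/seagate /etc/salt /srv/glusterfs 2>/dev/null   # should print nothing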
    

Known issues:

  1. Known Issue 19: LVM issue - auto-deploy fails during provisioning of the storage component (EOS-12289)