
Manual Setup


Target Audience

This page is for those seeking insight into the inner workings of the Provisioner. It provides a bare-bones approach to setting up the Provisioner manually.

If you are in a hurry, consider following the instructions at: QuickStart-Guide


Prerequisites

Setup Steps

  1. Point to the supported YUM repos.
    Ref: https://github.com/Seagate/cortx-prvsnr/tree/dev/files/etc/yum.repos.d
    NOTE: Any repos not listed at the above location should be removed.
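    A minimal sketch of pruning unlisted repos (the backup directory and the <unlisted-repo> placeholder are illustrative, not prescribed by the Provisioner):
    $ mkdir -p /etc/yum.repos.d.bak
    $ mv /etc/yum.repos.d/<unlisted-repo>.repo /etc/yum.repos.d.bak/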
  2. Clean yum database
    yum clean all  
    rm -rf /var/cache/yum  
    
    Note: You might need to downgrade a few packages (such as glibc and its dependencies) if you run into setup issues.
  3. Install and set up Git on the target node [for Dev setup]
    $ yum install -y git
  4. Update hostname (if necessary):
    $ hostnamectl set-hostname <hostname>
  5. Install the Provisioner CLI rpm (cortx-prvsnr-cli) from the release repo on all nodes to be provisioned. (This rpm is required to set up passwordless SSH communication across nodes; it can be skipped for a single-node setup.)
    yum install -y yum-utils  
    yum-config-manager --add-repo "http://<cortx_release_repo>/releases/cortx/github/master/rhel-7.7.1908/last_successful/"  
    yum install -y cortx-prvsnr-cli --nogpgcheck
    rm -rf /etc/yum.repos.d/cortx-storage*
    
  6. Modify the contents of the file ~/.ssh/config on the primary node as suggested below:
    Host srvnode-1 <node-1 hostname> <node-1 fqdn>
        HostName <node-1 hostname or mgmt IP>
        User root
        UserKnownHostsFile /dev/null
        StrictHostKeyChecking no
        IdentityFile ~/.ssh/id_rsa_prvsnr
        IdentitiesOnly yes
    
    Host srvnode-2 <node-2 hostname> <node-2 fqdn>
        HostName <node-2 hostname or mgmt IP>
        User root
        UserKnownHostsFile /dev/null
        StrictHostKeyChecking no
        IdentityFile ~/.ssh/id_rsa_prvsnr
        IdentitiesOnly yes
    
    Copy /root/.ssh/config to the other nodes (see the example below).
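    For instance, assuming passwordless SSH to srvnode-2 is already in place (set up by the cortx-prvsnr-cli rpm installed earlier):
    $ scp /root/.ssh/config srvnode-2:/root/.ssh/config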
  7. Install Provisioner rpm (cortx-prvsnr) from the cortx release repo:
    yum install -y yum-utils  
    yum-config-manager --add-repo "http://<cortx_release_repo>/releases/cortx/github/master/rhel-7.7.1908/last_successful/"  
    yum install -y cortx-prvsnr --nogpgcheck
    rm -rf /etc/yum.repos.d/cortx-storage*
    
    NOTE: Replace the rpm name with the appropriate rpm file in the above command, if required.
  8. Install SaltStack:
    $ yum install -y salt-master salt-minion
  9. Copy Salt config files:
    $ cp /opt/seagate/cortx/provisioner/files/etc/salt/master /etc/salt/master
    $ cp /opt/seagate/cortx/provisioner/files/etc/salt/minion /etc/salt/minion
  10. Setup minion_id
    $ vim /etc/salt/minion_id
    NOTE: The minion ID for the first node is srvnode-1. For subsequent nodes it is srvnode-n, where n is the node number, e.g. srvnode-2 for the second node, and so on.
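    As an alternative to editing the file, you can write the ID directly; for example, on the first node:
    $ echo "srvnode-1" > /etc/salt/minion_id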

  11. Set the salt-master FQDN in /etc/salt/minion

    # Set the location of the salt master server. If the master server cannot be
    # resolved, then the minion will fail to start.
    master: srvnode-1                   # <== Change this value to match salt-master fqdn
    
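    As a sketch, the same change can be made non-interactively (assuming the master: line is present and uncommented, as in the copied config):
    $ sed -i "s/^master:.*/master: <salt-master fqdn>/" /etc/salt/minion
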
  12. Restart the Salt services:
    $ systemctl restart salt-minion
    $ systemctl restart salt-master

  13. Register the nodes with the salt-master
    $ salt-key -L
    $ salt-key -A -y
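    To verify that all minions are registered and responding:
    $ salt '*' test.ping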

  14. Rescan SAS HBA (For HW node with attached storage enclosure):
    $ yum install sg3_utils -y
    $ rescan-scsi-bus.sh

  15. Install and configure multipath (For HW node with attached storage enclosure):
    On each node

    1. $ yum install -y device-mapper-multipath
    2. $ mpathconf --enable
    3. $ systemctl stop multipathd
    4. Update /etc/multipath.conf with
    defaults {
        polling_interval 10
        max_fds 8192
        user_friendly_names yes
        find_multipaths yes
    }
    
    devices {
        device {
            vendor "SEAGATE"
            product "*"
            path_grouping_policy group_by_prio
            uid_attribute "ID_SERIAL"
            prio alua
            path_selector "service-time 0"
            path_checker tur
            hardware_handler "1 alua"
            failback immediate
            rr_weight uniform
            rr_min_io_rq 1
            no_path_retry 18
        }
    }
    blacklist {
    }
    
    5. $ systemctl start multipathd
    6. $ multipath -ll|grep -B2 prio=50|grep mpath|sort -k2.2
  16. Identify and register the list of storage SCSI devices with the Provisioner (For HW node with attached storage enclosure):

    1. Select the device list for srvnode-1 (use the command below to identify the devices):
      $ multipath -ll|grep mpath|sort -k2.2
      Sample:
      storage:
        metadata_devices:              # Device for /var/mero and possibly SWAP
          - /dev/disk/by-id/dm-name-mpathaa
        data_devices:                 # Data device/LUN from storage enclosure
          - /dev/disk/by-id/dm-name-mpathab
          - /dev/disk/by-id/dm-name-mpathac
          - /dev/disk/by-id/dm-name-mpathad
          - /dev/disk/by-id/dm-name-mpathae
          - /dev/disk/by-id/dm-name-mpathaf
          - /dev/disk/by-id/dm-name-mpathag
          - /dev/disk/by-id/dm-name-mpathah
      
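      To map the mpath names reported by multipath -ll to the /dev/disk/by-id paths used above, list the symlinks (assuming user_friendly_names is enabled, as configured in the multipath step):
      $ ls -l /dev/disk/by-id/ | grep dm-name-mpath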
    2. Repeat for other nodes
    3. Update the network interfaces, netmask, and gateway under the network section:
      network:
        mgmt:                  # Management network interfaces
          interface:
            - eno1
            - eno2
          ipaddr: 
          netmask: 
        data:                  # Data network interfaces
          interface: 
            - enp175s0f0
            - enp175s0f1
          ipaddr: 172.19.10.101
          netmask: 255.255.255.0
        gateway_ip: 10.230.160.1              # Gateway IP of network
      
      If bond0 is already configured, just update the interfaces as below:
      network:
        mgmt:                  # Management network interfaces
          interface:
            - eno1
          ipaddr: 
          netmask: 
        data:                  # Data network interfaces
          interface: 
            - bond0
          ipaddr:
          netmask: 255.255.255.0
        gateway_ip: 10.230.160.1              # Gateway IP of network
      
      Update both sections for a dual-node cluster.
    4. Update /opt/seagate/cortx/provisioner/pillar/components/cluster.sls
      cluster:  
          cluster_ip: 172.19.200.100         <------------ Update with static IP for public data network provided by infra team  
          type: dual                           # single/dual/3_node/generic  
          mgmt_vip:                          <------------ Update with static IP for public network provided by infra team 
          node_list:  
              - srvnode-1  
              - srvnode-2  
          srvnode-1:  
            hostname: sm10-r20.pun.seagate.com  # setup-provisioner fills this
            is_primary: true  
            bmc:
                ip: <BMC_IP>              <--- Auto-updated, so change only if required
                user: <BMC_User>          <--- Update with BMC User if required
                secret: <BMC_Secret>      <--- Update with BMC Password if required
            network:  
                pvt_nw_addr: 192.168.0.0  # Do not change 
                # Parameter is used to configure management network interface if no DHCP is set up.
                nw_search: pun.seagate.com  <----- Default for Pune lab. Change if needed. For LCO lab use colo.seagate.com.
                mgmt:                  # Management network interfaces  
                    interface:  
                    - eno1              <--------- Provide interface identified for mgmt network.  
                    ipaddr:               <----------- Can be left blank if it's DHCP
                    netmask: 255.255.0.0  <----------- Can be updated depending on IP address.
                data:                  # Data network interfaces  
                    interface:   
                    - enp175s0f0         <--------------- first data network interface name  
                    - enp175s0f1         <--------------- second data network interface name
                    ipaddr: 172.19.20.10   <----------- if DHCP is not enabled, put in the public data network static IP received from the infra team
                    netmask: 255.255.0.0   <----------- Can be updated depending on IP address.
                    roaming_ip:               # Keep blank; will be populated by the provisioner  
                    gateway_ip: null         # Gateway IP of network; leave at the default value
      
    5. Update /opt/seagate/cortx/provisioner/pillar/components/storage.sls
      storage:
        enclosure-1:        <-------- ID for the enclosure 
          type: RBOD                      # RBOD/JBOD/Virtual/Other
          controller:
            primary:
              ip: 10.0.0.2    <-------- ip address of controller A (if without in-band)
              port: 80
            secondary:
              ip: 10.0.0.3    <-------- ip address of controller B (if without in-band)
              port: 80
            user: manage          <-------- Controller access user
            secret: '!passwd'     <-------- Controller access secret
      
    6. Update /opt/seagate/cortx/provisioner/pillar/components/release.sls
    7. Update pillar data
      $ salt "*" saltutil.refresh_pillar
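      To confirm the refreshed pillar data is visible on the minions (cluster is one of the pillar keys edited above):
      $ salt '*' pillar.get cluster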

Use Salt to Provision CORTX Components

  1. Setup System
    $ salt '*' state.apply components.system

  2. Setup Storage
    $ salt '*' state.apply components.system.storage

  3. Setup Network (if bond0 is absent)
    $ salt '*' state.apply components.system.network

  4. Setup 3rd party components

    1. Build SSL certs rpm package for S3server
      $ salt '*' state.apply components.misc_pkgs.build_ssl_cert_rpms
    2. Setup Corosync-Pacemaker
      $ salt 'srvnode-2' state.apply components.ha.corosync-pacemaker
      $ salt 'srvnode-1' state.apply components.ha.corosync-pacemaker
    3. Setup Rsyslog
      $ salt '*' state.apply components.misc_pkgs.rsyslog
    4. Setup ElasticSearch
      $ salt '*' state.apply components.misc_pkgs.elasticsearch
    5. Setup HAProxy
      $ salt '*' state.apply components.ha.haproxy
    6. Setup OpenLDAP
      $ salt '*' state.apply components.misc_pkgs.openldap
    7. Setup Consul
      $ salt '*' state.apply components.misc_pkgs.consul
    8. Setup Kibana
      $ salt '*' state.apply components.misc_pkgs.kibana
    9. Setup node.js
      $ salt '*' state.apply components.misc_pkgs.nodejs
    10. Setup RabbitMQ
      $ salt 'srvnode-1' state.apply components.misc_pkgs.rabbitmq
      $ salt 'srvnode-2' state.apply components.misc_pkgs.rabbitmq
    11. Setup statsd
      $ salt '*' state.apply components.misc_pkgs.statsd
  5. Setup IO path components

    1. Setup Lustre Client
      $ salt '*' state.apply components.misc_pkgs.lustre
    2. Setup CORTX Core
      $ salt '*' state.apply components.motr
    3. Setup S3Server
      $ salt '*' state.apply components.s3server
    4. Setup Hare
      $ salt 'srvnode-2' state.apply components.hare
      $ salt 'srvnode-1' state.apply components.hare
      $ salt '*' state.apply components.ha.iostack-ha
    5. Check cluster status
    $ pcs status   
    
  6. Setup Management Stack

    1. Setup SSPL
      $ salt '*' state.apply components.sspl
    2. Setup CSM
      $ salt '*' state.apply components.csm
    3. Setup UDS
      $ salt '*' state.apply components.uds
    4. Add SSPL, CSM & UDS to HA
      $ salt '*' state.apply components.post_setup

The setup is now ready.
