Manual Setup
This page is for those who are seeking insight into the inner workings of the Provisioner. It takes a bare-bones approach to setting up the Provisioner.
If you are in a hurry, consider following the instructions in the QuickStart-Guide.
- Point to the supported YUM repos.
Ref: https://github.com/Seagate/cortx-prvsnr/tree/dev/files/etc/yum.repos.d
NOTE: Any repos not mentioned at the above location should be removed.
- Clean the yum database:
Note: You might need to downgrade a few packages (such as glibc and its dependencies) if you run into setup issues.
$yum clean all
$rm -rf /var/cache/yum
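To confirm that only the supported repos remain before proceeding, you can list what yum currently sees (a quick check, not part of the original steps):
  # List the repositories yum will actually use; anything outside the
  # supported set from the link above should have been removed.
  $yum repolist enabled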
- Install and set up Git on the target node [For Dev setup]:
$yum install -y git
- Update hostname (if necessary):
$hostnamectl set-hostname <hostname>
- Install the Provisioner CLI rpm (cortx-prvsnr-cli) from the release repo on all nodes to be provisioned. This rpm is required to set up passwordless ssh communication across nodes and can be skipped for a single-node setup:
  $yum install -y yum-utils
  $yum-config-manager --add-repo "http://<cortx_release_repo>/releases/cortx/github/master/rhel-7.7.1908/last_successful/"
  $yum install -y cortx-prvsnr-cli --nogpgcheck
  $rm -rf /etc/yum.repos.d/cortx-storage*
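If you want to confirm the CLI package landed before moving on, a quick check (not part of the original steps):
  # Verify the package is installed and show its version and source repo
  $rpm -qi cortx-prvsnr-cli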
- Modify the contents of ~/.ssh/config on the primary node as suggested below:

      Host srvnode-1 <node-1 hostname> <node-1 fqdn>
          HostName <node-1 hostname or mgmt IP>
          User root
          UserKnownHostsFile /dev/null
          StrictHostKeyChecking no
          IdentityFile ~/.ssh/id_rsa_prvsnr
          IdentitiesOnly yes
      Host srvnode-2 <node-2 hostname> <node-2 fqdn>
          HostName <node-2 hostname or mgmt IP>
          User root
          UserKnownHostsFile /dev/null
          StrictHostKeyChecking no
          IdentityFile ~/.ssh/id_rsa_prvsnr
          IdentitiesOnly yes
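The config above refers to ~/.ssh/id_rsa_prvsnr. That key is normally created when passwordless ssh is set up by the cortx-prvsnr-cli tooling; if it is missing in your environment, a minimal sketch for creating and distributing it by hand (hostnames are placeholders):
  # Create the key pair referenced by IdentityFile above (no passphrase)
  $ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa_prvsnr
  # Push the public key to the second node, then confirm passwordless login works
  $ssh-copy-id -i ~/.ssh/id_rsa_prvsnr.pub root@<node-2 hostname>
  $ssh srvnode-2 hostname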
- Copy /root/.ssh/config to the other nodes.
- Install the Provisioner rpm (cortx-prvsnr) from the CORTX release repo:
NOTE: replace the rpm with the appropriate rpm file in the command below.
  $yum install -y yum-utils
  $yum-config-manager --add-repo "http://<cortx_release_repo>/releases/cortx/github/master/rhel-7.7.1908/last_successful/"
  $yum install -y cortx-prvsnr --nogpgcheck
  $rm -rf /etc/yum.repos.d/cortx-storage*
- Install SaltStack:
$yum install -y salt-master salt-minion
- Copy Salt config files:
$ cp /opt/seagate/cortx/provisioner/files/etc/salt/master /etc/salt/master
$ cp /opt/seagate/cortx/provisioner/files/etc/salt/minion /etc/salt/minion
- Setup minion_id:
$vim /etc/salt/minion_id
NOTE: The minion id for the first node is srvnode-1. For subsequent nodes it is srvnode-n, where n is the node number.
E.g. srvnode-2 for the second node, and so on.
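Instead of editing the file in vim, you can also write the minion id directly, for example on the first node:
  # Set the minion id non-interactively (use srvnode-2, srvnode-3, ... on the other nodes)
  $echo "srvnode-1" > /etc/salt/minion_id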
- Set the salt-master fqdn in /etc/salt/minion:
      # Set the location of the salt master server. If the master server cannot be
      # resolved, then the minion will fail to start.
      master: srvnode-1    # <== Change this value to match the salt-master fqdn
- Restart the Salt services:
$systemctl restart salt-minion
$systemctl restart salt-master
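Depending on the base OS image, the Salt services may not be enabled to start on boot; if that is the case in your environment, an optional extra step (not in the original instructions):
  # Make sure both services come back after a reboot
  $systemctl enable salt-minion
  $systemctl enable salt-master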
- Register the nodes with salt-master:
$salt-key -L
$salt-key -A -y
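Once the keys are accepted, a quick way to confirm the master can reach every registered minion (not part of the original steps):
  # Each srvnode-* minion should answer with True
  $salt '*' test.ping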
- Rescan SAS HBA (For HW node with attached storage enclosure):
$yum install sg3_utils -y
$rescan-scsi-bus.sh
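To confirm the enclosure LUNs became visible after the rescan, you can list the SCSI devices (a quick check, not in the original steps):
  # The enclosure volumes should now show up as SCSI disks
  $lsblk -S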
- Install multipath and configure (For HW node with attached storage enclosure). On each node:
  - $yum install -y device-mapper-multipath
  - $mpathconf --enable
  - $systemctl stop multipathd
  - Update /etc/multipath.conf with:

        defaults {
            polling_interval 10
            max_fds 8192
            user_friendly_names yes
            find_multipaths yes
        }
        devices {
            device {
                vendor "SEAGATE"
                product "*"
                path_grouping_policy group_by_prio
                uid_attribute "ID_SERIAL"
                prio alua
                path_selector "service-time 0"
                path_checker tur
                hardware_handler "1 alua"
                failback immediate
                rr_weight uniform
                rr_min_io_rq 1
                no_path_retry 18
            }
        }
        blacklist {
        }

  - $systemctl start multipathd
  - $multipath -ll | grep -B2 prio=50 | grep mpath | sort -k2.2
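As a sanity check after multipathd is running, you can confirm the number of mpath devices matches the number of LUNs mapped from the enclosure (not part of the original steps):
  # Count the multipath devices that were created
  $multipath -ll | grep -c mpath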
- Identify and register the list of storage SCSI devices with the Provisioner (For HW node with attached storage enclosure):
  - Select the device list for srvnode-1 (use the command below to identify the devices):
$multipath -ll|grep mpath|sort -k2.2
  Sample:

      storage:
        metadata_devices:          # Device for /var/mero and possibly SWAP
          - /dev/disk/by-id/dm-name-mpathaa
        data_devices:              # Data device/LUN from storage enclosure
          - /dev/disk/by-id/dm-name-mpathab
          - /dev/disk/by-id/dm-name-mpathac
          - /dev/disk/by-id/dm-name-mpathad
          - /dev/disk/by-id/dm-name-mpathae
          - /dev/disk/by-id/dm-name-mpathaf
          - /dev/disk/by-id/dm-name-mpathag
          - /dev/disk/by-id/dm-name-mpathah
  - Repeat for the other nodes (see the sketch below for one way to generate this list).
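A minimal sketch for generating the device list in the pillar format shown above from the multipath output (assumes user_friendly_names from the multipath config, so device maps are named mpathXX):
  # Emit each multipath device in the /dev/disk/by-id/dm-name-* form used by the pillar
  $for m in $(multipath -ll | grep mpath | sort -k2.2 | awk '{print $1}'); do echo "  - /dev/disk/by-id/dm-name-$m"; done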
- Update the network interfaces, netmask and gateway under the network section:

      network:
        mgmt:                      # Management network interfaces
          interface:
            - eno1
            - eno2
          ipaddr:
          netmask:
        data:                      # Data network interfaces
          interface:
            - enp175s0f0
            - enp175s0f1
          ipaddr: 172.19.10.101
          netmask: 255.255.255.0
          gateway_ip: 10.230.160.1 # Gateway IP of network

  If you find bond0 already configured, just update the interfaces as below (update both sections for a dual cluster):

      network:
        mgmt:                      # Management network interfaces
          interface:
            - eno1
          ipaddr:
          netmask:
        data:                      # Data network interfaces
          interface:
            - bond0
          ipaddr:
          netmask: 255.255.255.0
          gateway_ip: 10.230.160.1 # Gateway IP of network
- Update /opt/seagate/cortx/provisioner/pillar/components/cluster.sls
      cluster:
        cluster_ip: 172.19.200.100           <------------ Update with static IP for public data network provided by infra team
        type: dual                           # single/dual/3_node/generic
        mgmt_vip:                            <------------ Update with static IP for public network provided by infra team
        node_list:
          - srvnode-1
          - srvnode-2
        srvnode-1:
          hostname: sm10-r20.pun.seagate.com # setup-provisioner fills this
          is_primary: true
          bmc:
            ip: <BMC_IP>                     <--- Auto-updates, so change only if required
            user: <BMC_User>                 <--- Update with BMC user if required
            secret: <BMC_Secret>             <--- Update with BMC password if required
          network:
            pvt_nw_addr: 192.168.0.0         # Do not change. Used to configure the management network interface if no DHCP is set up.
            nw_search: pun.seagate.com       <----- Default for Pune lab. Change if needed. For LCO lab use colo.seagate.com.
            mgmt:                            # Management network interfaces
              interface:
                - eno1                       <--------- Provide the interface identified for the mgmt network
              ipaddr:                        <----------- Can be left blank if it's DHCP
              netmask: 255.255.0.0           <----------- Can be updated depending on the IP address
            data:                            # Data network interfaces
              interface:
                - enp175s0f0                 <--------------- First data network interface name
                - enp175s0f1                 <--------------- Second data network interface name
              ipaddr: 172.19.20.10           <----------- If DHCP is not enabled, put in the public data nw static IP received from the infra team
              netmask: 255.255.0.0           <----------- Can be updated depending on the IP address
              roaming_ip:                    # Keep blank; will be populated by the provisioner
              gateway_ip: null               # Gateway IP of network; leave at the default value
- Update /opt/seagate/cortx/provisioner/pillar/components/storage.sls
      storage:
        enclosure-1:                  <-------- ID for the enclosure
          type: RBOD                  # RBOD/JBOD/Virtual/Other # equivalent to fqdn for server node
          controller:
            primary:
              ip: 10.0.0.2            <-------- IP address of controller A (if without in-band)
              port: 80
            secondary:
              ip: 10.0.0.3            <-------- IP address of controller B (if without in-band)
              port: 80
            user: manage              <-------- Controller access user
            secret: '!passwd'         <-------- Controller access secret
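Before applying the storage states, you may want to confirm the enclosure controllers are reachable on the management IPs you entered (the addresses below are the example values from the snippet above):
  # Both controller management IPs should respond
  $ping -c 2 10.0.0.2
  $ping -c 2 10.0.0.3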
- Update /opt/seagate/cortx/provisioner/pillar/components/release.sls
- Update pillar data
salt "*" saltutil.refresh_pillar
- Setup System
$salt '*' state.apply components.system
- Setup Storage
$salt '*' state.apply components.system.storage
- Setup Network (If bond0 is absent)
$salt '*' state.apply components.system.network
- Setup 3rd party components
- Build SSL certs rpm package for S3server
$salt '*' state.apply components.misc_pkgs.build_ssl_cert_rpms
- Setup Corosync-Pacemaker
$salt 'srvnode-2' state.apply components.ha.corosync-pacemaker
$salt 'srvnode-1' state.apply components.ha.corosync-pacemaker
- Setup Rsyslog
$salt '*' state.apply components.misc_pkgs.rsyslog
- Setup ElasticSearch
$salt '*' state.apply components.misc_pkgs.elasticsearch
- Setup HAProxy
$salt '*' state.apply components.ha.haproxy
- Setup OpenLDAP
$salt '*' state.apply components.misc_pkgs.openldap
- Setup Consul
$salt '*' state.apply components.misc_pkgs.consul
- Setup Kibana
$salt '*' state.apply components.misc_pkgs.kibana
- Setup node.js
$salt '*' state.apply components.misc_pkgs.nodejs
- Setup RabbitMQ
$salt 'srvnode-1' state.apply components.misc_pkgs.rabbitmq
$salt 'srvnode-2' state.apply components.misc_pkgs.rabbitmq
- Setup statsd
$salt '*' state.apply components.misc_pkgs.statsd
- Setup IO path components
- Setup Lustre Client
$salt '*' state.apply components.misc_pkgs.lustre
- Setup CORTX Core
$salt '*' state.apply components.motr
- Setup S3Server
$salt '*' state.apply components.s3server
- Setup Hare
$salt 'srvnode-2' state.apply components.hare
$salt 'srvnode-1' state.apply components.hare
$salt '*' state.apply components.ha.iostack-ha
- Check cluster status
$ pcs status
- Setup Management Stack
- Setup SSPL
$salt '*' state.apply components.sspl
- Setup CSM
$salt '*' state.apply components.csm
- Setup UDS
$salt '*' state.apply components.uds
- Add SSPL, CSM & UDS to HA
$salt '*' state.apply components.post_setup
The setup is now ready.