Software Upgrade Manual
- List the existing rpms
$ (rpm -qa|grep cortx) |tee before_update.txt
log4cxx_cortx-0.10.0-1.x86_64
log4cxx_cortx-devel-0.10.0-1.x86_64
cortx-libsspl_sec-1.0.0-28_git38d9759.el7.x86_64
cortx-csm_web-1.0.0-37_011bd7c.x86_64
cortx-hare-1.0.0-56_gita354d38.el7.x86_64
cortx-py-utils-1.0.0-49_73e45fa.noarch
cortx-sspl-1.0.0-28_git38d9759.el7.noarch
cortx-sspl-test-1.0.0-28_git38d9759.el7.noarch
python36-cortx-prvsnr-0.41.0-83_git8bb88f6b.x86_64
cortx-motr-1.0.0-69_git6d5a19b_3.10.0_1127.el7.x86_64
cortx-s3server-1.0.0-65_gitc1a578f4_el7.x86_64
cortx-ha-1.0.0-33_6f50fb9.x86_64
cortx-csm_agent-1.0.0-51_5ba15bb9.x86_64
cortx-libsspl_sec-method_none-1.0.0-28_git38d9759.el7.x86_64
cortx-prvsnr-1.0.0-83_git8bb88f6b_el7.x86_64
cortx-s3iamcli-1.0.0-65_gitc1a578f4.noarch
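Once the whole procedure is complete, the same query can be re-run and diffed against this snapshot to confirm every CORTX package moved to the new build (after_update.txt is only a suggested filename):
$ (rpm -qa|grep cortx) |tee after_update.txt
$ diff before_update.txt after_update.txt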
- Capture yum checkpoint
$ yum history |tee yum_before_update.txt
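The transaction IDs captured here mark the pre-upgrade checkpoint; any later transaction can be inspected against it with yum history info (the ID below is a placeholder):
$ yum history info <transaction_id>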
- Check existing release data
$ salt-call pillar.items release|tee release_pillar_before_update.txt
local:
    ----------
    release:
        ----------
        target_build:
            file:///var/lib/seagate/cortx/provisioner/local/cortx_repos/cortx_single_iso
        type:
            bundle
        update:
            ----------
            base_dir:
                /opt/seagate/cortx/updates
            repos:
                ----------
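If only the update branch of the pillar is of interest, pillar.get with a colon-separated key narrows the output (standard Salt usage, not specific to this procedure):
$ salt-call pillar.get release:update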
- Put the cluster into maintenance state
$ hctl node maintenance --all
- Ensure the cluster is offline
$ pcs status
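With every node in maintenance, no resource should report as Started; filtering the status output is a quick way to confirm that (the grep pattern is only a suggestion):
$ pcs status | grep -i started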
- Create directory for ISO
$ salt "*" cmd.run "mkdir -p /opt/isos"
- Download the upgrade ISO to /opt/isos
- Copy ISO to update location
$ cp /opt/isos/<update_iso>.iso /var/lib/seagate/cortx/provisioner/shared/srv/salt/misc_pkgs/swupdate/repo/files/<update_iso>.iso
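Optionally, verify the copy is intact by comparing checksums of the two files; the <update_iso> placeholder is the same one used above:
$ sha256sum /opt/isos/<update_iso>.iso /var/lib/seagate/cortx/provisioner/shared/srv/salt/misc_pkgs/swupdate/repo/files/<update_iso>.iso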
- Update the release pillar with the new update ISO
$ provisioner pillar_set release/update/repos/<update_iso_version> \"iso\"
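The new entry can be confirmed by re-reading the release pillar, exactly as in the capture step above:
$ salt-call pillar.items release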
- Mount the ISO on both nodes
$ salt "*" state.apply components.misc_pkgs.swupdate.repo
- Append the 3rd-party repo with:
[3rd_party_update]
baseurl=file:///opt/seagate/cortx/updates/<update_iso>/3rd_party
gpgcheck=0
name=Repository 3rd_party updates
enabled=1
- Append /cortx_iso to the baseurl in /etc/yum.repos.d/sw_update_1.0.0-531.repo so the entry reads:
[sw_update_1.0.0]
baseurl=file:///opt/seagate/cortx/updates/<update_iso>/cortx_iso
gpgcheck=0
name=Cortx Update repo
enabled=1
- Copy these files to both nodes
$ scp -r /opt/isos/* srvnode-2:/opt/isos/
$ scp -r /etc/yum.repos.d/*.repo srvnode-2:/etc/yum.repos.d/
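Both repos should now be visible on both nodes; the repo IDs come from the stanzas added above:
$ salt "*" cmd.run "yum repolist enabled"
Confirm 3rd_party_update and sw_update_1.0.0 appear in the output.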
Manually run the following salt states
- Provisioner upgrade
salt "*" state.apply components.provisioner.update salt "*" state.sls_id salt_master_config_updated components.provisioner.salt_master.config salt "*" state.sls_id salt_minion_config_updated components.provisioner.salt_minion.config systemctl restart salt-minion salt-master ssh srvnode-2 "systemctl restart salt-minion salt-master" python3 -c "from provisioner import salt_minion; salt_minion.ensure_salt_minions_are_ready(['srvnode-1', 'srvnode-2'])"
- Upgrade 3rd Party Software
salt "*" state.apply components.ha.haproxy.install salt "*" state.apply components.ha.haproxy.config
NOTE: This is a special case for rabbitmq-server. erlang-examples doesn't upgrade from R16B-03.18.el7 to 23.1.1-1.el7
salt "*" pkg.purge rabbitmq-server salt "*" pkg.purge erlang salt "*" cmd.run "yum -y autoremove" salt "*" state.apply components.misc_pkgs.rabbitmq.install salt "*" state.apply components.misc_pkgs.rabbitmq.config salt "srvnode-2" state.apply components.sync.software.rabbitmq salt "srvnode-1" state.apply components.sync.software.rabbitmq
- IOStack upgrades
salt "*" state.apply components.misc_pkgs.lustre.install salt "*" state.apply components.misc_pkgs.lustre.config salt "*" state.apply components.motr.install salt "*" state.apply components.motr.config salt "*" state.apply components.hare.install salt "*" state.apply components.s3server.install
- ControlStack upgrades
salt "*" state.apply components.sspl.install salt "*" state.apply components.uds.install salt "*" state.apply components.uds.config salt "*" state.apply components.csm.install salt "srvnode-2" state.apply components.csm.config salt "srvnode-1" state.apply components.csm.config
- Get cluster out of maintenance mode
hctl node unmaintenance --all
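Resources should start coming back online once maintenance is lifted; the same pcs status check used earlier shows their progress:
pcs status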
- Update SSPL and generate healthmap
salt "*" state.apply components.sspl.config salt "srvnode-2" state.apply components.sspl.health_view salt "srvnode-1" state.apply components.sspl.health_view /opt/seagate/cortx/ha/conf/script/build-ha-update /var/lib/hare/cluster.yaml /opt/seagate/cortx/iostack-ha/conf/build-ha-args.yaml /opt/seagate/cortx/ha/conf/build-ha-csm-args.yaml
- Refresh cluster
pcs resource refresh --force --all
pcs resource cleanup --all
pcs status
- Create new backup
provisioner deploy --states backup