
Commit 8bc41af

Merge branch 'OndrejHome:master' into master
iqfx authored Dec 13, 2022
2 parents 08415ba + e83ef87 commit 8bc41af
Showing 30 changed files with 327 additions and 66 deletions.
README.md: 10 changes (7 additions, 3 deletions)
@@ -1,7 +1,7 @@
ha-cluster-pacemaker
=========

Role for configuring and expanding a basic pacemaker cluster on CentOS/RHEL 6/7/8, AlmaLinux 8, Fedora 31/32/33 and CentOS 8 Stream systems.
Role for configuring and expanding a basic pacemaker cluster on CentOS/RHEL 6/7/8/9, AlmaLinux 8/9, Rocky Linux 8, Fedora 31/32/33/34/35/36 and CentOS 8 Stream systems.

This role can configure following aspects of pacemaker cluster:
- enable needed system repositories
@@ -42,10 +42,14 @@ This role depends on the role [ondrejhome.pcs-modules-2](https://github.com/OndrejHom

**CentOS 8 Stream** Tested with version 20201211; the minimal usable Ansible version is **2.9.16/2.10.4**. Version **2.8.18** was **not** working at the time of testing. This is related to [Service is in unknown state #71528](https://github.com/ansible/ansible/issues/71528).

**Debian Buster** Tested with version 20210310 with ansible version **2.10**. The Debian version does not include the stonith configuration and the firewall configuration. **Note:** This role went through only limited testing on Debian - not all features of this role were tested.
**Debian Buster** Tested with version 20210310 with ansible version **2.10**, and **Debian Bullseye** tested with version 20220326 with ansible version **2.12**. The Debian part of this role does not include the stonith configuration and the firewall configuration. **Note:** This role went through only limited testing on Debian - not all features of this role were tested.

Ansible versions **2.9.10** and **2.9.11** will fail with the error `"'hostvars' is undefined"` when trying to configure remote nodes. This applies only when there is at least one node with `cluster_node_is_remote=True`. **Avoid these Ansible versions** if you plan to configure remote nodes with this role.

On **CentOS Linux 8** you have to ensure that the BaseOS and AppStream repositories are working properly. As CentOS Linux 8 is in its End-Of-Life phase, this role configures the HA repository to point to vault.centos.org when repository configuration is requested with `enable_repos: true` (the default).
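
For reference, after this role points the repository at the vault, the HA repo file would look roughly like the following sketch (derived from the `tasks/centos_repos.yml` changes in this commit; the exact stock file layout may differ):

```
# /etc/yum.repos.d/CentOS-Linux-HighAvailability.repo (illustrative, CentOS Linux 8.3+)
# the mirrorlist option is removed and baseurl points to the vault
[ha]
baseurl=http://vault.centos.org/$contentdir/$releasever/HighAvailability/$basearch/os/
enabled=1
```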

Distributions shipping **pcs-0.11** (AlmaLinux 9, RHEL 9, Fedora 36) are supported only with ondrejhome.pcs-modules-2 version 27.0.0 or higher.
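
One way to satisfy this requirement is to pin the dependency in an `ansible-galaxy` requirements file, as in this minimal sketch (adjust the version as needed):

```
# requirements.yml (illustrative)
- src: ondrejhome.pcs-modules-2
  version: '27.0.0'
```

and install it with `ansible-galaxy install -r requirements.yml` before running this role.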

Role Variables
--------------

@@ -131,7 +135,7 @@ Role Variables
cluster_configure_stonith_style: 'one-device-per-node'
```
- (RHEL/CentOS/AlmaLinux) enable the repositories containing needed packages
- (RHEL/CentOS/AlmaLinux/Rocky) enable the repositories containing needed packages
```
enable_repos: true
```
defaults/main.yml: 25 changes (13 additions, 12 deletions)
@@ -32,23 +32,24 @@ cluster_configure_fence_kdump: false
# You must provide the IP/hostname of the vCenter/hypervisor and a username/password that is able to start/stop VMs for this cluster
cluster_configure_fence_vmware_soap: false
cluster_configure_fence_vmware_rest: false
#fence_vmware_ipaddr: ''
#fence_vmware_login: ''
#fence_vmware_passwd: ''
# fence_vmware_ipaddr: ''
# fence_vmware_login: ''
# fence_vmware_passwd: ''

# by default we use encrypted configuration (ssl=1) without validating certificates (ssl_insecure=1)
fence_vmware_options: 'ssl="1" ssl_insecure="1"'
# NOTE: Only one of fence_vmware_soap/fence_vmware_rest can be configured, as the stonith devices share the same name.
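# An illustrative combination (hypothetical vCenter host and credentials), e.g. in group_vars:
# cluster_configure_fence_vmware_rest: true
# fence_vmware_ipaddr: 'vcenter.example.com'
# fence_vmware_login: 'fence-user'
# fence_vmware_passwd: 'secret'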

# custom fence device configuration variable which allows you to define your own fence devices
# for proper options check examples below
#
#cluster_fence_config:
# fence_device_1:
# fence_type: 'fence_vmware_soap'
# fence_options: 'pcmk_host_map="fastvm-1:vm_name_on_hypevisor1" ipaddr="vcenter.hostname" login="root" passwd="testest" ssl="1" ssl_insecure="1" op monitor interval=30s'
# fence_device_2:
# fence_type: 'fence_xvm'
# fence_options: 'pcmk_host_map="fastvm-2:vm_name_n_hypervisor2" op monitor interval=30s'
# cluster_fence_config:
# fence_device_1:
# fence_type: 'fence_vmware_soap'
# fence_options: 'pcmk_host_map="fastvm-1:vm_name_on_hypevisor1" ipaddr="vcenter.hostname" login="root" passwd="testest" ssl="1" ssl_insecure="1" op monitor interval=30s'
# fence_device_2:
# fence_type: 'fence_xvm'
# fence_options: 'pcmk_host_map="fastvm-2:vm_name_n_hypervisor2" op monitor interval=30s'

# How to map fence devices to cluster nodes?
# by default a separate stonith device is created for every cluster node ('one-device-per-node').
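# the alternative 'one-device-per-cluster' style creates a single stonith device whose
# pcmk_host_map covers all cluster nodes - an illustrative setting, see the fence_* tasks below:
# cluster_configure_stonith_style: 'one-device-per-cluster'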
@@ -98,8 +99,8 @@ allow_cluster_expansion: false
# from interface `ens8` use `cluster_net_iface: 'ens8'`. Interface must exist on all cluster nodes.
cluster_net_iface: ''

#Redundant network interface. If specified the role will set up a corosync redundant ring using the default IPv4 from this interface.
#Interface must exist on all cluster nodes.
# Redundant network interface. If specified, the role will set up a corosync redundant ring using the default IPv4 address from this interface.
# Interface must exist on all cluster nodes.
rrp_interface: ''

# Whether to add hosts to /etc/hosts.
meta/main.yml: 37 changes (21 additions, 16 deletions)
@@ -5,21 +5,26 @@ galaxy_info:
license: GPLv3
min_ansible_version: 2.8
platforms:
- name: EL
versions:
- 6
- 7
- 8
- name: Fedora
versions:
- 31
- 32
- 33
- name: Debian
versions:
- 'buster'
- name: EL
versions:
- 6
- 7
- 8
- 9
- name: Fedora
versions:
- 31
- 32
- 33
- 34
- 35
- 36
- name: Debian
versions:
- 'buster'
- 'bullseye'
galaxy_tags:
- clustering
- pacemaker
- clustering
- pacemaker
dependencies:
- { role: ondrejhome.pcs-modules-2 }
- {role: ondrejhome.pcs-modules-2}
tasks/almalinux_repos.yml: 13 changes (13 additions, 0 deletions)
@@ -19,3 +19,16 @@
'HighAvailability' not in yum_repolist.stdout
and enable_repos | bool
and ansible_distribution_major_version in ['8']
- name: enable highavailability repository (AlmaLinux 9)
ini_file:
dest: '/etc/yum.repos.d/almalinux-highavailability.repo'
section: 'highavailability'
option: 'enabled'
value: '1'
create: 'no'
mode: '0644'
when: >-
'HighAvailability' not in yum_repolist.stdout
and enable_repos | bool
and ansible_distribution_major_version in ['9']
tasks/centos_repos.yml: 54 changes (52 additions, 2 deletions)
@@ -7,7 +7,7 @@
changed_when: false
check_mode: false

- name: enable highavailability repository (CentOS 8.1/8.2)
- name: EOL enable highavailability repository (CentOS 8.1, 8.2)
ini_file:
dest: '/etc/yum.repos.d/CentOS-HA.repo'
section: 'HighAvailability'
@@ -20,7 +20,7 @@
and enable_repos | bool
and ansible_distribution_version in ['8.1', '8.2']
- name: enable highavailability repository (CentOS 8.3+)
- name: EOL enable highavailability repository (CentOS 8.3, 8.4, 8.5)
ini_file:
dest: '/etc/yum.repos.d/CentOS-Linux-HighAvailability.repo'
section: 'ha'
@@ -34,6 +34,56 @@
and ansible_distribution_major_version in ['8']
and ansible_distribution_version not in ['8.0', '8.1', '8.2', '8']
- name: EOL disable mirrorlist for CentOS Linux 8.1, 8.2 HA repository
ini_file:
dest: '/etc/yum.repos.d/CentOS-HA.repo'
section: 'HighAvailability'
option: 'mirrorlist'
create: 'no'
mode: '0644'
state: absent
when: >-
enable_repos | bool
and ansible_distribution_version in ['8.1', '8.2']
- name: EOL disable mirrorlist for CentOS Linux 8.3, 8.4, 8.5 HA repository
ini_file:
dest: '/etc/yum.repos.d/CentOS-Linux-HighAvailability.repo'
section: 'ha'
option: 'mirrorlist'
create: 'no'
mode: '0644'
state: absent
when: >-
enable_repos | bool
and ansible_distribution_version in ['8.3', '8.4', '8.5']
- name: EOL configure baseurl for CentOS Linux 8.1, 8.2 HA repository
ini_file:
dest: '/etc/yum.repos.d/CentOS-HA.repo'
section: 'HighAvailability'
option: 'baseurl'
value: 'http://vault.centos.org/$contentdir/$releasever/HighAvailability/$basearch/os/'
create: 'no'
mode: '0644'
state: present
when: >-
enable_repos | bool
and ansible_distribution_version in ['8.1', '8.2']
- name: EOL configure baseurl for CentOS Linux 8.3, 8.4, 8.5 HA repository
ini_file:
dest: '/etc/yum.repos.d/CentOS-Linux-HighAvailability.repo'
section: 'ha'
option: 'baseurl'
value: 'http://vault.centos.org/$contentdir/$releasever/HighAvailability/$basearch/os/'
create: 'no'
mode: '0644'
state: present
when: >-
enable_repos | bool
and ansible_distribution_version in ['8.3', '8.4', '8.5']
- name: enable highavailability repository (CentOS 8 Stream)
ini_file:
dest: '/etc/yum.repos.d/CentOS-Stream-HighAvailability.repo'
tasks/cluster_constraint_colocation.yml: 2 changes (1 addition, 1 deletion)
@@ -8,4 +8,4 @@
resource2_role: "{{ item.resource2_role | default(omit) }}"
score: "{{ item.score | default(omit) }}"
with_items: "{{ cluster_constraint_colocation }}"
run_once: True
run_once: true
tasks/cluster_constraint_location.yml: 2 changes (1 addition, 1 deletion)
@@ -8,4 +8,4 @@
resource2_role: "{{ item.resource2_role | default(omit) }}"
score: "{{ item.score | default(omit) }}"
with_items: "{{ cluster_constraint_location }}"
run_once: True
run_once: true
tasks/cluster_constraint_order.yml: 2 changes (1 addition, 1 deletion)
@@ -9,4 +9,4 @@
kind: "{{ item.kind | default(omit) }}"
symmetrical: "{{ item.symmetrical | default(omit) }}"
with_items: "{{ cluster_constraint_order }}"
run_once: True
run_once: true
tasks/cluster_property.yml: 2 changes (1 addition, 1 deletion)
@@ -6,4 +6,4 @@
node: "{{ item.node | default(omit) }}"
value: "{{ item.value | default(omit) }}"
with_items: "{{ cluster_property }}"
run_once: True
run_once: true
tasks/cluster_resource.yml: 2 changes (1 addition, 1 deletion)
@@ -9,4 +9,4 @@
force_resource_update: "{{ item.force_resource_update | default(omit) }}"
child_name: "{{ item.child_name | default(omit) }}"
with_items: "{{ cluster_resource }}"
run_once: True
run_once: true
tasks/cluster_resource_defaults.yml: 2 changes (1 addition, 1 deletion)
@@ -6,4 +6,4 @@
defaults_type: "{{ item.defaults_type | default(omit) }}"
value: "{{ item.value | default(omit) }}"
with_items: "{{ cluster_resource_defaults }}"
run_once: True
run_once: true
tasks/debian10.yml: 12 changes (10 additions, 2 deletions)
@@ -3,12 +3,20 @@
apt:
name: "{{ cluster_node_is_remote | bool | ternary(pacemaker_remote_packages, pacemaker_packages) }}"
state: 'present'
cache_valid_time: 3600

- name: Install package(s) for fence_kdump
apt:
name: "{{ fence_kdump_packages }}"
state: 'present'
cache_valid_time: 3600
when: cluster_configure_fence_kdump|bool

- name: Check if Corosync configuration is default configuration
command: '/usr/bin/dpkg --verify corosync'
register: result
changed_when: False
check_mode: False
changed_when: false
check_mode: false

- name: Destroy default configuration
pcs_cluster:
tasks/debian11.yml: 31 changes (31 additions, 0 deletions)
@@ -0,0 +1,31 @@
---
- name: Install Pacemaker cluster packages to all nodes
apt:
name: "{{ cluster_node_is_remote | bool | ternary(pacemaker_remote_packages, pacemaker_packages) }}"
state: 'present'
cache_valid_time: 3600

- name: Install dependencies for pcs-modules-2
apt:
name: 'python3-distutils'
state: 'present'
cache_valid_time: 3600
when: ansible_distribution == 'Debian' and ansible_distribution_major_version == '11'

- name: Install package(s) for fence_kdump
apt:
name: "{{ fence_kdump_packages }}"
state: 'present'
cache_valid_time: 3600
when: cluster_configure_fence_kdump|bool

- name: Check if Corosync configuration is default configuration
command: '/usr/bin/dpkg --verify corosync'
register: result
changed_when: false
check_mode: false

- name: Destroy default configuration
pcs_cluster:
state: 'absent'
when: not result.stdout | regex_search(".* \/etc\/corosync\/corosync.conf$", multiline=True)
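
For context, `dpkg --verify` prints one rpm-style line per file that differs from the shipped package, so a locally modified Corosync configuration would show up roughly as follows (illustrative output; the exact flags vary):

```
??5?????? c /etc/corosync/corosync.conf
```

The `when` condition above therefore destroys the pre-existing default cluster configuration only while `corosync.conf` is still the untouched package default.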
tasks/fence_kdump.yml: 7 changes (4 additions, 3 deletions)
@@ -1,7 +1,7 @@
---
- name: Enable kdump service
service:
name: 'kdump'
name: "{{ kdump_service_name }}"
state: 'started'
enabled: true
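# NOTE: kdump_service_name is presumably defined in per-distribution vars files
# (e.g. 'kdump' on EL and likely 'kdump-tools' on Debian) - an assumption, as the vars files are not part of this diff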

@@ -12,9 +12,10 @@
name: "fence-kdump-{{ hostvars[item][cluster_hostname_fact] }}"
resource_class: 'stonith'
resource_type: 'fence_kdump'
options: "pcmk_host_list={{ hostvars[item][cluster_hostname_fact] }}"
options: "pcmk_host_list={{ hostvars[item][cluster_hostname_fact] }} {% if ansible_distribution == 'Debian' %}pcmk_monitor_action=metadata{% endif %}"
with_items: "{{ play_hosts }}"
run_once: true
# FIXME: fence_kdump on Debian returns exit code 1 for the 'monitor' op, so we use 'metadata' as a dummy replacement
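# For context, on Debian the created device is roughly equivalent to this pcs command
# (hypothetical node name 'fastvm-1'):
#   pcs stonith create fence-kdump-fastvm-1 fence_kdump pcmk_host_list=fastvm-1 pcmk_monitor_action=metadata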

- name: create fence constraints
pcs_constraint_location:
@@ -33,6 +34,6 @@
resource_class: 'stonith'
resource_type: 'fence_kdump'
options: >-
pcmk_host_map="{% for item in groups['cluster_node_is_remote_False'] %}{{ hostvars[item][cluster_hostname_fact] }}:{{ hostvars[item]['vm_name'] }};{% endfor %}"
pcmk_host_map="{% for item in groups['cluster'+rand_id+'_node_is_remote_False'] %}{{ hostvars[item][cluster_hostname_fact] }}:{{ hostvars[item]['vm_name'] }};{% endfor %}"
run_once: true
when: cluster_configure_stonith_style is defined and cluster_configure_stonith_style == 'one-device-per-cluster'
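# Rendered example (hypothetical hosts): with cluster nodes fastvm-1/fastvm-2 whose vm_name
# values are vm-1/vm-2, the Jinja loop above yields pcmk_host_map="fastvm-1:vm-1;fastvm-2:vm-2;"
# The same pattern is reused by the fence_vmware_* and fence_xvm tasks below.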
tasks/fence_vmware_rest.yml: 4 changes (2 additions, 2 deletions)
@@ -26,7 +26,7 @@

- name: Install fence_vmware_rest fencing agent on all nodes
yum:
name: "{{ fence_vmware_rest_packages }}"
name: "{{ fence_vmware_rest_packages }}"
state: 'installed'

- name: Configure separate stonith devices per cluster node
@@ -62,7 +62,7 @@
resource_class: 'stonith'
resource_type: 'fence_vmware_rest'
options: >-
pcmk_host_map="{% for item in groups['cluster_node_is_remote_False'] %}{{ hostvars[item][cluster_hostname_fact] }}:{{ hostvars[item]['vm_name'] }};{% endfor %}"
pcmk_host_map="{% for item in groups['cluster'+rand_id+'_node_is_remote_False'] %}{{ hostvars[item][cluster_hostname_fact] }}:{{ hostvars[item]['vm_name'] }};{% endfor %}"
ipaddr={{ fence_vmware_ipaddr }}
login={{ fence_vmware_login }}
passwd={{ fence_vmware_passwd }}
tasks/fence_vmware_soap.yml: 4 changes (2 additions, 2 deletions)
@@ -21,7 +21,7 @@

- name: Install fence_vmware_soap fencing agent on all nodes
yum:
name: "{{ fence_vmware_soap_packages }}"
name: "{{ fence_vmware_soap_packages }}"
state: 'installed'

- name: Configure separate stonith devices per cluster node
@@ -57,7 +57,7 @@
resource_class: 'stonith'
resource_type: 'fence_vmware_soap'
options: >-
pcmk_host_map="{% for item in groups['cluster_node_is_remote_False'] %}{{ hostvars[item][cluster_hostname_fact] }}:{{ hostvars[item]['vm_name'] }};{% endfor %}"
pcmk_host_map="{% for item in groups['cluster'+rand_id+'_node_is_remote_False'] %}{{ hostvars[item][cluster_hostname_fact] }}:{{ hostvars[item]['vm_name'] }};{% endfor %}"
ipaddr={{ fence_vmware_ipaddr }}
login={{ fence_vmware_login }}
passwd={{ fence_vmware_passwd }}
tasks/fence_xvm.yml: 4 changes (2 additions, 2 deletions)
@@ -25,7 +25,7 @@
state: 'enabled'
immediate: true
when: >-
(ansible_distribution_major_version in [ "7", "8" ] or ansible_distribution == 'Fedora')
(ansible_distribution_major_version in [ "7", "8", "9" ] or ansible_distribution == 'Fedora')
and cluster_firewall|bool
- name: Configure separate stonith devices per cluster node
@@ -58,7 +58,7 @@
resource_class: 'stonith'
resource_type: 'fence_xvm'
options: >-
pcmk_host_map="{% for item in groups['cluster_node_is_remote_False'] %}{{ hostvars[item][cluster_hostname_fact] }}:{{ hostvars[item]['vm_name'] }};{% endfor %}"
pcmk_host_map="{% for item in groups['cluster'+rand_id+'_node_is_remote_False'] %}{{ hostvars[item][cluster_hostname_fact] }}:{{ hostvars[item]['vm_name'] }};{% endfor %}"
op monitor interval=30s
run_once: true
when: cluster_configure_stonith_style is defined and cluster_configure_stonith_style == 'one-device-per-cluster'
