Error with deploy glusterfs in ovirt node #137

Open
AlexProfi opened this issue Feb 26, 2023 · 2 comments
AlexProfi commented Feb 26, 2023

Hello, I get the same error. SELinux is off. I found a previous issue about this here, but the advice from there doesn't help.

I tried the package from master, version 1.0.4, and the package from the ISO image:
TASK [gluster.infra/roles/backend_setup : Group devices by volume group name, including existing devices] ***
fatal: [brest2.f.com]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'str object' has no attribute 'vgname'\n\nThe error appears to be in '/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/get_vg_groupings.yml': line 3, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Group devices by volume group name, including existing devices\n ^ here\n"}
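For context, this Ansible error appears whenever a template accesses .vgname on a list element that is a plain string instead of a mapping; a minimal standalone playbook that reproduces the same message (illustrative only, not taken from the role):

- hosts: localhost
  gather_facts: false
  tasks:
    - name: Access .vgname on a string element
      ansible.builtin.debug:
        msg: "{{ item.vgname }}"
      loop:
        - { vgname: gluster_vg_sdb }  # a mapping works
        - gluster_vg_sdb              # a plain string fails with:
                                      # "'str object' has no attribute 'vgname'"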

This is my deploy file for a single-node GlusterFS setup:

hc_nodes:
  hosts:
    brest2.f.com:
      gluster_infra_volume_groups:
        - vgname: gluster_vg_sdb
          pvname: /dev/sdb
      gluster_infra_mount_devices:
        - path: /gluster_bricks/engine
          lvname: gluster_lv_engine
          vgname: gluster_vg_sdb
        - path: /gluster_bricks/data
          lvname: gluster_lv_data
          vgname: gluster_vg_sdb
        - path: /gluster_bricks/vmstore
          lvname: gluster_lv_vmstore
          vgname: gluster_vg_sdb
      blacklist_mpath_devices:
        - sdb
      gluster_infra_thick_lvs:
        - vgname: gluster_vg_sdb
          lvname: gluster_lv_engine
          size: 100G
      gluster_infra_thinpools:
        - vgname: gluster_vg_sdb
          thinpoolname: gluster_thinpool_gluster_vg_sdb
          poolmetadatasize: 3G
      gluster_infra_lv_logicalvols:
        - vgname: gluster_vg_sdb
          thinpool: gluster_thinpool_gluster_vg_sdb
          lvname: gluster_lv_data
          lvsize: 500G
        - vgname: gluster_vg_sdb
          thinpool: gluster_thinpool_gluster_vg_sdb
          lvname: gluster_lv_vmstore
          lvsize: 500G
  vars:
    gluster_infra_disktype: RAID6
    gluster_infra_stripe_unit_size: 256
    gluster_infra_diskcount: 10
    gluster_set_selinux_labels: true
    gluster_infra_fw_ports:
      - 2049/tcp
      - 54321/tcp
      - 5900/tcp
      - 5900-6923/tcp
      - 5666/tcp
      - 16514/tcp
    gluster_infra_fw_permanent: true
    gluster_infra_fw_state: enabled
    gluster_infra_fw_zone: public
    gluster_infra_fw_services:
      - glusterfs
    gluster_features_force_varlogsizecheck: false
    cluster_nodes:
      - brest2.f.com
    gluster_features_hci_cluster: '{{ cluster_nodes }}'
    gluster_features_hci_volumes:
      - volname: engine
        brick: /gluster_bricks/engine/engine
        arbiter: 0
      - volname: data
        brick: /gluster_bricks/data/data
        arbiter: 0
      - volname: vmstore
        brick: /gluster_bricks/vmstore/vmstore
        arbiter: 0
    gluster_features_hci_volume_options:
      storage.owner-uid: '36'
      storage.owner-gid: '36'
      features.shard: 'on'
      performance.low-prio-threads: '32'
      performance.strict-o-direct: 'on'
      network.remote-dio: 'off'
      network.ping-timeout: '30'
      user.cifs: 'off'
      nfs.disable: 'on'
      performance.quick-read: 'off'
      performance.read-ahead: 'off'
      performance.io-cache: 'off'
      cluster.eager-lock: enable

How do I resolve this? I am using the oVirt 4.5 node ISO.


charnet1019 commented Mar 22, 2023

Same issue. Have you fixed it?

@liutaurasa

I was able to bypass the issue by adding a pvs key to all LVM definitions and fixing the Ansible tasks in a few places.
Below is my configuration (I removed the config of the other nodes, as it is identical to node1's, so there is no need to show it here).
Please note that the elements of the gluster_infra_thick_lvs, gluster_infra_thinpools, and gluster_infra_lv_logicalvols arrays all have pvs keys.

hc_nodes:
  hosts:
    node1:
      gluster_infra_volume_groups:
        - vgname: vg_nvme
          pvname: /dev/nvme0n1
        - vgname: vg_sata
          pvname: /dev/sda
        - vgname: vg_infra
          pvname: /dev/md3
      gluster_infra_mount_devices:
        - path: /gluster_bricks/engine
          lvname: lv_engine
          vgname: vg_infra
        - path: /gluster_bricks/data-sata
          lvname: lv_data
          vgname: vg_sata
        - path: /gluster_bricks/vmstore-sata
          lvname: lv_vmstore
          vgname: vg_sata
        - path: /gluster_bricks/data-nvme
          lvname: lv_data
          vgname: vg_nvme
        - path: /gluster_bricks/vmstore-nvme
          lvname: lv_vmstore
          vgname: vg_nvme
      blacklist_mpath_devices:
        - nvme0n1
        - nvme1n1
        - nvme2n1
        - sda
        - md3
      #gluster_infra_cache_vars:
      #  - vgname: vg_sata
      #    cachedisk: /dev/md3
      #    cachelvname: lv_cache
      #    cachetarget: thinpool_vg_sata
      #    cachelvsize: 5G
      #    cachemode: writethrough
      gluster_infra_thick_lvs:
        - vgname: vg_infra
          lvname: lv_engine
          size: 100G
          pvs: /dev/md3
      gluster_infra_thinpools:
        - vgname: vg_nvme
          thinpoolname: thinpool_nvme
          thinpoolsize: 100G
          poolmetadatasize: 3G
          pvs: /dev/nvme0n1
        - vgname: vg_sata
          thinpoolname: thinpool_sata
          thinpoolsize: 1T
          poolmetadatasize: 3G
          pvs: /dev/sda
      gluster_infra_lv_logicalvols:
        - vgname: vg_sata
          thinpool: thinpool_sata
          lvname: lv_data
          lvsize: 1T
          pvs: /dev/sda
        - vgname: vg_nvme
          thinpool: thinpool_nvme
          lvname: lv_data
          lvsize: 200G
          pvs: /dev/nvme0n1
        - vgname: vg_sata
          thinpool: thinpool_sata
          lvname: lv_vmstore
          lvsize: 1T
          pvs: /dev/sda
        - vgname: vg_nvme
          thinpool: thinpool_nvme
          lvname: lv_vmstore
          lvsize: 200G
          pvs: /dev/nvme0n1
...
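For what it's worth on the design: passing PV paths at the end of lvcreate restricts extent allocation to those physical volumes instead of any free PV in the VG, which matters when one VG backs several bricks. With the patched thick-LV task in the diff below, the lv_engine entry above renders to roughly this command (reconstructed by hand):

lvcreate --yes -L 100G -n lv_engine vg_infra /dev/md3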

Here is the diff of the changes I made to the backend_setup tasks:

diff -Nupr ./cache_setup.yml /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/cache_setup.yml
--- ./cache_setup.yml	2023-03-23 09:46:02.693167826 +0100
+++ /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/cache_setup.yml	2023-03-24 13:07:08.867586480 +0100
@@ -7,7 +7,7 @@
 # caching)
 
 - name: Extend volume group
-  command: "vgextend --dataalignment 256K {{ item.vgname }} {{ item.cachedisk.split(',')[1] }}"
+  command: "vgextend --dataalignment 256K {{ item.vgname }} {{ item.cachedisk.split(',')[0] }}"
   # lvg:
   #    state: present
   #    vg: "{{ item.vgname }}"
@@ -29,7 +29,7 @@
 
 - name: Create LV for cache
   ansible.builtin.expect:
-    command: "lvcreate -L {{ item.cachelvsize }} -n {{ item.cachelvname }} {{ item.vgname }}"
+    command: "lvcreate --yes -L {{ item.cachelvsize }} -n {{ item.cachelvname }} {{ item.vgname }}"
     responses:
       (.*)WARNING:(.*): "y"
   # lvol:
@@ -40,7 +40,7 @@
   with_items: "{{ gluster_infra_cache_vars }}"
 
 - name: Create metadata LV for cache
-  command: "lvcreate -L {{ item.cachemetalvsize }} -n {{ item.cachemetalvname }} {{ item.vgname }}"
+  command: "lvcreate --yes -L {{ item.cachemetalvsize }} -n {{ item.cachemetalvname }} {{ item.vgname }}"
   # lvol:
   #    state: present
   #    vg: "{{ item.vgname }}"
diff -Nupr ./thick_lv_create.yml /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/thick_lv_create.yml
--- ./thick_lv_create.yml	2023-03-23 09:46:02.693167826 +0100
+++ /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/thick_lv_create.yml	2023-03-24 13:07:08.868586483 +0100
@@ -26,7 +26,6 @@
       {%- set output=[] -%}
       {%- for cnf in gluster_infra_thick_lvs -%}
       {%- if cnf is defined and cnf is not none and cnf.vgname is defined
-            and (cnf.pvs is defined)
       -%}
       {{- output.append({"vgname": cnf.vgname, "raid": cnf.raid | default() , "pvname": (cnf.pvs|default('')).split(',') | select | list | unique | join(',')}) -}}
       {%- endif -%}
@@ -50,7 +49,7 @@
          {%- endif -%}
          {%- else -%}
          {{- 4 -}}
-         {%- endif -%} {{ (item.value | first).vgname }} {{ item.value | json_query('[].pvname') | unique | join(',') }}  {% if (item.value | first).raid is defined and (item.value | first).raid is not none
+         {%- endif %} {{ (item.value | first).vgname }} {{ item.value | json_query('[].pvname') | unique | join(',') }}  {% if (item.value | first).raid is defined and (item.value | first).raid is not none
             and (item.value | first).raid.level is defined and (item.value | first).raid.devices is defined and (item.value | first).raid.stripe is defined
             and (item.value | first).raid.level in [0,5,6,10]%}
          {% if (item.value | first).raid.level == 0 %}
@@ -102,6 +101,7 @@
   #       {%- else -%}
   #       {{- 4 -}}
   #       {%- endif -%}
+  failed_when: gluster_changed_vgs.rc is defined and gluster_changed_vgs.rc not in [0, 3, 5]
   loop: "{{ gluster_volumes_by_groupname | dict2items if gluster_volumes_by_groupname is defined and gluster_volumes_by_groupname is not none else [] }}"
   loop_control:
    index_var: index
@@ -116,7 +116,7 @@
 
 # Create a thick logical volume.
 - name: Create thick logical volume
-  command: "lvcreate -L {{ item.size | default('100%FREE') }}  -n {{ item.lvname }} {{ item.pvs | default() }} {{ item.vgname }} "
+  command: "lvcreate --yes -L {{ item.size | default('100%FREE') }}  -n {{ item.lvname }} {{ item.vgname }} {{ item.pvs | default() }} "
   #lvol:
   #  state: present
   #  vg: "{{ item.vgname }}"
@@ -126,6 +126,8 @@
   #  opts: "{{ item.opts | default() }}"
   #  shrink: "{{ item.shrink if item.shrink is defined and item.shrink is not none else true }}"
   with_items: "{{ gluster_infra_thick_lvs }}"
+  register: lvcreate_results
+  failed_when: lvcreate_results.rc is defined and lvcreate_results.rc not in [0, 5]
   loop_control:
    index_var: index
   when: item is not none and lv_device_exists.results[index].stdout_lines is defined and "0" not in lv_device_exists.results[index].stdout_lines
diff -Nupr ./thin_pool_create.yml /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/thin_pool_create.yml
--- ./thin_pool_create.yml	2023-03-23 09:46:02.693167826 +0100
+++ /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/thin_pool_create.yml	2023-03-24 13:07:08.867586480 +0100
@@ -63,7 +63,6 @@
       {%- for cnf in gluster_infra_thinpools -%}
       {%- if cnf is defined and cnf is not none and cnf.thinpoolname is defined and cnf.vgname is defined
             and (thinpool_attrs.results[loop.index0].stdout is not defined or thinpool_attrs.results[loop.index0].stdout.find("t") != 0)
-            and (cnf.meta_pvs is defined or cnf.pvs is defined)
       -%}
       {{- output.append({"vgname": cnf.vgname, "pvname": (cnf.pvs|default('') ~ ',' ~ (cnf.meta_pvs|default(''))).split(',') | select | list | unique | join(',')}) -}}
       {%- endif -%}
@@ -91,7 +90,7 @@
          {%- endif -%}
          {%- else -%}
          {{- 4 -}}
-         {%- endif -%} {{ (item.value | first).vgname }} {{ item.value | json_query('[].pvname') | unique | join(',') }} {% if (item.value | first).raid is defined and (item.value | first).raid is not none
+         {%- endif %} {{ (item.value | first).vgname }} {{ item.value | json_query('[].pvname') | unique | join(',') }} {% if (item.value | first).raid is defined and (item.value | first).raid is not none
             and (item.value | first).raid.level is defined and (item.value | first).raid.devices is defined and (item.value | first).raid.stripe is defined
             and (item.value | first).raid.level in [0,5,6,10]%}
          {% if (item.value | first).raid.level == 0 %}
@@ -143,6 +142,7 @@
   #       {%- else -%}
   #       {{- 4 -}}
   #       {%- endif -%}
+  failed_when: gluster_changed_vgs.rc is defined and gluster_changed_vgs.rc not in [0, 3, 5]
   loop: "{{ gluster_volumes_by_groupname | dict2items }}"
   loop_control:
    index_var: index
@@ -156,7 +156,7 @@
   when: gluster_changed_vgs.changed
 
 - name: Create a LV thinpool-data
-  command: "lvcreate -l {{ item.thinpoolsize | default('100%FREE') }} --options {{ item.opts | default('') }}  -n {{ item.thinpoolname }} {{ item.pvs | default() }} {{ item.vgname }} "
+  command: "lvcreate --yes -l {{ item.thinpoolsize | default('100%FREE') }} --options {{ item.opts | default('') }}  -n {{ item.thinpoolname }} {{ item.pvs | default() }} {{ item.vgname }} "
   # lvol:
   #    state: present
   #    shrink: false
@@ -178,7 +178,7 @@
 
 
 - name: Create a LV thinpool-meta
-  command: "lvcreate -l {{ item.poolmetadatasize | default('16G') }} --options {{ ((item.meta_opts is defined and item.meta_opts) or item.opts) | default('') }}  -n {{ item.thinpoolname }}_meta {{ ((item.meta_pvs is defined and item.meta_pvs) or item.pvs) | default() }} {{ item.vgname }} "
+  command: "lvcreate --yes -l {{ item.poolmetadatasize | default('16G') }} --options {{ ((item.meta_opts is defined and item.meta_opts) or item.opts) | default('') }}  -n {{ item.thinpoolname }}_meta {{ ((item.meta_pvs is defined and item.meta_pvs) or item.pvs) | default() }} {{ item.vgname }} "
   # lvol:
   #    state: present
   #    shrink: false
@@ -232,7 +232,7 @@
 
 
 - name: Create a LV thinpool
-  command: "lvcreate {% if item.thinpoolsize is defined  %} -L {{ item.thinpoolsize }} {% else %} -l 100%FREE  {% endif %} --options {% if item.raid is defined and item.raid is not none
+  command: "lvcreate --yes {% if item.thinpoolsize is defined  %} -L {{ item.thinpoolsize }} {% else %} -l 100%FREE  {% endif %} --options {% if item.raid is defined and item.raid is not none
             and item.raid.level is defined and item.raid.devices is defined and item.raid.stripe is defined
             and item.raid.level in [0,5,6,10]%}
          {% if item.raid.level == 0 %}
@@ -286,7 +286,7 @@
 #end-block
 
 - name: Create a LV thinpool for similar device types
-  command: "lvcreate --type thin-pool --zero n {% if item.thinpoolsize is defined  %} -L {{ item.thinpoolsize }} {% else %} -l 100%FREE  {% endif %} --chunksize {{ lv_chunksize }} --poolmetadatasize {{ item.poolmetadatasize + \"iB\" }} -n {{ item.thinpoolname }} {{ item.vgname }} "
+  command: "lvcreate --yes --type thin-pool --zero n {% if item.thinpoolsize is defined  %} -L {{ item.thinpoolsize }} {% else %} -l 100%FREE  {% endif %} --chunksize {{ lv_chunksize }} --poolmetadatasize {{ item.poolmetadatasize + \"iB\" }} -n {{ item.thinpoolname }} {{ item.vgname }} "
   # lvol:
   #    state: present
   #    shrink: false
@@ -296,6 +296,8 @@
   #    opts: " --chunksize {{ lv_chunksize }}
   #            --poolmetadatasize {{ item.poolmetadatasize }}
   #            --zero n"
+  register: lvcreate_thin_pools_results
+  failed_when: lvcreate_thin_pools_results.rc is defined and lvcreate_thin_pools_results.rc not in [0, 5]
   with_items: "{{ gluster_infra_thinpools }}"
   when: gluster_infra_thinpools is defined and item.raid is not defined
 
diff -Nupr ./thin_volume_create.yml /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/thin_volume_create.yml
--- ./thin_volume_create.yml	2023-03-23 09:46:02.693167826 +0100
+++ /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/thin_volume_create.yml	2023-03-24 13:07:08.867586480 +0100
@@ -61,6 +61,7 @@
   #    vg: "{{ (item.value | first).vgname }}"
   #    pvs: "{{ item.value | json_query('[].pvname') | unique | join(',') }}"
   #    pv_options: "--dataalignment 256K"
+  failed_when: gluster_changed_vgs.rc is defined and gluster_changed_vgs.rc not in [0, 3, 5]
   loop: "{{ gluster_volumes_by_groupname | dict2items }}"
   loop_control:
    index_var: index
@@ -75,7 +76,7 @@
 
 
 - name: Create a LV thinlv-data
-  command: "lvcreate -l {{ item.lvsize | default('100%FREE') }} --options {{ item.opts | default('') }}  -n {{ item.lvname }} {{ item.pvs | default() }} {{ item.vgname }} "
+  command: "lvcreate --yes -l {{ item.lvsize | default('100%FREE') }} --options {{ item.opts | default('') }}  -n {{ item.lvname }} {{ item.pvs | default() }} {{ item.vgname }} "
   # lvol:
   #    state: present
   #    shrink: false
@@ -86,6 +87,8 @@
   #    opts: "
   #            {{ item.opts | default('') }} "
   with_items: "{{ gluster_infra_lv_logicalvols }}"
+  register: lvcreate_results
+  failed_when: lvcreate_results.rc is defined and lvcreate_results.rc not in [0, 5]
   loop_control:
    index_var: index
   when: >
@@ -96,7 +99,7 @@
 
 
 - name: Create a LV thinlv-meta
-  command: "lvcreate -l {{ item.meta_size | default('16G') }} --options {{ ((item.meta_opts is defined and item.meta_opts) or item.opts) | default('') }}  -n {{ item.lvname }}_meta {{ ((item.meta_pvs is defined and item.meta_pvs) or item.pvs) | default() }} {{ item.vgname }} "
+  command: "lvcreate --yes -l {{ item.meta_size | default('16G') }} --options {{ ((item.meta_opts is defined and item.meta_opts) or item.opts) | default('') }}  -n {{ item.lvname }}_meta {{ ((item.meta_pvs is defined and item.meta_pvs) or item.pvs) | default() }} {{ item.vgname }} "
   # lvol:
   #    state: present
   #    shrink: false
@@ -107,6 +110,8 @@
   #    opts: "
   #            {{ ((item.meta_opts is defined and item.meta_opts) or item.opts) | default('') }} "
   with_items: "{{ gluster_infra_lv_logicalvols }}"
+  register: lvcreate_results
+  failed_when: lvcreate_results.rc is defined and lvcreate_results.rc not in [0, 5]
   loop_control:
    index_var: index
   when: >
@@ -133,7 +138,7 @@
 
 #this fails when the pool doesn't exist
 - name: Create thin logical volume
-  command: "lvcreate  -T {{ item.vgname }}/{{ item.thinpool }} -V {{ item.lvsize }} -n {{ item.lvname }}"
+  command: "lvcreate --yes  -T {{ item.vgname }}/{{ item.thinpool }} -V {{ item.lvsize }} -n {{ item.lvname }}"
   # lvol:
   #    state: present
   #    vg: "{{ item.vgname }}"
@@ -145,6 +150,8 @@
   #    opts: >
   #     {{ item.opts | default() }}
   with_items: "{{ gluster_infra_lv_logicalvols }}"
+  register: lvcreate_results
+  failed_when: lvcreate_results.rc is defined and lvcreate_results.rc not in [0, 5]
   loop_control:
    index_var: index
   when: >
diff -Nupr ./vg_create.yml /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vg_create.yml
--- ./vg_create.yml	2023-03-23 09:46:02.693167826 +0100
+++ /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vg_create.yml	2023-03-23 15:38:34.445005209 +0100
@@ -70,6 +70,7 @@
  #   pv_options: "--dataalignment {{ item.value.pv_dataalign | default(pv_dataalign) }}"
     # pesize is 4m by default for JBODs
  #   pesize: "{{ vg_pesize | default(4) }}"
+  failed_when: gluster_changed_vgs.rc is defined and gluster_changed_vgs.rc not in [0, 3, 5]
   loop: "{{gluster_volumes_by_groupname | default({}) | dict2items}}"
   when: gluster_volumes_by_groupname is defined and item.value|length>0
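A note on the failed_when whitelists above, as I read them: LVM2 commands exit non-zero whenever the requested object cannot be created, including when it already exists (5 is the generic ECMD_FAILED code), so tolerating those return codes lets the deploy be re-run over existing VGs and LVs instead of aborting. The same pattern in isolation (names and size are illustrative):

- name: Create LV, tolerating an already-existing volume
  ansible.builtin.command: "lvcreate --yes -L 10G -n lv_example vg_example"
  register: lv_result
  changed_when: lv_result.rc == 0
  failed_when: lv_result.rc not in [0, 5]  # 5 = ECMD_FAILED, e.g. LV already exists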
