Description of problem:
Glusterd crashes frequently (the coredump captured below is from a glusterfsd brick process).
Expected results:
To not crash
Mandatory info:

- The output of the `gluster volume info` command:

```
Volume Name: var-data
Type: Replicate
Volume ID: f4aa5185-e286-4903-a6f1-67458f0c2541
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: XXXX-1:/data/brick/var-data
Brick2: XXXX-2:/data/brick/var-data
Brick3: XXXXX:/data/brick (arbiter)
Options Reconfigured:
cluster.favorite-child-policy: size
cluster.granular-entry-heal: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
performance.client-io-threads: off
cluster.data-self-heal: on
cluster.metadata-self-heal: on
cluster.entry-self-heal: on
cluster.self-heal-daemon: enable
cluster.shd-max-threads: 4
disperse.shd-wait-qlength: 2048
cluster.shd-wait-qlength: 2048
```
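For context, a replica volume with the brick layout shown above ("1 x (2 + 1) = 3", i.e. two data bricks plus one arbiter) would typically be created along these lines; this is a sketch only, reusing the masked hostnames from this report:

```sh
# Sketch: two data bricks plus one arbiter, matching the "1 x (2 + 1) = 3"
# layout shown above. Hostnames are the masked ones from this report.
gluster volume create var-data replica 3 arbiter 1 \
    XXXX-1:/data/brick/var-data \
    XXXX-2:/data/brick/var-data \
    XXXXX:/data/brick
gluster volume start var-data
```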
- The output of the `gluster volume status` command:

```
Status of volume: var-data
Gluster process                            TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick XXXX-1:/data/brick/var-data          49788     0          Y       3262015
Brick XXXX-2:/data/brick/var-data          57637     0          Y       2016530
Brick XXXXX:/data/brick                    52467     0          Y       5671
Self-heal Daemon on localhost              N/A       N/A        Y       2016548
Self-heal Daemon on XXXX-1                 N/A       N/A        Y       3262033
Self-heal Daemon on XXXXX                  N/A       N/A        Y       5242
Task Status of Volume var-data
There are no active volume tasks
```
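The status above shows all bricks and self-heal daemons online. After a crash, the usual CLI checks to reconfirm cluster health look like this (standard gluster/systemd commands):

```sh
# Standard checks to confirm brick and daemon health after a crash:
gluster volume status var-data   # ports, PIDs, online state (as shown above)
gluster peer status              # peers should report "Peer in Cluster (Connected)"
systemctl status glusterd        # management daemon state on each node
```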
- The output of the `gluster volume heal` command (a heal is currently in progress, so the summary output is shown):

```
Brick XXXX-1:/data/brick/var-data
Status: Connected
Total Number of entries: 150
Number of entries in heal pending: 150
Number of entries in split-brain: 0
Number of entries possibly healing: 0
Brick XXXX-2:/data/brick/var-data
Status: Connected
Total Number of entries: 0
Number of entries in heal pending: 0
Number of entries in split-brain: 0
Number of entries possibly healing: 0
Brick XXXXX:/data/brick
Status: Connected
Total Number of entries: 0
Number of entries in heal pending: 0
Number of entries in split-brain: 0
Number of entries possibly healing: 0
```
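The per-brick counts above come from the heal-info summary variant; to see which of the 150 entries are pending on Brick1, the full listing can be used (both are standard gluster CLI commands):

```sh
gluster volume heal var-data info summary   # per-brick counts, as shown above
gluster volume heal var-data info           # list the individual entries pending heal
```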
- Provide logs present on the following locations of client and server nodes:
  `/var/log/glusterfs/`

There are lots of logs; I will provide them if needed.
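A minimal sketch for bundling those logs for attachment, assuming the default log directory; the brick log file name is derived from the brick path and may differ in practice:

```sh
# Bundle the GlusterFS logs for attachment (default log directory assumed):
sudo tar czf gluster-logs.tar.gz /var/log/glusterfs/
# The log for the crashed brick is derived from the brick path, e.g.
# /var/log/glusterfs/bricks/data-brick-var-data.log (actual name may differ).
```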
- Is there any crash? Provide the backtrace and coredump:

```
Oct 01 04:29:57 XXXX-2 systemd-coredump[2003711]: [🡕] Process 682965 (glusterfsd) of user 0 dumped core.
```
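Since systemd-coredump captured the crash, the requested backtrace can be extracted with coredumpctl (standard systemd tooling; readable symbols require the matching glusterfs debuginfo packages):

```sh
# 682965 is the crashed glusterfsd PID from the journal entry above.
coredumpctl info 682965    # metadata plus a short backtrace, if available
coredumpctl gdb 682965     # open the core in gdb for a full backtrace
# inside gdb: (gdb) thread apply all bt full
```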
Additional info:

- The operating system / glusterfs version:
Rocky Linux 9.4 / GlusterFS 11.1

---

That might be. I have no experience building GlusterFS on RHEL/Rocky 9, but if a new release is not coming in the near future I'll take a look and try that patch.
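For reference, a rough sketch of what trying such a patch could look like on Rocky 9. Everything here is an assumption: the dependency list is approximate, the tarball URL follows the usual download.gluster.org layout, and `fix-glusterfsd-crash.patch` is a hypothetical file name:

```sh
# Rough sketch of a patched source build on Rocky 9 (dependencies approximate).
sudo dnf install -y gcc make flex bison openssl-devel libxml2-devel \
    python3-devel libaio-devel libuuid-devel readline-devel libacl-devel \
    userspace-rcu-devel libtirpc-devel rpcgen
curl -LO https://download.gluster.org/pub/gluster/glusterfs/11/11.1/glusterfs-11.1.tar.gz
tar xf glusterfs-11.1.tar.gz && cd glusterfs-11.1
patch -p1 < ../fix-glusterfsd-crash.patch   # hypothetical patch file
./configure
make -j"$(nproc)" && sudo make install
```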