This could be #15588, maybe, or it could be something stranger; I personally don't recall ever seeing someone report that message. I'm not even sure what it's supposed to mean - the whole point of 512e drives is to pretend to allow 512b-aligned accesses, so erroring on that seems strange, and 4Kn disks would just reject the request outright. Are the disks you're getting those errors from 512e or 4Kn?
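One quick way to check, as a minimal sketch assuming a Linux host and that the pool members show up under /sys/block (512e disks report 512 logical / 4096 physical; 4Kn disks report 4096 for both):

```python
#!/usr/bin/env python3
# Sketch: classify each disk as 512n, 512e, or 4Kn from Linux sysfs.
# Assumes the disks appear as /sys/block/sd* entries.
from pathlib import Path

def sector_format(dev: str) -> str:
    """Classify a disk by its reported logical/physical sector sizes."""
    q = Path("/sys/block") / dev / "queue"
    logical = int((q / "logical_block_size").read_text())
    physical = int((q / "physical_block_size").read_text())
    if logical == 512 and physical == 512:
        return "512n"
    if logical == 512 and physical == 4096:
        return "512e"
    if logical == 4096:
        return "4Kn"
    return f"unusual ({logical}/{physical})"

for dev in sorted(p.name for p in Path("/sys/block").iterdir()):
    if dev.startswith("sd"):  # SAS/SATA disks; skips loop/dm devices
        print(f"{dev}: {sector_format(dev)}")
```

`lsblk -o NAME,LOG-SEC,PHY-SEC` gives the same information if you'd rather not script it.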
---
Hello!
This issue started a couple of months ago and I haven't been able to find any solution online.
Every so often under normal operation, this error shows up in dmesg and the affected disks get faulted.
Running a scrub is guaranteed to trigger it, and at minimum two of my drives get faulted (sometimes four). At first I assumed the drives had simply failed from old age, and I replaced five of them before getting suspicious. The double faults always happen within the same vdev, but it's different drives every time; sometimes they're write errors, sometimes reads. It seems random.
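A rough way to see whether the errors follow particular drives or particular slots is to tally the kernel log per device. This is only a sketch: it assumes a Linux host, that `dmesg` is readable by the current user, and that the errors surface as sd-layer "I/O error" lines (adjust the match to the exact message you see):

```python
#!/usr/bin/env python3
# Sketch: count kernel-log I/O error lines per sd device, to see
# whether faults cluster on specific drives or specific bays/ports.
import re
import subprocess
from collections import Counter

DEV_RE = re.compile(r"\b(sd[a-z]+)\b")

# `dmesg` often needs root; `journalctl -k` is an alternative source.
log = subprocess.run(["dmesg"], capture_output=True, text=True,
                     check=True).stdout

counts = Counter()
for line in log.splitlines():
    if "i/o error" in line.lower():  # adjust to your exact error text
        m = DEV_RE.search(line)
        if m:
            counts[m.group(1)] += 1

for dev, n in counts.most_common():
    print(f"{dev}: {n} error line(s)")
```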
I replaced the backplane, going from a Supermicro 846TQ to an 846A, but the errors still occurred.
I then replaced the HBA with an LSI 9207-8, but it still happens. The HBA is also connected to an Intel RES2SV240 expander.
The array has a mix of 512e (512-byte logical / 4K physical) and 4K-native drives, and the pool's ashift is 12 (4 KiB blocks). I don't know if this helps, but the error seems to imply the sector size could be an issue.
The pool has 24 drives arranged as RAIDZ2 vdevs of 8 drives each. All drives are 8 TB, a mix of manufacturers and of SAS and SATA.
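For reference, the rough math on that layout (a sketch that ignores ZFS metadata and RAIDZ padding overhead) also shows why two simultaneous faults in one vdev is the worst case, since RAIDZ2 tolerates exactly two failed drives per vdev:

```python
# Back-of-the-envelope numbers for the pool described above;
# ignores ZFS metadata and RAIDZ padding, so treat as approximate.
drives_total = 24
vdev_width = 8
parity = 2        # RAIDZ2
drive_tb = 8

vdevs = drives_total // vdev_width          # 3 vdevs
data_drives = vdev_width - parity           # 6 data drives per vdev
usable_tb = vdevs * data_drives * drive_tb  # ~144 TB before overhead

print(f"{vdevs} vdevs x {vdev_width} drives, {parity} parity each")
print(f"~{usable_tb} TB usable before overhead")
print(f"tolerates {parity} failed drives per vdev; a third means data loss")
```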