
EL7 (CentOS 7.6) After Native ZFS Installation, Second Array causes grub to fail: No Device found #231

Makr91 commented Jun 17, 2019

After following the procedures (and learning the hard way to read the instructions fully, a few times), I was able to get a bootable installation using a ZFS stripe across two SSDs. I am fairly new to ZFS, so forgive my ignorance. I want two arrays: rpool for the base OS on the two aforementioned striped SSDs, plus another array added to the machine as Array-0. I would prefer this second array to be striped like the first, but mirroring it is fine as well. This is a physical machine, not a VM. I created Array-0 with the same command-line instructions used to create the rpool.

The exact error, right after POST, is: error : no such device XXXXXXXXXXXXXXXX

When I take the two drives in the second array out, the machine boots correctly. Putting them back in, recreating Array-0, and rebooting reproduces the issue.

For the secondary array, do I need to modify some file like /etc/rc.conf (with something like ZFS_enabled=true), as I have seen suggested in a few searches?
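For what it's worth, EL7 has no /etc/rc.conf; whether a pool comes back at boot is handled by the ZFS systemd units and the zpool.cache file. A minimal sketch, assuming the stock units shipped with the ZFS on Linux EL7 packages:

# import pools recorded in /etc/zfs/zpool.cache and mount their datasets at boot
systemctl enable zfs-import-cache.service zfs-mount.service zfs.target
# confirm the new pool is actually being recorded in the cache file
zpool get cachefile Array-0

Note this only affects the import after the kernel is up; it has no bearing on whether GRUB itself can get past its device scan.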

I have tried creating these both via GPT labels (i.e. /dev/sda through /dev/sdd, etc.) and via /dev/disk/by-id/.

Should I try to create the arrays via the WWN links (wwn-0xXXXXXXXXXXXX)?
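A WWN-based create would just swap in the wwn-* symlinks from the same /dev/disk/by-id/ directory. A sketch using the WWNs that show up for the Hitachi drives in the listings below, with the GRUB feature restrictions dropped since GRUB never has to read this data pool:

zpool create -o ashift=12 -O compression=lz4 Array-0 \
  /dev/disk/by-id/wwn-0x5000cca225dd4090 \
  /dev/disk/by-id/wwn-0x5000cca225dd6c9b

The naming scheme only changes how the vdevs are labeled, though; it should not change whether GRUB trips over the pool.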

Here is what my current /dev/disk/by-id/ looks like after booting without the two drives:

ata-KINGSTON_SA400S37240G_50026B7682B69107
ata-KINGSTON_SA400S37240G_50026B7682B69107-part1
ata-KINGSTON_SA400S37240G_50026B7682B69107-part9
ata-KINGSTON_SA400S37240G_50026B7682B69AAD
ata-KINGSTON_SA400S37240G_50026B7682B69AAD-part1
ata-KINGSTON_SA400S37240G_50026B7682B69AAD-part9
wwn-0x50026b7682b69107
wwn-0x50026b7682b69107-part1
wwn-0x50026b7682b69107-part9
wwn-0x50026b7682b69aad
wwn-0x50026b7682b69aad-part1
wwn-0x50026b7682b69aad-part9

and /dev:
sda
sda1
sda9
sdb
sdb1
sdb9

And after I add the two drives:

ata-KINGSTON_SA400S37240G_50026B7682B69107
ata-KINGSTON_SA400S37240G_50026B7682B69107-part1
ata-KINGSTON_SA400S37240G_50026B7682B69107-part9
ata-KINGSTON_SA400S37240G_50026B7682B69AAD
ata-KINGSTON_SA400S37240G_50026B7682B69AAD-part1
ata-KINGSTON_SA400S37240G_50026B7682B69AAD-part9
ata-Hitachi_HUA723030ALA640_MK0371YHJ2RLSA
ata-Hitachi_HUA723030ALA640_MK0371YHJ2RLSA-part1
ata-Hitachi_HUA723030ALA640_MK0371YHJ2RLSA-part9
ata-Hitachi_HUA723030ALA640_MK0371YHJ2AW1
ata-Hitachi_HUA723030ALA640_MK0371YHJ2AW1-part1
ata-Hitachi_HUA723030ALA640_MK0371YHJ2AW1-part9
wwn-0x50026b7682b69107
wwn-0x50026b7682b69107-part1
wwn-0x50026b7682b69107-part9
wwn-0x50026b7682b69aad
wwn-0x50026b7682b69aad-part1
wwn-0x50026b7682b69aad-part9
wwn-0x5000cca225dd4090
wwn-0x5000cca225dd4090-part1
wwn-0x5000cca225dd4090-part9
wwn-0x5000cca225dd6c9b
wwn-0x5000cca225dd6c9b-part1
wwn-0x5000cca225dd6c9b-part9

lsblk (before adding the drives):

NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 223.6G 0 disk
├─sda1 8:1 0 223.1G 0 part
└─sda9 8:9 0 512.6M 0 part
sdb 8:16 0 223.6G 0 disk
├─sdb1 8:17 0 223.1G 0 part
└─sdb9 8:25 0 512.6M 0 part

After:

lsblk

NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 223.6G 0 disk
├─sda1 8:1 0 223.1G 0 part
└─sda9 8:9 0 512.6M 0 part
sdb 8:16 0 223.6G 0 disk
├─sdb1 8:17 0 223.1G 0 part
└─sdb9 8:25 0 512.6M 0 part
sdc 8:32 0 2.7T 0 disk
├─sdc1 8:33 0 2.7T 0 part
└─sdc9 8:41 0 512.5M 0 part
sdd 8:48 0 2.7T 0 disk
├─sdd1 8:49 0 2.7T 0 part
└─sdd9 8:57 0 512.5M 0 part

Here is the output of zfs list after Array-0 is created:

zfs list

NAME USED AVAIL REFER MOUNTPOINT
Array-0 640K 5.27T 136K /Array-0
rpool 2.13G 428G 156K /rpool
rpool/ROOT 2.13G 428G 2.13G /rpool/ROOT

Commands used to create the pools:

rpool

zpool create -d -o feature@async_destroy=enabled -o feature@empty_bpobj=enabled -o feature@lz4_compress=enabled -o ashift=12 -O compression=lz4 rpool ata-KINGSTON_SA400S37240G_50026B7682B69107-part1 ata-KINGSTON_SA400S37240G_50026B7682B69AAD-part1

Array-0

zpool create -d -o feature@async_destroy=enabled -o feature@empty_bpobj=enabled -o feature@lz4_compress=enabled -o ashift=12 -O compression=lz4 Array-0 /dev/disk/by-id/ata-Hitachi_HUA723030ALA640_MK0371YHJ2RLSA-part1 /dev/disk/by-id/ata-Hitachi_HUA723030ALA640_MK0371YHJ2AW1D-part1

I also ran these same commands with the GPT labels in lieu of the IDs.
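For comparison, a GPT-partlabel variant would look something like the sketch below; the label names are purely illustrative and would have to be assigned first, e.g. with sgdisk -c:

# hypothetical labels, set beforehand with: sgdisk -c 1:array0-a /dev/sdc ; sgdisk -c 1:array0-b /dev/sdd
zpool create -o ashift=12 -O compression=lz4 Array-0 \
  /dev/disk/by-partlabel/array0-a \
  /dev/disk/by-partlabel/array0-b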

I installed GRUB to all disks:
grub2-install --boot-directory=/boot /dev/disk/by-id/ata-Hitachi_HUA723030ALA640_MK0371YHJ2AW1D
grub2-install --boot-directory=/boot /dev/disk/by-id/ata-Hitachi_HUA723030ALA640_MK0371YHJ2RLSA
grub2-install --boot-directory=/boot /dev/disk/by-id/ata-KINGSTON_SA400S37240G_50026B7682B69107
grub2-install --boot-directory=/boot /dev/disk/by-id/ata-KINGSTON_SA400S37240G_50026B7682B69AAD
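A quick sanity check after installing to every disk is to ask GRUB's own tooling whether it can still resolve /boot (grub2-probe ships with the EL7 grub2-tools package):

# should report "zfs" and one of the rpool member devices; a failure here
# usually means device.map or the embedded core image points at the wrong disk
grub2-probe --target=fs /boot
grub2-probe --target=device /boot
cat /boot/grub2/device.map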

Makr91 commented Jun 17, 2019

Solved: it looks like my grub2 device map file was incorrect. I adjusted it to:

/boot/grub2/device.map
(hd0) /dev/sda
(hd1) /dev/sdb

from:
(hd0) /dev/sdc
(hd1) /dev/sdc
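One way to keep that mapping from drifting when the Hitachi drives are attached or detached is to pin device.map to the persistent by-id names instead of sd* letters; device.map accepts any path the OS can resolve, so a sketch would be:

(hd0) /dev/disk/by-id/ata-KINGSTON_SA400S37240G_50026B7682B69107
(hd1) /dev/disk/by-id/ata-KINGSTON_SA400S37240G_50026B7682B69AAD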

Makr91 commented Jun 24, 2019

This issue still occurs. I found that if I remove the second array before boot and then add it after GRUB has loaded, the machine boots fully and the ZFS volume is mounted.
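If it saves anyone a step, the hot-added pool can be brought back after boot without recreating it, assuming the on-disk labels are still intact:

# scan the by-id links and import the existing pool by name
zpool import -d /dev/disk/by-id Array-0
zpool status Array-0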
