
Cannot install native zfs root file system with zfs dailies 0.6.3-22~2d9d57~jessie on Debian Jessie #3047

Closed
azeemism opened this issue Jan 28, 2015 · 23 comments
Labels
Type: Building Indicates an issue related to building binaries

Comments

@azeemism

Hi,

At first reboot, the screen freezes with the following message:

mount: mounting /sys on /root/sys failed: No such file or directory
mount: mounting /proc on /root/proc failed: No such file or directory
/init: line 335: can't open /root/dev/console: no such file
...
[ end Kernel panic - not syncing: Attempted to kill init! exitcode=0x00000200 ]

After a cold boot the following message appears:

sh: argument expected <- repeated 4 times

Command: mount -o zfsutil -t zfs - /root
Message: filesystem '-' cannot be mounted, unable to open the dataset
mount: mounting - on /root failed: No such file or directory
Error: 1

Manually mount the root filesystem on /root and then exit.

BusyBox v1.22.1 (Debian ...

/bin/sh: can't access tty; job control turned off
/#

Install procedure is as outlined in #3046, apart from correcting root=ZFS=, unmounting /boot, /sys, and /proc, exiting the chroot, exporting the pool, and rebooting. This may also be related to #3045 and perhaps even #3044.

I have had no issue with this install process under the non-dailies zfs version (v0.6.3-766_gfde0d6d) even with /boot within the zpool. I was hoping to use the dailies version with the latest fixes, but that unfortunately doesn't look hopeful now.

Thanks for all that you do,

Azeem

@FransUrbo
Contributor

Command: mount -o zfsutil -t zfs - /root
Message: filesystem '-' cannot be mounted, unable to open the dataset
mount: mounting - on /root failed: No such file or directory
Error: 1

This is an indication of what's going wrong: it doesn't know which root fs you've specified as /.
Install procedure outlined in #3046, apart from correcting root=ZFS=

So what did you set it to? Is it literally 'root=ZFS=' and nothing more? Then that's wrong; it needs a value (a dataset).
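To illustrate the point, here is a minimal sketch of how an initrd-style script might pull the dataset name out of the kernel command line; `parse_root` is a hypothetical helper for illustration, not the actual ZoL initrd code. A bare `root=ZFS=` yields an empty dataset name, which is consistent with the dash showing up in the failing mount command.

```shell
#!/bin/sh
# Hypothetical sketch: extract the root dataset from a kernel command
# line the way an initrd script might. parse_root is illustrative only.
parse_root() {
    # $1 is the full command line; rely on word splitting per argument.
    for arg in $1; do
        case "$arg" in
            root=ZFS=*) echo "${arg#root=ZFS=}" ;;
        esac
    done
}

parse_root "ro quiet root=ZFS=spool/ROOT/debian"   # prints spool/ROOT/debian
parse_root "ro quiet root=ZFS="                    # prints an empty line
```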

@azeemism
Author

root=ZFS=spool/ROOT/debian

Even tried setting the following parameters in /etc/default/grub
boot=zfs rpool=spool bootfs=spool/ROOT/debian

This issue can be reproduced as follows:

VirtualBox Installation of Debian Jessie (tested in RAID 10)
sda1 - UEFI
sd[a-d]2 - unformatted for zfs
sd[a-d]3 - swap
sd[a-d]4 - /boot
sd[a-d]5 - /

Install zfs dailies,
create pool
zpool create -m none spool mirror /dev/disk/by-id /dev/disk/by-id mirror /dev/disk/by-id /dev/disk/by-id
create datasets for spool/ROOT/debian
add /etc/udev/rules.d/70-zfs-grub-fix.rules
from: zfsonlinux/grub#5
zpool export spool
zpool import -o altroot=/sysroot spool
zfs set mountpoint=/ spool/ROOT/debian
rsync -axv / /sysroot/
rsync -axv /dev/ /sysroot/dev/
chroot /sysroot /bin/bash
mount -t proc proc /proc
mount -t sysfs sysfs /sys
mount /boot
nano /etc/fstab
comment out the line for /
optionally add a line: spool/ROOT/debian / zfs default,noatime 0 0
update-initramfs -c -k all
update-grub
fix grub.cfg so root=ZFS=spool/ROOT/debian
umount /boot
umount /sys
umount /proc
exit chroot
zpool export spool
reboot

@FransUrbo
Contributor

root=ZFS=spool/ROOT/debian

Even tried setting the following parameters in /etc/default/grub
boot=zfs rpool=spool bootfs=spool/ROOT/debian

Either one of these should work!
comment out the line for /
optionally add a line: spool/ROOT/debian / zfs default,noatime 0 0

The first is correct. And you shouldn't add the second; the zfs utils will take care of that.
fix grub.cfg so root=ZFS=spool/ROOT/debian

Could you, just for verification set

root=spool/ROOT/debian

? It shouldn't matter; both are/should be supported…

You say 'mount /boot'. Where is that, on a separate partition or on a ZFS dataset?

Previously, you've also said:

mount: mounting /sys on /root/sys failed: No such file or directory
mount: mounting /proc on /root/proc failed: No such file or directory
/init: line 335: can't open /root/dev/console: no such file

This is something outside ZFS… For some reason, it looks like your initrd is broken somehow…

I have no idea what could have gone wrong!

@azeemism
Author

My default in /etc/fstab is to comment out / and not add "spool/ROOT/debian / zfs default,noatime 0 0"; I was just trying different things to get it to work.


Just edited the boot options through the grub menu. None of the following worked:

linux /vmlinuz-3.16.0-4-amd64 root=spool/ROOT/debian ro quiet

linux /vmlinuz-3.16.0-4-amd64 root=spool/ROOT/debian ro quiet boot=zfs rpool=spool bootfs=spool/ROOT/debian

linux /vmlinuz-3.16.0-4-amd64 root=zfs=spool/ROOT/debian ro quiet

linux /vmlinuz-3.16.0-4-amd64 root=zfs=spool/ROOT/debian ro quiet boot=zfs rpool=spool bootfs=spool/ROOT/debian


Yes /boot is on a separate partition RAID 10 configuration:

sda1 - UEFI
sd[a-d]2 - unformatted for zfs
sd[a-d]3 - swap
sd[a-d]4 - /boot
sd[a-d]5 - /

With the non-dailies, I was able to move /, /boot, and swap under ZFS. However, I had to use mountpoint=legacy and set mount options through fstab to create datasets for /var, /tmp, and directories within /usr. Creating a dataset for /usr, even if mounted through fstab, broke the system--it would not boot. I was hoping the overlay option in the dailies install would allow me to mount everything through ZFS and also allow me to create a /usr dataset so I could set a quota and devices=off.


Also when I don't add "boot=zfs rpool=spool bootfs=spool/ROOT/debian" in the grub menu boot command line, the message is slightly different:

modprobe: module unknown not found in modules.dep
mount: mounting ZFS=spool/ROOT/debian on /root failed: No such file or directory
mount: mounting /dev on /root/dev failed: No such file or directory
Target filesystem doesn't have requested /sbin/init.
No init found. Try passing init= bootarg.
modprobe: module ehci-orion not found in modules.dep

BusyBox v1.22.1 (Debian...

/bin/sh: can't access tty; job control turned off
(initramfs)


It is possible that zfsutils did not install properly:
zfsonlinux/pkg-zfs#138

or

The pool imported incorrectly:
#3043

or

Grub is not recognizing zfs as a filesystem:
zfsonlinux/grub#19
zfsonlinux/grub#18

or something else... But whatever it is, for the moment building a native ZFS root file system with the dailies install is not working. :(

@FransUrbo
Contributor

Creating a dataset for /usr even if mounted through fstab broke the system--it would not boot.

You would need to set the ZFS_INITRD_ADDITIONAL_DATASETS variable in /etc/default/zfs (see the comments about the variable) and set mountpoint=/usr in the dataset (and then recreate the initrd).

And this is also true for any other dataset/filesystem besides /.
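As a sketch, the relevant fragment of /etc/default/zfs might look like the following. The variable name is taken from the comment above; the dataset names are placeholders for illustration, not values from this thread.

```shell
# Hypothetical /etc/default/zfs fragment -- dataset names are examples.
# Datasets listed here are mounted by the initrd in addition to the root fs.
ZFS_INITRD_ADDITIONAL_DATASETS="spool/ROOT/debian/usr spool/ROOT/debian/var"
```

As described above, each such dataset would also need its mountpoint set (e.g. `zfs set mountpoint=/usr spool/ROOT/debian/usr`) followed by recreating the initrd.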

modprobe: module unknown not found in modules.dep
mount: mounting ZFS=spool/ROOT/debian on /root failed: no such file or directory

I see two problems here - 'module unknown' (why unknown!?) and "ZFS=spool/ROOT/debian" (it should not contain the 'ZFS=' part).
mount: mounting /dev on /root/dev failed: No such file or directory

The rest of this is just because it didn't/couldn't mount the root fs/dataset so they are 'expected'.
It is possible that zfsutils did not install properly:
zfsonlinux/pkg-zfs#138

That should not stop the mounting etc in the initrd. It would probably give problems afterwards, when the system/systemd is booting/starting.
The pool imported incorrectly:
#3043

That has nothing to do with the mounting of the datasets, only with the import (which apparently is done). When you get the initramfs failure shell, what do you see if you run 'zpool status'? Does it look ok?
Grub is not recognizing zfs as a filesystem:
zfsonlinux/grub#19
zfsonlinux/grub#18

No, that would only affect the 'root=…' etc command line(s), but since you change that manually, it should be ok. Also, since you don't have /boot on ZFS, it shouldn't matter.
or something else…

Or something else indeed. I honestly have no idea!

@raptwa

raptwa commented Feb 5, 2015

This also happened to me, when installing the zfs dailies 0.6.3-25 yesterday. The install procedure is the same as described by azeemism.

(screenshot: img_3191)

This is what happens when manually importing the root pool in BusyBox.

Basic setup:

mypool/ROOT/debian with mountpoint=legacy
fstab: mypool/ROOT/debian / zfs default 0 0
gummiboot: boot=zfs rpool=mypool bootfs=rpool/ROOT/debian ro quiet

@FransUrbo
Contributor

This also happened to me, when installing the zfs dailies 0.6.3-25 yesterday. The install procedure is the same as described by azeemism.

The problem is the first command failing (the 'mount -o zfsutil …'). The error is the dash… It hasn't been able to figure out what the root fs is, so it's substituting it with a dash ('-')…

The pool SHOULD have been imported at that time (or maybe that's the problem: it can't figure out the pool and therefore not the root fs?)…

Instead of running 'zpool import' at that time, try running 'zpool status' and see what it says. Also give me the output of "cat /proc/cmdline".

@raptwa

raptwa commented Feb 5, 2015

(screenshot: fullsizerender)

after manually importing the pool:
(screenshot: img_3193)

@FransUrbo
Contributor

That's weird… The kernel command line looks just fine…

But to recover for now, something like this should work:

zpool import -d /dev/disk/by-id -N skynet
mount -o zfsutil -t zfs skynet/ROOT/debian /root

I honestly have no idea what the problem might be :(. Could you just run

dmesg | egrep 'SPL:|ZFS:'

to make sure that the module actually is 0.6.3-25? Doesn't need to be in the initrd...
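For illustration, the version can be pulled out of such a dmesg line with a small helper; `zfs_version` is hypothetical, and the sample line below is an assumed example in the typical "ZFS: Loaded module v…" format, not actual output from this system.

```shell
#!/bin/sh
# Hypothetical helper: extract the module version from a dmesg line.
# The sample input mimics the usual "ZFS: Loaded module v..." format.
zfs_version() {
    printf '%s\n' "$1" | sed -n 's/.*ZFS: Loaded module v\([^,]*\),.*/\1/p'
}

zfs_version "[    2.3] ZFS: Loaded module v0.6.3-25, ZFS pool version 5000"
# prints 0.6.3-25
```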

@raptwa

raptwa commented Feb 6, 2015

ZFS -> 0.6.3-25
SPL -> 0.6.3-16

(screenshot: fullsizerender)

@b333z
Contributor

b333z commented Feb 7, 2015

(You're missing the 'zfs' after the -t in your mount command.)

@raptwa

raptwa commented Feb 7, 2015

Yeah, the 'zfs' was missing here, but it doesn't change the situation on reboot. By manually mounting the pool, as described here, I can boot into the system, but on reboot I have to do it again.
I searched for the zpool.cache file in /etc/zfs but it wasn't there. Recreated it, rebooted, but no change...

@raptwa

raptwa commented Feb 7, 2015

What works (boot without user interaction):
zpool set bootfs=skynet/ROOT/debian skynet
zfs set mountpoint=/ skynet/ROOT/debian

So the previous setting with bootfs (-), mountpoint=legacy, and using the fstab doesn't work for me. It looks like the boot parameters "boot=zfs rpool=skynet bootfs=skynet/ROOT/debian ro quiet" are ignored.

@FransUrbo
Contributor

What works (boot without user interaction):
zpool set bootfs=skynet/ROOT/debian skynet
zfs set mountpoint=/ skynet/ROOT/debian

Oh… I thought everyone did that! It's basically required; I think it's in all the documentation...
So the previous setting with bootfs (-), mountpoint=legacy and using the fstab doesn't work for me. It looks like the boot parameter "boot=zfs rpool=skynet bootfs=skynet/ROOT/debian ro quiet" are ignored.

It shouldn't be, but I have to test this (not setting bootfs and mountpoint)...

@raptwa

raptwa commented Feb 7, 2015

I thought the zpool option 'bootfs' is overridden by the boot parameter. That should be true when multiple OSes are present on one pool. Or do you always need at least one valid setting for the zpool option bootfs, which can then be overridden by a different boot parameter?
Thanks for the great work and support!!!

@FransUrbo
Contributor

I thought the zpool option 'bootfs' is overridden by the boot parameter.

As I said, it's supposed to… I have to dig through this a little more, but I'm quite sure it should be ok (technically it SHOULD be ok), but I might not have tested it properly/at all...
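The intended precedence can be sketched as follows; `pick_bootfs` is a hypothetical illustration of the behavior described here, not the real initrd logic. A bootfs= kernel parameter, when present, wins over the pool's bootfs property.

```shell
#!/bin/sh
# Hypothetical precedence sketch: a bootfs= kernel parameter should
# override the pool's bootfs property; fall back to the property when
# the parameter is absent. pick_bootfs is illustrative only.
pick_bootfs() {
    cmdline_bootfs="$1"
    pool_bootfs="$2"
    if [ -n "$cmdline_bootfs" ]; then
        echo "$cmdline_bootfs"
    else
        echo "$pool_bootfs"
    fi
}

pick_bootfs "" "skynet/ROOT/debian"                    # prints skynet/ROOT/debian
pick_bootfs "skynet/ROOT/other" "skynet/ROOT/debian"   # prints skynet/ROOT/other
```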

@FransUrbo
Contributor

Ok, I think I found something... I rewrote the initrds (the idea was to make them more modular and not so 'spaghetti like' in 0.6.3-25c944bewheezy). But that might have introduced this issue where bootfs and mountpoint need to be set...

The reason is that although I saved all the values I needed, they weren't used later (they were overwritten by faulty detection code). I'm going to do some testing of this today (and if someone wants to help, I'll be on the ZoL IRC channel). Should have something later.

@FransUrbo
Contributor

New versions for both the wheezy and jessie dailies (0.6.3-2933b4dewheezy and 0.6.3-2633b4dejessie) have been uploaded with an almost complete rewrite of the initrd script. Hopefully this fixes the issue. Please test, and if not, let me know.

@raptwa

raptwa commented Feb 8, 2015

OK, after updating, reboot is not working again.

~# zpool get bootfs skynet
NAME    PROPERTY  VALUE               SOURCE
skynet  bootfs    skynet/ROOT/debian  local

~# zfs list -r skynet
NAME                 USED  AVAIL  REFER  MOUNTPOINT
skynet              2,41G  24,9G    19K  none
skynet/ROOT         2,41G  24,9G    19K  none
skynet/ROOT/debian  2,41G  24,9G  2,12G  /

(screenshot: img_3222)

After importing and mounting manually and exiting twice, the system boots. But now, even with the bootfs property set, it doesn't boot without user interaction.

@FransUrbo
Contributor

Have a look at /etc/initramfs-tools. Are there any files or links named zfs there? Did you update the initrd before trying/rebooting?

Please give me the output of the following command line:

ls -l /etc/initramfs-tools/*/zfs /usr/share/initramfs-tools/*/zfs

From the looks of it, it's still using the old initrd scripts...

@raptwa

raptwa commented Feb 14, 2015

The initrd was updated before reboot.

root@skynet:/etc/initramfs-tools# ls -l /etc/initramfs-tools/*/zfs
ls: cannot access /etc/initramfs-tools/*/zfs: No such file or directory
root@skynet:/etc/initramfs-tools# ls -l /usr/share/initramfs-tools/*/zfs
-rw-r--r-- 1 root root    61 Feb  8 19:24 /usr/share/initramfs-tools/conf-hooks.d/zfs
-rwxr-xr-x 1 root root  2969 Feb  8 19:26 /usr/share/initramfs-tools/hooks/zfs
-rw-r--r-- 1 root root 17541 Feb  8 19:26 /usr/share/initramfs-tools/scripts/zfs

@FransUrbo
Contributor

Ok, then we're back to me having no clue :(

@gmelikov
Member

Closing as stale.

If it's still relevant, feel free to reopen.
