HOWTO install Ubuntu 14.04/15.04 to a Native ZFS Root Filesystem

Note

  • The zfs repositories have been upgraded to 0.6.4, which has known incompatibilities with upstream grub. Do NOT run zpool upgrade rpool, as it could leave your system unbootable
  • if you previously used the wheezy ZOL repo, see Upgrading

These instructions were adapted from the original HOWTO with the following differences:

  • Focus on Ubuntu 14.04 (LTS) and later
  • Use an installed Ubuntu environment as the base instead of a Live CD
    • this can be used later for troubleshooting and rescue purposes
  • Use Ubuntu's bundled grub. For both root and /boot, it supports:
    • latest pool version
    • compression
    • stripe, mirror, and raidzX

System Requirements

  • 64-bit Ubuntu 14.04 or 14.10 installed
  • 8GB free disk/partition available
  • 4GB memory recommended

Tested Versions

  • Ubuntu 14.04 (LTS), 14.10
  • grub-pc 2.02~beta2-9ubuntu1 or 2.02~beta2-15
  • spl-dkms 0.6.4-2~utopic
  • zfs-dkms 0.6.4-3~utopic
  • zfs-initramfs 0.6.4-3~utopic

Step 1: Install Ubuntu and zfs

All commands must be run as root.

1.1 Install Ubuntu on hard disk/USB disk, leaving free space for zfs root

The Ubuntu installation process is not covered here; it should be pretty straightforward. Just remember to leave a free partition for zfs. For a single-vdev pool layout, my partition table looks like this:

# fdisk -l

Disk /dev/sda: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders, total 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0005ab1f

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1            2048      999423      498688   82  Linux swap / Solaris
/dev/sda2   *      999424     5095423     2048000   83  Linux
/dev/sda3         5095424    20971519     7938048   83  Linux

sda1 is swap, sda2 is Ubuntu on ext4 (a minimal installation, so it uses less than 2GB), while sda3 will be used for the zfs root.

If you want a zfs-only setup without any other partitions, the easiest way is to install ubuntu-on-ext4 on a USB disk first, boot from it, and then manually create the partitions on the hard disk that zfs will use later. Remember to leave some space for grub:

  • If you use MBR, leave extra space before the first partition (parted should already do this automatically, creating the first partition starting at sector 2048).
  • If you use GPT, create a small partition at the end for the bios_grub partition (see the raidz2 example later in this howto). A quick check follows below.
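
To double-check that the space for grub is actually there, you can print the partition table in sectors (a quick sanity check, not from the original steps; /dev/sda is the example disk from above). On an MBR disk the first partition should start at sector 2048; on a GPT disk you should see the small bios_grub partition at the end.

# parted /dev/sda unit s print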

1.2 Install zfs

Recommended: Due to the way zfs uses memory, I recommend you limit the amount of memory it can use. This example limits the zfs ARC size to 1 GB, which in practice should keep overall memory used by zfs between 1 and 2 GB:

# cat /etc/modprobe.d/zfs-arc-max.conf 
options zfs zfs_arc_max=1073741824
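
A one-liner to create that file, plus a check that the limit actually took effect once the module is loaded in step 1.3 (the sysfs path is standard for ZoL 0.6.x; the value is the 1 GB from the example above):

# echo "options zfs zfs_arc_max=1073741824" > /etc/modprobe.d/zfs-arc-max.conf
# cat /sys/module/zfs/parameters/zfs_arc_max
1073741824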

Install zfs from the ppa:

# apt-get install python-software-properties
# add-apt-repository ppa:zfs-native/stable
# apt-get update
# apt-get install ubuntu-zfs
# apt-get install zfs-initramfs

1.3 Load and check the presence of ZFS module

# modprobe zfs
# dmesg | egrep "SPL|ZFS"
[ 1570.790748] SPL: Loaded module v0.6.4-2~trusty
[ 1570.804042] ZFS: Loaded module v0.6.4-3~trusty, ZFS pool version 5000, ZFS filesystem version 5

Step 2: Add udev rule

We need to add a new udev rule because upstream grub (grub-probe, to be exact) fails to resolve device names when /dev/disk/by-id/* links are used for vdevs. This workaround may not be needed in the future, but it does no harm to leave it active. It adds the links that exist in /dev/disk/by-id to /dev, but only for partitions used by zfs. Create /etc/udev/rules.d/90-zfs-vdev.rules with this content:

# Create by-id links in /dev as well for zfs vdev. Needed by grub
# Force ata_id for USB disks
KERNEL=="sd*[!0-9]", SUBSYSTEMS=="usb", IMPORT{program}="ata_id --export $devnode"
# Force ata_id when ID_VENDOR=ATA
KERNEL=="sd*[!0-9]", ENV{ID_VENDOR}=="ATA", IMPORT{program}="ata_id --export $devnode"
# Add links for zfs_member only
KERNEL=="sd*[0-9]", IMPORT{parent}=="ID_*", ENV{ID_FS_TYPE}=="zfs_member", SYMLINK+="$env{ID_BUS}-$env{ID_SERIAL}-part%n"

If you follow this howto, the above is the only udev rule you'll need. However, if you modify your setup to use L2ARC or a whole-disk setup (not currently recommended, and not covered in this howto), use one of these udev rules instead:

  • If you use L2ARC, the cache device won't be identified as zfs_member. Use this udev rule instead to add links for all disks and partitions.
# Create by-id links in /dev as well for zfs vdev. Needed by grub
# Force ata_id for USB disks
KERNEL=="sd*[!0-9]", SUBSYSTEMS=="usb", IMPORT{program}="ata_id --export $devnode"
# Force ata_id when ID_VENDOR=ATA
KERNEL=="sd*[!0-9]", ENV{ID_VENDOR}=="ATA", IMPORT{program}="ata_id --export $devnode"
# Create links for all disk and partitions. Needed for L2ARC (not covered in this howto)
KERNEL=="sd*[!0-9]", IMPORT{parent}=="ID_*", SYMLINK+="$env{ID_BUS}-$env{ID_SERIAL}"
KERNEL=="sd*[0-9]", IMPORT{parent}=="ID_*", SYMLINK+="$env{ID_BUS}-$env{ID_SERIAL}-part%n"
  • If you use a whole-disk pool (not covered in this howto), zfs will list the disk, but grub will need the partition. So we need to compensate by creating a "fake" disk link pointing to the partition instead.
# Create by-id links in /dev as well for zfs vdev. Needed by grub
# Force ata_id for USB disks
KERNEL=="sd*[!0-9]", SUBSYSTEMS=="usb", IMPORT{program}="ata_id --export $devnode"
# Force ata_id when ID_VENDOR=ATA
KERNEL=="sd*[!0-9]", ENV{ID_VENDOR}=="ATA", IMPORT{program}="ata_id --export $devnode"
# Create links for zfs member partition to the disk. Needed for whole-disk setup
KERNEL=="sd*[0-9]", IMPORT{parent}=="ID_*", ENV{ID_FS_TYPE}=="zfs_member", SYMLINK+="$env{ID_BUS}-$env{ID_SERIAL}"

Step 3: Create the root pool

3.1 Create the root pool, enabling lz4 compression and ashift=12 if needed

You should use the /dev/disk/by-id links to create the pool. Alternatively, you can create it using /dev/sd*, export it, and import it again using -d /dev/disk/by-id.

Run udevadm trigger afterwards to make sure the new udev rule has been applied.

3.1.1 Example for pool with single vdev

Create the zpool with only grub-supported features enabled:

# zpool create -d -o feature@async_destroy=enabled -o feature@empty_bpobj=enabled -o feature@lz4_compress=enabled -o ashift=12 -O compression=lz4 rpool /dev/sda3
# zpool export rpool
# zpool import -d /dev/disk/by-id rpool
# zpool status -v rpool
  pool: rpool
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(5) for details.
  scan: none requested
config:

        NAME                                           STATE     READ WRITE CKSUM
        rpool                                          ONLINE       0     0     0
          ata-VBOX_HARDDISK_VB82d42f66-76355b71-part3  ONLINE       0     0     0

errors: No known data errors

# udevadm trigger
# ls -la /dev/*part* | grep sda
lrwxrwxrwx 1 root root 4 Aug  8 13:25 /dev/ata-VBOX_HARDDISK_VB82d42f66-76355b71-part3 -> sda3

3.1.2 Example for pool with raidz2

3.1.2.1 Create the partition table

In this example, 5 disks (/dev/sd[b-f]) will be used by zfs. They will not be used for anything else (e.g. swap, another OS, etc). If you have an existing disk/partition setup, go straight to 3.1.2.2.

We need to create the partition table manually since grub-probe does not support whole-disk pools. On each disk, the first partition will be used by zfs. The small second partition at the end is necessary to prevent zfs from incorrectly detecting the whole disk as a vdev. On a GPT setup, it will also be used by grub. Set up your disks so they look like this:

# parted /dev/sdb p
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdb: 10.7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  10.7GB  10.7GB  zfs          zfs
 2      10.7GB  10.7GB  8372kB               grub  bios_grub

If you want to use a GPT label, and your zfs disks are sdb-sdf, this script creates the partition layout above on all the disks. Grub will be installed on the second partition, which carries the bios_grub flag.

# for d in /dev/sd[b-f];do parted -- $d mklabel gpt Y mkpart zfs zfs 1MiB -8MiB mkpart grub zfs -8MiB -0MiB toggle 2 bios_grub;done

If you want to use an MBR label, use this script instead. Grub will then be installed in the free space before the first partition.

# for d in /dev/sd[b-f];do parted --script $d mklabel msdos Y mkpart primary ext3 1MiB -8MiB mkpart primary ext3 -8MiB -0MiB;done

Again, you can always create the partitions for each disk manually instead of using the scripts above.

3.1.2.2 Create the raidz2 pool

From the example above, the vdevs are /dev/sd[b-f]1. Adjust as appropriate if you created your own partitions. The zpool must be created with only grub-supported features enabled:

# zpool create -d -o feature@async_destroy=enabled -o feature@empty_bpobj=enabled -o feature@lz4_compress=enabled -o ashift=12 -O compression=lz4 rpool raidz2 /dev/sd[b-f]1
# zpool export rpool
# zpool import -d /dev/disk/by-id rpool
# zpool status -v rpool
  pool: rpool
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(5) for details.
  scan: none requested
config:

    NAME                                             STATE     READ WRITE CKSUM
    rpool                                            ONLINE       0     0     0
      raidz2-0                                       ONLINE       0     0     0
        ata-VBOX_HARDDISK_VB34e03168-af59f84b-part1  ONLINE       0     0     0
        ata-VBOX_HARDDISK_VB0a394d20-76c87e6a-part1  ONLINE       0     0     0
        ata-VBOX_HARDDISK_VBe51e2eb6-75e186e2-part1  ONLINE       0     0     0
        ata-VBOX_HARDDISK_VBfbf70a2a-d7002bce-part1  ONLINE       0     0     0
        ata-VBOX_HARDDISK_VB9bb2b6fd-2644ae68-part1  ONLINE       0     0     0

# udevadm trigger
# ls -la /dev/*part* | grep sd[b-f]
lrwxrwxrwx 1 root root 4 Aug  8 13:25 /dev/ata-VBOX_HARDDISK_VB0a394d20-76c87e6a-part1 -> sdc1
lrwxrwxrwx 1 root root 4 Aug  8 13:25 /dev/ata-VBOX_HARDDISK_VB34e03168-af59f84b-part1 -> sdb1
lrwxrwxrwx 1 root root 4 Aug  8 13:25 /dev/ata-VBOX_HARDDISK_VB9bb2b6fd-2644ae68-part1 -> sdf1
lrwxrwxrwx 1 root root 4 Aug  8 13:25 /dev/ata-VBOX_HARDDISK_VBe51e2eb6-75e186e2-part1 -> sdd1
lrwxrwxrwx 1 root root 4 Aug  8 13:25 /dev/ata-VBOX_HARDDISK_VBfbf70a2a-d7002bce-part1 -> sde1

3.2 Create the root dataset and copy the original root

# zfs create rpool/ROOT
# zfs create rpool/ROOT/ubuntu
# mkdir /mnt/tmp
# mount --bind / /mnt/tmp
# rsync -avPX /mnt/tmp/. /rpool/ROOT/ubuntu/.
# umount /mnt/tmp
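
Before editing anything, a quick sanity check that the copy actually landed on the dataset doesn't hurt; something like:

# zfs list rpool/ROOT/ubuntu
# ls /rpool/ROOT/ubuntu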

3.3 Edit the new fstab, commenting out the old root entry

If you still have swap on the same partition, you can leave the swap entry enabled. If you use a no-swap setup, use an empty fstab (see the sketch after the example below).

# cat /rpool/ROOT/ubuntu/etc/fstab
#/dev/sda2 /               ext4    noatime,errors=remount-ro 0       1
/dev/sda1 none            swap    sw              0       0
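
For a no-swap setup, the simplest way to end up with the empty fstab mentioned above is just to truncate the copy (a minimal sketch):

# : > /rpool/ROOT/ubuntu/etc/fstab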

3.4 Edit the new grub config in /rpool/ROOT/ubuntu/etc/default/grub

You might need to comment out GRUB_HIDDEN_TIMEOUT so that you get the grub menu during boot. This is needed to be able to select other boot entries.

#GRUB_HIDDEN_TIMEOUT=0

Next, add the zfs boot parameters to GRUB_CMDLINE_LINUX:

GRUB_CMDLINE_LINUX="boot=zfs rpool=rpool bootfs=rpool/ROOT/ubuntu"
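
If you prefer to make both edits non-interactively, a pair of sed one-liners along these lines should work (a sketch; check the result afterwards, since your GRUB_CMDLINE_LINUX may already carry other options you want to keep):

# sed -i 's/^GRUB_HIDDEN_TIMEOUT=/#GRUB_HIDDEN_TIMEOUT=/' /rpool/ROOT/ubuntu/etc/default/grub
# sed -i 's|^GRUB_CMDLINE_LINUX=.*|GRUB_CMDLINE_LINUX="boot=zfs rpool=rpool bootfs=rpool/ROOT/ubuntu"|' /rpool/ROOT/ubuntu/etc/default/grub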

3.5 Generate the new grub config, and verify it has the correct root entry

# for d in proc sys dev;do mount --bind /$d /rpool/ROOT/ubuntu/$d;done
# chroot /rpool/ROOT/ubuntu/
# update-grub
# grep ROOT /boot/grub/grub.cfg
	font="/ROOT/ubuntu@/usr/share/grub/unicode.pf2"
		linux   /ROOT/ubuntu@/boot/vmlinuz-3.13.0-32-generic root=ZFS=rpool/ROOT/ubuntu ro boot=zfs rpool=rpool bootfs=rpool/ROOT/ubuntu splash quiet $vt_handoff
		initrd  /ROOT/ubuntu@/boot/initrd.img-3.13.0-32-generic
				linux   /ROOT/ubuntu@/boot/vmlinuz-3.13.0-32-generic root=ZFS=rpool/ROOT/ubuntu ro boot=zfs rpool=rpool bootfs=rpool/ROOT/ubuntu splash quiet $vt_handoff
				initrd  /ROOT/ubuntu@/boot/initrd.img-3.13.0-32-generic
				linux   /ROOT/ubuntu@/boot/vmlinuz-3.13.0-32-generic root=ZFS=rpool/ROOT/ubuntu ro recovery nomodeset boot=zfs rpool=rpool bootfs=rpool/ROOT/ubuntu
				initrd  /ROOT/ubuntu@/boot/initrd.img-3.13.0-32-generic
# exit
# for d in proc sys dev;do umount /rpool/ROOT/ubuntu/$d;done

3.6 (Optional) Test boot from the existing grub installation

This is to make sure that your root fs, initrd, and grub config file are already set up correctly. If something goes wrong at this stage, you will still boot Ubuntu on ext4 by default.

  • Reboot
  • Press c on the grub menu for a command line
  • Load the gpt/mbr grub module. This is only necessary if your current partition label is of a different type from your pool's (e.g. your ext4 is on an MBR disk while your pool is on a GPT disk)
  • Load the zfs module
  • Load the grub config file on the zfs root.
    • Note that:

      • The pool name does not matter in this case; only the vdev name and dataset name matter
      • You can use Tab for file name completion if you don't remember the partition numbers or file names
    • Example for a single-vdev pool, MBR, zfs on /dev/sda3

grub> insmod part_msdos
grub> insmod zfs
grub> configfile (hd0,msdos3)/ROOT/ubuntu@/boot/grub/grub.cfg

    • Example for a raidz2 pool, GPT, with /dev/sdb1 as one of the vdevs

grub> insmod part_gpt
grub> insmod zfs
grub> configfile (hd1,gpt1)/ROOT/ubuntu@/boot/grub/grub.cfg

  • It will display the new grub menu; press Enter to boot the first entry
  • See Step 4: Verify you're on zfs root to confirm that it actually works
  • Reboot, then proceed to Step 3.7.

3.7 Install the new grub

3.7.1 Example for pool with single vdev

Following the previous single-vdev example, /dev/sda3 is the vdev, and grub will be installed on /dev/sda.

# grub-install --boot-directory=/rpool/ROOT/ubuntu/boot /dev/sda

If you correctly followed step 3.5, you'll notice that at this point you are outside the chroot, back in your ext4 installation. The parameter --boot-directory=/rpool/ROOT/ubuntu/boot is mandatory, as grub uses it to determine $prefix during boot. Omitting --boot-directory, or specifying --boot-directory=/boot, will make grub use your current ext4 installation for its prefix, which is not what you want.

There are other methods to install grub, including doing it from inside the chroot. Those methods are not covered in this howto.
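
Before rebooting, you can also ask grub's own probe what filesystem it sees behind the new boot directory. This check is not part of the original steps, but if the udev rule from Step 2 is working, it should report zfs:

# grub-probe --target=fs /rpool/ROOT/ubuntu/boot
zfs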

3.7.2 Example for pool with raidz2

Following the previous raidz2 example, /dev/sd[b-f]1 are the vdevs, and grub will be installed on all disks (/dev/sd[b-f]).

# for d in /dev/sd[b-f];do grub-install --boot-directory=/rpool/ROOT/ubuntu/boot $d;done

3.8 Remove zpool.cache from the ext4 root

The presence of zpool.cache can speed up pool import, but it can also cause problems when the pool layout has changed.

# rm /etc/zfs/zpool.cache
# update-initramfs -u

3.9 Reboot

# init 6

3.10 (Optional) Choose which disk to boot

If you previously used a USB disk for the ext4 installation, or followed the raidz2 example, at this point you need to configure the BIOS to boot from the zfs disk. On some systems you can choose which disk to boot during the boot process (e.g. by pressing F12 in virtualbox).

Step 4: Verify you're on zfs root

Make sure the loaded kernel is from zfs, and the root parameter also points to zfs:

# cat /proc/cmdline 
BOOT_IMAGE=/ROOT/ubuntu@/boot/vmlinuz-3.13.0-32-generic root=ZFS=rpool/ROOT/ubuntu ro boot=zfs rpool=rpool bootfs=rpool/ROOT/ubuntu splash quiet vt.handoff=7

Make sure the current root is on zfs:

# df -h /
Filesystem         Size  Used Avail Use% Mounted on
rpool/ROOT/ubuntu  7.5G  823M  6.7G  11% /

(Optional) Check how effective the current compression is:

# zfs get compression,compressratio,used,logicalused rpool/ROOT/ubuntu
NAME               PROPERTY       VALUE     SOURCE
rpool/ROOT/ubuntu  compression    lz4       inherited from rpool
rpool/ROOT/ubuntu  compressratio  1.91x     -
rpool/ROOT/ubuntu  used           824M      -
rpool/ROOT/ubuntu  logicalused    1.17G     -

Step 5: Create grub entry for the old ext4 root

The previous update-grub run inside the chroot does not create a grub entry for the ext4 environment, so run update-grub again after you boot into the zfs root:

# update-grub
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-3.13.0-32-generic
Found initrd image: /boot/initrd.img-3.13.0-32-generic
Found Ubuntu 14.04.1 LTS (14.04) on /dev/sda2
done

At this point you should also remove zpool.cache from the ubuntu root. As before, the presence of zpool.cache can speed up pool import, but it can also cause problems when the pool layout has changed.

# rm /etc/zfs/zpool.cache
# update-initramfs -u

(Recommended) Step 6: Create snapshot

The snapshot can be used as a rescue environment.

Note: the boot-from-snapshot feature is currently unavailable in the Ubuntu ppa. I'm working on porting this capability from ZOL's debian zfs-initramfs.

# apt-get clean
# zfs snapshot rpool/ROOT/ubuntu@zfsroot
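
Should you later want to throw away all changes and return the dataset itself to this state (as opposed to booting from the snapshot, covered in Troubleshooting), the standard command is zfs rollback. Use with care; everything written after the snapshot is discarded:

# zfs rollback rpool/ROOT/ubuntu@zfsroot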

(Recommended) Step 7: Create chooser menu for multiple zfs root

This step is useful if you have multiple distro versions on the same server, or if you want to easily access a recovery snapshot (more info in the Troubleshooting section). For example, consider this setup:

# zfs list -r -t all
NAME                        USED  AVAIL  REFER  MOUNTPOINT
rpool                      1.57G  18.0G   152K  /rpool
rpool/ROOT                 1.56G  18.0G   152K  /rpool/ROOT
rpool/ROOT/ubuntu           742M  18.0G   740M  /rpool/ROOT/ubuntu
rpool/ROOT/ubuntu@zfsroot  2.04M      -   740M  -
rpool/ROOT/utopic           853M  18.0G   873M  /rpool/ROOT/utopic

grub on each zfs dataset will only update its own list of kernels. To be able to switch between distro versions easily, you need to create a separate grub chooser. This example creates such a menu on /rpool/grub.

  • Create the dataset

# zfs create rpool/grub

  • Create /rpool/grub/grub.cfg with this content

set timeout=5

menuentry 'Ubuntu Trusty boot menu' {
        configfile /ROOT/ubuntu/@/boot/grub/grub.cfg
}

menuentry 'Ubuntu Trusty (snapshot)' {
        linux /ROOT/ubuntu/@zfsroot/boot/vmlinuz-3.13.0-32-generic root=ZFS=rpool/ROOT/ubuntu@zfsroot ro boot=zfs
        initrd /ROOT/ubuntu/@zfsroot/boot/initrd.img-3.13.0-32-generic
}

menuentry 'Ubuntu Utopic (dev) boot menu' {
        configfile /ROOT/utopic/@/boot/grub/grub.cfg
}

  • Tell grub to use the new menu

# grub-install --boot-directory=/rpool /dev/sda
  • Reboot, and you will get this menu

               GNU GRUB  version 2.02~beta2-9ubuntu1

 ┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
 ┃*Ubuntu Trusty boot menu                                        ┃
 ┃ Ubuntu Trusty (snapshot)                                       ┃
 ┃ Ubuntu Utopic (dev) boot menu                                  ┃
 ┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛

  • When you select an entry, it will display the "normal" grub menu of that particular dataset

               GNU GRUB  version 2.02~beta2-9ubuntu1

 ┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
 ┃*Ubuntu                                                         ┃
 ┃ Advanced options for Ubuntu                                    ┃
 ┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛

(Optional) Step 8: Update grub to allow auto-generated zfs boot menu

See grub.cfg, zfsroot.lua, and Grub Lua ppa

By default, during package installation from the ppa, grub will automatically install to your boot drive, using your current /boot/grub directory. You'll need to re-run grub-install --boot-directory=/rpool /dev/sda to get it to use the /rpool/grub directory again.

Resulting boot menu:

[Image: Grub with auto-generated zfs boot menu]

Troubleshooting

Boot from snapshot (currently only applicable for users of 0.6.3 with ZOL's repo)

This should be your primary rescue environment. The prerequisites are:

  • use ZOL's repo
  • follow Step 6 to have a snapshot of your working setup

To boot from a snapshot, there are several methods:

  • If you followed Steps 7 and 8, you can select one of the Boot linux ... on ...@... boot entries.

  • If you followed Step 7 only, you can select the boot entry with the snapshot.

  • If you didn't follow those steps, you can edit a "normal" grub entry manually:

    • On the grub boot menu, press e to edit

    • Add the snapshot name to the kernel, initrd, and root parts

    • Remove extra parameters (e.g. quiet) if necessary. For example, using the snapshot we created earlier (@zfsroot), the edited entry would look like this:

linux /ROOT/ubuntu/@zfsroot/boot/vmlinuz-3.13.0-32-generic root=ZFS=rpool/ROOT/ubuntu@zfsroot ro boot=zfs
initrd /ROOT/ubuntu/@zfsroot/boot/initrd.img-3.13.0-32-generic

    • Press F10 to boot.

The zfs-initramfs script will automatically clone that snapshot and boot from it, replacing the @ in the snapshot name with _:

# df -h /
Filesystem                  Size  Used Avail Use% Mounted on
rpool/ROOT/ubuntu_zfsroot  7.5G  823M  6.7G  11% /

Note that if the cloned dataset (in this case rpool/ROOT/ubuntu_zfsroot) already exists, the script will destroy and recreate it.
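
If you decide to keep such a rescue clone permanently, you can detach it from its origin snapshot with zfs promote (a standard zfs command, not part of the boot-from-snapshot flow itself):

# zfs promote rpool/ROOT/ubuntu_zfsroot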

Boot from Ubuntu on ext4

Use this if you have a broken kernel/initrd (e.g. from a bad upgrade) and don't have a good snapshot. On the grub boot menu, select the appropriate entry from the list:

			   GNU GRUB  version 2.02~beta2-9ubuntu1

 ┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
 ┃ Ubuntu                                                         ┃ 
 ┃ Advanced options for Ubuntu                                    ┃
 ┃*Ubuntu 14.04.1 LTS (14.04) (on /dev/sda2)                      ┃
 ┃ Advanced options for Ubuntu 14.04.1 LTS (14.04) (on /dev/sda2) ┃
 ┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛

Boot from Live CD to Ubuntu on ext4

This step should only be necessary as a last resort, if you can't get to the grub menu at all. Use any Live CD with grub2 (e.g. Super Grub2 Disk, a 12MB ISO image):

  • Boot from CD
  • Press c for a command line
  • Type the necessary kernel and initrd parameters manually
  • Note that you can use "Tab" for file name completion if you don't remember the partition numbers or file names

For example, if your ext4 is on /dev/sda2:

grub> linux (hd0,msdos2)/boot/vmlinuz-3.13.0-32-generic root=/dev/sda2
grub> initrd (hd0,msdos2)/boot/initrd.img-3.13.0-32-generic
grub> boot

From the booted environment you can then check and fix grub on your disk (e.g. recheck steps 3.5 & 3.6).

Grub doesn't display boot menu, or it shows error "unknown filesystem"

Most likely grub can't read the zfs pool. One possible cause is that you previously offlined some vdevs (zpool offline ...). Possible fixes:

  • Remove/disconnect the offlined disks
  • Using the Live CD method above, boot from your ext4 installation, and either
    • online the disks (see the example below), or
    • remove the offline disks and add them again
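
For the online option, the command looks like this, using one of the raidz2 example vdevs from step 3.1.2 (substitute your own by-id link):

# zpool online rpool ata-VBOX_HARDDISK_VB34e03168-af59f84b-part1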

Upgrading

If you previously ran 0.6.3 from the ZOL debian repo, use these steps to upgrade to 0.6.4 from the ubuntu ppa.

Disable the old ZOL repository, and enable the ppa:

# rm /etc/apt/sources.list.d/zol-wheezy.list
# add-apt-repository ppa:zfs-native/stable
# apt-get update

Adjust /etc/default/grub, adding the rpool and bootfs parameters:

GRUB_CMDLINE_LINUX="boot=zfs rpool=rpool bootfs=rpool/ROOT/ubuntu"

Install the new versions:

# apt-get install ubuntu-zfs zfs-dkms zfsutils spl spl-dkms libnvpair1 libuutil1 libzfs2 libzpool2
# apt-get install zfs-initramfs 

Check that all modules compiled successfully:

# dkms status
spl, 0.6.4, 3.16.0-33-generic, x86_64: installed
zfs, 0.6.4, 3.16.0-33-generic, x86_64: installed
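
To confirm that the newly built module is the one that will be loaded, you can also check modinfo; the version should match the dkms output above:

# modinfo zfs | grep ^version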