-
System information
Adding a pool newpool/newfs prevents booting.
Steps to recreate:
Attempts at remediation:
Nothing shows up in syslog indicating an error.
-
Followed the same guide, https://github.com/zfsonlinux/zfs/wiki/Ubuntu-18.04-Root-on-ZFS, and see the same problem on two systems running Ubuntu 18.04. Spent hours trying to figure out the problem, no success. Pools are not imported after a reboot.
-
/dev/sdX naming is not persistent across reboots and is not recommended for production pools. It may be causing conflicts that prevent pool import. You can manually import using unique by-id names: zpool import poolname -d /dev/disk/by-id/
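For example, assuming a placeholder pool named tank, the re-import would look roughly like this:

    # export the pool, then re-import it using stable by-id device names
    zpool export tank
    zpool import -d /dev/disk/by-id tank

    # confirm which device paths the pool now records
    zpool status -P tank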
-
Tried that several times, same behavior. Also tried removing the cache file; it was recreated and filled by the import command, but the pool is still not imported after reboot.
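For reference, the FAQ's way to regenerate the cache for an already-imported pool (tank is a placeholder name) is roughly:

    # re-point the pool at the default cache file; ZFS rewrites the file immediately
    sudo zpool set cachefile=/etc/zfs/zpool.cache tank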
-
Also tried to change settings in /etc/default/zfs to bypass /etc/zfs/zpool.cache, but it had no effect.
-
FWIW, I'm seeing similar behavior on a Debian 10 system. I was moving from one pool to another (some history in #9107), and after getting everything in order I've realised the old pool isn't being imported. I'm still using a cache file, but the old pool is not imported upon starting. I'm going to get rid of the cache file, in favour of explicit import via names.
-
Same for me. I resolved it by creating a service to import the pool manually:
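The unit file itself is not shown above; a minimal sketch of such a service (the unit name, the pool name tank, and the search path are placeholders) could look like:

    # /etc/systemd/system/zfs-import-tank.service (hypothetical name)
    [Unit]
    Description=Import tank pool
    DefaultDependencies=no
    After=zfs-import-cache.service
    Before=zfs-mount.service

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/sbin/zpool import -N -d /dev/disk/by-id tank

    [Install]
    WantedBy=zfs-import.target

Enable it with systemctl enable zfs-import-tank.service.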
-
I want to say I'm running into this same issue as well (using the same ZFS on root guide). I also solved it by creating a new systemd service to import the pool, but I'm not sure why it's necessary. I think the reason this is happening is that my cache file only lists the root pool. After I import my other pool (named array), it lists both pools. However, after I restart, but before I manually import my array pool, the cache file is back to what it was. I can't figure out why, but I wonder if the cache is rewritten when the root pool is imported during boot.
I have another install that followed an earlier version of this guide. I don't know how to edit the cache file by hand.
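One way to check which pools the binary cache file currently records (the path below is the default cache location) is roughly:

    # dump the pool configurations stored in the cache file
    sudo zdb -C -U /etc/zfs/zpool.cache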
-
Same issue as described: it seems that when the rpool is imported, it overwrites the zpool.cache file and removes any other pools from the cache.
-
I have the same problem on 18.04.
-
Same problem on Debian Buster 10.2 (followed this guide): zpool.cache does not survive a reboot.
-
Same issue here on 19.10. Your workaround kind of works: my additional pools are imported, but it does not appear to be mounting the datasets correctly. I have to run the mount manually. Can I add that step to the service?
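The elided commands are not shown above; presumably the manual step is something like zfs mount -a. If the import is done by a custom unit, one hedged option is to mount in the same unit right after the import (tank is a placeholder pool name):

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/sbin/zpool import -N -d /dev/disk/by-id tank
    ExecStartPost=/sbin/zfs mount -a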
-
I've hit the same error following the Ubuntu 18.04 Root on ZFS how-to, albeit in automated fashion, made with this automated script. After examination of the boot sequence and the module/zfs/spa_config.c source code, I've found that currently the in-core state is always pushed to disk; there is no disk-to-in-core sync ever made. As a result, after loading the initramfs and mounting the root filesystem, any pool import, even with cachefile=none, results in syncing the current in-core state to disk, zapping any pools that were stored in /etc/zfs/zpool.cache before the reboot. In our particular case for Ubuntu, it happens when zfs-import-bpool.service runs the /sbin/zpool import -N -o cachefile=none bpool command.
Next, import any user pool you want, reboot, and check the service output. Temporary remediation for this bug would be a simple update to zfs-import-bpool.service, like this (see the sketch below):
With this approach, if zpool.cache is present, it will survive the bpool load and will be taken into account by zfs-import-cache.service, which essentially first loads the cache config in-core and then re-syncs it back with updated txg ids. As for a permanent solution and pull request, I'm a bit lost here, because there is an open ticket for retiring the cache. So we can either add a new command to the zpool utility to resync the SPA from disk, should the boot sequence ever need this, and then add the appropriate command to the initrd scripts for zfs; or extend the spa_write_cachefile function to check on write whether the root filesystem has just been mounted, and if so, first resync to in-core any pools from the cachefile residing on rpool, provided those pools are not stale.
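The exact unit change is not shown above; one hedged reading, matching the idea that an existing zpool.cache should survive the bpool import, is to move the cache aside before importing bpool and restore it afterwards (the preboot_zpool.cache name is only an illustration; the leading '-' tells systemd to ignore a failure of that step):

    # additions to zfs-import-bpool.service, [Service] section
    ExecStartPre=-/bin/mv /etc/zfs/zpool.cache /etc/zfs/preboot_zpool.cache
    ExecStart=/sbin/zpool import -N -o cachefile=none bpool
    ExecStartPost=-/bin/mv /etc/zfs/preboot_zpool.cache /etc/zfs/zpool.cache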
-
I can confirm the failure when following https://github.com/openzfs/zfs/wiki/Debian-Buster-Root-on-ZFS. Two pools are not imported at boot, even after poking the cache as described at https://github.com/openzfs/zfs/wiki/FAQ#generating-a-new-etczfszpoolcache-file. Note that the version pinning recommended in the first link results in several packages being held back along with the ZFS-related packages. Unfortunately, bad experiences trying to upgrade Debian and having boot fail have me sticking with those recommendations at this time. I have applied the changes indicated by @Andrey42 to zfs-import-bpool.service. I did check the OpenZFS Debian Buster Root on ZFS page prior to the creation of the system in question. Updating that page may prove helpful to others.
-
I experienced a very similar issue (maybe identical). I never observed anything being output to the cache file. The particular disks I was using were cannibalized from an old HP ProLiant server with Smart Array. Maybe this had something to do with it? I was able to create a pool with the drives as vdevs just fine, but re-importing them (even using /dev/disk/by-id) did not work in my case. I also tried recreating the GPT partition table on the drives using gparted, no luck. I was able to fix the problem by wiping the first few gigs of the drive, then recreating everything:

    sudo dd if=/dev/zero of=/dev/disk/by-id/ata-ST2000LM015-2E8174_ZDZ5M9JY bs=4M status=progress

After this I am able to import and export many times with no issues. In addition, my cache file finally contains data. My theory is that Smart Array may have put something on the disk that was causing issues.
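If wiping whole drives with dd is too heavy-handed, narrower tools may achieve the same effect (a hedged alternative, not what was used above; the device path is a placeholder):

    # clear ZFS labels from the device
    sudo zpool labelclear -f /dev/disk/by-id/ata-EXAMPLE-SERIAL

    # or erase all known filesystem/RAID signatures
    sudo wipefs -a /dev/disk/by-id/ata-EXAMPLE-SERIAL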