Manjaro not booting because it fails to automount a ZFS dataset

Hi,

After a few days of work, more than I wished it would take, I managed to set up Manjaro on a ZFS root. I used these howtos:

and finally this thread helped as well:

Also many other Google results. :slight_smile:

I used systemd-boot because I was not able to set up GRUB correctly. Also, at least in my experience, GRUB does not really want to support ZFS.

The system itself is running and booting. On boot, the passphrase prompt for the encrypted ZFS root pool is shown. But now I have created another pool for data (photos, etc.) that should be mounted at boot, and the boot fails with this error:

...
Mar 28 09:35:20 manjaro systemd[1]: Mounting /data...
░░ Subject: A start job for unit data.mount has begun execution
░░ Defined-By: systemd
░░ Support: https://forum.manjaro.org/c/support
░░ 
░░ A start job for unit data.mount has begun execution.
░░ 
░░ The job identifier is 65.
...
Mar 28 09:35:20 manjaro mount[1181]: filesystem 'dpool_<randomUUID>/DATA' cannot be mounted, unable to open the dataset
...
Mar 28 09:35:20 manjaro systemd[1]: Failed to mount /data.
░░ Subject: A start job for unit data.mount has failed
░░ Defined-By: systemd
░░ Support: https://forum.manjaro.org/c/support
░░ 
░░ A start job for unit data.mount has finished with a failure.
░░ 
░░ The job identifier is 65 and the job result is failed.
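
For reference, the messages above come from the journal; they can be reproduced with something like:

journalctl -xb -u data.mount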

I mount the ZFS root pool via fstab, according to the above-mentioned howto.

My fstab looks like this:

# Static information about the filesystems.
# See fstab(5) for details.

# <file system> <dir> <type> <options> <dump> <pass>
rpool_<randomUUID>/manjaro/ROOT/default                       /                       zfs     zfsutil,rw,relatime,xattr,posixacl      0       0
rpool_<randomUUID>/manjaro/DATA/default/home                  /home                   zfs     zfsutil,rw,relatime,xattr,posixacl      0       0
rpool_<randomUUID>/manjaro/DATA/default/home/<username>        /home/sebastian         zfs     zfsutil,rw,relatime,xattr,posixacl      0       0
rpool_<randomUUID>/manjaro/DATA/default/srv                   /srv                    zfs     zfsutil,rw,relatime,xattr,posixacl      0       0
rpool_<randomUUID>/manjaro/DATA/default/root                  /root                   zfs     zfsutil,rw,relatime,xattr,posixacl      0       0
rpool_<randomUUID>/manjaro/DATA/default/var/log               /var/log                zfs     zfsutil,rw,relatime,xattr,posixacl      0       0
rpool_<randomUUID>/manjaro/DATA/default/usr/local             /usr/local              zfs     zfsutil,rw,relatime,xattr,posixacl      0       0
rpool_<randomUUID>/manjaro/DATA/default/var/games             /var/games              zfs     zfsutil,rw,relatime,xattr,posixacl      0       0
rpool_<randomUUID>/manjaro/DATA/default/var/spool             /var/spool              zfs     zfsutil,rw,relatime,xattr,posixacl      0       0
rpool_<randomUUID>/manjaro/DATA/default/var/lib/libvirt       /var/lib/libvirt        zfs     zfsutil,rw,relatime,xattr,posixacl      0       0
#dpool_<randomUUID>/DATA                                      /data                   zfs     zfsutil,rw,relatime,xattr,posixacl      0       0
UUID=4531-AAAB                                          /boot                   vfat    defaults                                0       0
UUID=3c1efb01-0020-4d75-867a-c40b23008bf9               none                    swap    defaults                                0       0

The dpool_<randomUUID>/DATA entry results in a system that is unable to boot. However, it is no problem to mount the dataset manually later.

Can someone tell me how to solve this issue?

Greetings
Sebastian

I have no experience with ZFS, but you wrote dpool_<randomUUID>/DATA unlike rpool_<randomUUID>/DATA in /etc/fstab?

That is on purpose. It is another pool named dpool_<randomUUID> for data storage: dpool for data pool, and <randomUUID> is just a placeholder for a randomly generated UUID.

Try to mount it manually with sudo zfs mount dpool_<randomUUID>/DATA and see if you get a more meaningful error.

Is it encrypted?

Manually mounting works, and yes, it is encrypted with the property keylocation=prompt.
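
For reference, the encryption properties can be checked like this (dpool_<randomUUID> is again just my placeholder):

zfs get keylocation,keystatus dpool_<randomUUID>/DATA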

That is the issue. You are trying to mount a dataset you haven’t unlocked.

The way that is most commonly handled is to use a key file that lives in the encrypted / dataset. Since / gets unlocked by the initramfs, the key file stays encrypted until you unlock it.
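
As a rough sketch (the path /etc/zfs/dpool.key and the raw key format are just examples, adjust them to your setup):

# create a 32-byte raw key on the encrypted root filesystem
dd if=/dev/urandom of=/etc/zfs/dpool.key bs=32 count=1
chmod 600 /etc/zfs/dpool.key
# re-point the dataset's encryption key from the prompt to the key file
zfs change-key -o keylocation=file:///etc/zfs/dpool.key -o keyformat=raw dpool_<randomUUID>/DATA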

But my root pool is also encrypted with the same property keylocation=prompt, and at boot I am asked to type in the password. Why does this not work for the other pool?

Because the root pool is unlocked by the initramfs.

That makes sense. :smiley:

Is it not possible to mount this dataset automatically at boot with initramfs, too?

It is probably possible. I don't know how easy it would be, though, as I have never investigated that.

Even if it were, is there a reason you want that? It would probably require entering multiple passwords at every boot.

You can take a look at the zfs initcpio hook to see how it works. You may need to customize that hook.
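
If I remember correctly, the hook is enabled via the HOOKS array in /etc/mkinitcpio.conf, roughly like this (the exact line depends on your setup, but zfs has to come before filesystems), with the initramfs regenerated afterwards:

# /etc/mkinitcpio.conf (excerpt)
HOOKS=(base udev autodetect modconf block keyboard zfs filesystems)

# regenerate the initramfs for all kernel presets
mkinitcpio -P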

I already thought about this, too. Would it be a better solution to encrypt it with a key file and put this key file onto the encrypted root pool?

That is what I do, and I am fairly certain it is the common practice in that situation.

I use a systemd service to unlock them (I have 3 zpools) during the boot sequence.

[Unit]
Description=Load encryption keys
DefaultDependencies=no
After=zfs-import.target
Before=home.mount

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/bash -c '/usr/bin/zfs load-key mypool/data'
ExecStart=/usr/bin/bash -c '/usr/bin/zfs load-key myotherpool/data'

[Install]
WantedBy=home.mount
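
Save it as e.g. /etc/systemd/system/zfs-load-keys.service (the name is just an example) and enable it:

systemctl enable zfs-load-keys.service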

Thanks for the hint; that would work if my root pool were not encrypted. :slightly_frowning_face:

I can't load the key from an encrypted dataset if the service executes before the ZFS datasets are mounted.

Is it possible to split up the mount process? Or could I install another service that specifically loads the key and mounts the other pool? Or should I place the key on the ESP?

If it is running too early, you need to make the service depend on something that comes later. That being said, the root dataset is unlocked by the initramfs, so that should definitely happen before systemd starts running those services.

You just need to tailor WantedBy, After, and Before to your specific install. You probably can't use mine; you may not even have a home.mount.
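
If you are unsure which mount units exist on your system, you can list them, for example with:

systemctl list-units --type=mount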

I don't know how often I have said "thanks" in this thread already, but thanks again. :rofl:

I created a service that loads the key and mounts the second pool, ordered with After=zfs-mount.service.

[Unit]
Description=Load ZFS encryption key and mount dpool
DefaultDependencies=no
After=zfs-mount.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/zfs load-key dpool
ExecStart=/usr/bin/zfs mount dpool/DATA
StandardInput=tty-force

[Install]
WantedBy=zfs-mount.service

P.S.: Before this thread gets closed: why did you use WantedBy=zfs-mount.service? Shouldn't this be a *.target?
