[Solved] Manjaro 16.06.01 ZFS root installation problem

I recently wanted to update one of my Manjaro installations when I ran into this problem. The update failed and left the system unbootable after a reboot, so I decided to do a clean reinstall, again with a ZFS root. I reused the partitions but created new pools, following the Arch ZFS guide, but I got stuck when the newly installed system (in chroot) was unable to generate a GRUB menu. The error was the following:

grub-probe: info: cannot open `//boot/grub/device.map’: No such file or directory.


grub-probe: error: failed to get canonical path of `/dev/ata-Maxtor_6B300S0_B60TP9WH-part3

When I created a link to the partition with:

ln -s /dev/sda3 /dev/ata-Maxtor_6B300S0_B60TP9WH-part3

the update-grub executed without errors, but at the next boot the system was unable to find the real root device:

filesystem ’ ’ cannot be mounted, unable to open the dataset
ERROR: Failed to mount the real root device.

The strange part is that I was able to log into the system using switch_root. I created a virtual test system with settings as close to my hardware as possible, and I experienced the same error. I conducted a lot of tests and even downgraded GRUB, but none of my ideas produced a usable system.
I'm kind of stuck, can someone help me out here?
I've got two other systems with ZFS root, installed the same way, and they work flawlessly.
I’ve used the following commands for the install:

#Installed the system onto a USB drive, then
sudo pacman -Syu
sudo pacman -S manjarozfs zfs mc rsync linux-headers manjaro-tools arch-install-scripts
sudo modprobe zfs
cfdisk /dev/sda
#512MB boot partition
#50GB root partition, type BF (Solaris)
ls -lh /dev/disk/by-id/
sudo zpool create -f -o ashift=12 zroot /dev/disk/by-id/ata-Maxtor_6B300S0_B60TP9WH-part3
sudo zfs create -o mountpoint=/home zroot/home
sudo zfs umount -a
sudo zfs set mountpoint=/ zroot
sudo zfs set mountpoint=/home zroot/home
sudo zfs create -V 2G -b $(getconf PAGESIZE) \
    -o compression=off \
    -o primarycache=metadata \
    -o secondarycache=none \
    -o sync=always \
    -o com.sun:auto-snapshot=false zroot/swap
sudo mkswap /dev/zvol/zroot/swap
sudo zpool set bootfs=zroot zroot
sudo zpool export zroot
sudo zpool import -d /dev/disk/by-id -R /mnt zroot
sudo zpool set cachefile=/etc/zfs/zpool.cache zroot
sudo mkdir -p /mnt/boot
sudo mount /dev/sda1 /mnt/boot
sudo pacman-mirrors -g
sudo pacstrap -i /mnt manjaro-system mc manjarozfs zfs manjaro-tools
sudo sh -c 'genfstab -U /mnt >> /mnt/etc/fstab'
# edited /mnt/etc/fstab (removed the zpool drives)
sudo cp /etc/zfs/zpool.cache /mnt/etc/zfs/zpool.cache
sudo manjaro-chroot /mnt /bin/bash
export PS1="(ZFS root) $PS1"
# from the chroot
pacman -Syu
pacman -S manjarozfs zfs mc rsync linux-headers manjaro-tools arch-install-scripts grub
# edited the mkinitcpio hooks: "…block keyboard zfs filesystems"
mcedit /etc/mkinitcpio.conf
systemctl enable zfs.target
modprobe zfs
echo options zfs zfs_arc_min=268435456 >>/etc/modprobe.d/zfs.conf
echo options zfs zfs_arc_max=1073741824 >>/etc/modprobe.d/zfs.conf
mkinitcpio -p linux
# went without any error:
grub-install --target=i386-pc --boot-directory=/boot --recheck --debug --force /dev/sda
# failed to find the canonical drive;
# after creating the link below, it generated the config without any problem:
ln -s /dev/sda3 /dev/ata-Maxtor_6B300S0_B60TP9WH-part3
# changing the grub.cfg
# 'root=ZFS=zroot' was changed to 'zfs=zroot'
# the link was needed because of grub
sudo zpool set bootfs=zroot zroot
sudo umount /mnt/boot
sudo zfs umount -a
sudo zpool export zroot
sudo reboot

In the meantime I tried a lot of things to make this work, starting with a more "Manjaro way" of installing. In the end I was able to reinstall the system by following the steps from a former post on the old forum (Root ZFS on luks encrypted partitions Manjaro KDE). Thanks rjonasz for that! :clap: :sunglasses: :wink:

I've written a step-by-step guide about the installation in case someone needs it in the future:

Manjaro (16.06.1 KDE) ZFS root installation

  1. Install Manjaro to a USB attached drive (use a single partition, and install it without swap). GUI installer will do fine here. :slight_smile:
  2. Add ZFS support to the newly installed Manjaro system:
  • Check the running kernel, refresh the mirrors, and update the system:
    uname -r
    sudo pacman-mirrors -g
    sudo pacman -Syu
  • Select all ZFS packages from the virtual package, and select the appropriate kernel headers from the list after executing:
    sudo pacman -S manjarozfs linux-headers manjaro-tools-base mc
  • Check that ZFS loads (if no error is reported, then it is fine :slight_smile: ):
    sudo modprobe zfs
  3. Partition the HDD/SSD where you wanted the ZFS root in the first place (here I am using an MSDOS partition table):
    cfdisk /dev/<your drive ID>
    (cfdisk /dev/sda in my case)
  • Create a 512MB or 1GB primary partition (depending on how many kernels you are using), and set it active
  • Create a primary partition of the desired size, and select type BF (Solaris)
    (swap will live on the ZFS root partition, so no other partitions are needed)
  • Write out the new partition table.
    (You can use other partition structure, but the boot partition must not be ZFS)
  • Create an EXT filesystem on the boot partition with:
    mkfs.ext4 /dev/<disk and partition ID>
    (It was mkfs.ext4 /dev/sda1 in my case)
  4. Check your drives; we will need the ID for the ZFS pool creation:
    ls -lh /dev/disk/by-id/
    (You will see something similar)
    lrwxrwxrwx 1 root root  9 aug 3 19.55 ata-HL-DT-ST_DVDRAM_GH22NS70_K1ZB9QA5921 -> ../../sr0
    lrwxrwxrwx 1 root root  9 aug 3 22.40 ata-WDC_WD10EZRX-00L4HB0_WD-WCC4J2540102 -> ../../sda
    lrwxrwxrwx 1 root root 10 aug 3 22.40 ata-WDC_WD10EZRX-00L4HB0_WD-WCC4J2540102-part1 -> ../../sda1
    lrwxrwxrwx 1 root root 10 aug 3 22.40 ata-WDC_WD10EZRX-00L4HB0_WD-WCC4J2540102-part2 -> ../../sda2
  • Carefully select the second partition ID (If you are unfamiliar with the partition structure you can use the KDE Partition manager [partitionmanager] to make sure you have selected the right drive/partition - in my case it is the one below):
  5. Create the ZFS pool using the following command (ashift=12 aligns the pool to 4K sectors):
    sudo zpool create -f -o ashift=12 zroot /dev/disk/by-id/<your drive ID>
    (I used the following command: sudo zpool create -f -o ashift=12 zroot /dev/disk/by-id/ata-WDC_WD10EZRX-00L4HB0_WD-WCC4J2540102-part2)
  • Set the mountpoints, ignoring the mounting errors (I used separate datasets for /opt and /home, but it is not required; do what suits you best. Remember that /usr and /var need special attention, so they are not separated in my setup):
    sudo zfs create -o mountpoint=/home zroot/home
    sudo zfs create -o mountpoint=/opt zroot/opt
    sudo zfs set mountpoint=/ zroot

  • Create the SWAP volume on the ZFS pool (this sets the block size to your system's page size, and makes sure that swap will not be part of an automatic snapshot):
    sudo zfs create -V 2G -b $(getconf PAGESIZE) \
        -o compression=off \
        -o primarycache=metadata \
        -o secondarycache=none \
        -o sync=always \
        -o com.sun:auto-snapshot=false zroot/swap
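The `-b $(getconf PAGESIZE)` part sets the zvol block size to the kernel's page size, which avoids read-modify-write overhead when the zvol is used as swap. A quick way to see what that substitution expands to on your machine (commonly 4096 on x86_64):

```shell
# Print the kernel page size; this is the value that
# $(getconf PAGESIZE) substitutes into the zfs create command.
PAGE=$(getconf PAGESIZE)
echo "$PAGE"
```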

  • Unmount all the ZFS pools and export them:
    sudo zfs umount -a
    sudo zpool export zroot

  • Reboot and start the computer from the LiveDVD for the data copy

  6. Using the Live DVD, copy the data from the USB-attached drive to the ZFS pool:
  • Install the ZFS support to the system running from the LiveDVD as described above…
    sudo pacman -S manjarozfs linux-headers manjaro-tools-base mc
  • Check that ZFS loads (if no error is reported, then it is fine :slight_smile: ):
    sudo modprobe zfs
  • Import the pool created above, and make sure it is mounted under /mnt (you don't need to edit this command if you named your pool zroot):
    sudo zpool import -d /dev/disk/by-id -R /mnt zroot
  • Check that it is mounted under /mnt:
    ls -l /mnt
  • If you see the /mnt/home and /mnt/opt folders, then you are good to go…
  • Now we need to mount our future boot partition too, but first we must create its directory on the ZFS root:
    sudo mkdir -p /mnt/boot
  • Now we can mount our EXT boot partition there:
    sudo mount /dev/<your drive and partition ID> /mnt/boot
    (In my case it was sudo mount /dev/sda1 /mnt/boot)
  • Next, mount the source material (the installed system on the USB drive) under /media:
    sudo mount /dev/<drive and partition ID> /media
    (It was sudo mount /dev/sdb1 /media in my case)
  • Then copying the installed system can begin with the following commands:
    cd /media
    tar cfp - . | ( cd /mnt/; tar xvfp - )
    The copy will take a while; after it is finished you can exit from the root shell.
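The tar pipe above copies the whole tree while preserving permissions and ownership (the `p` flag on both sides), which a plain `cp` may not. A minimal sketch of the same pattern on throwaway directories (the paths are made up for the demo):

```shell
# Demonstrate the tar-pipe copy pattern on temporary directories.
SRC=$(mktemp -d); DST=$(mktemp -d)
echo "hello" > "$SRC/file.txt"
chmod 600 "$SRC/file.txt"
# Pack the source tree to stdout and unpack it at the destination,
# preserving permissions on both ends.
( cd "$SRC" && tar cfp - . ) | ( cd "$DST" && tar xfp - )
cat "$DST/file.txt"
```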
  7. Configure the installed system before the first boot:
  • Chroot to the system under /mnt:
    sudo manjaro-chroot /mnt /bin/bash
  • Help yourself identify the chroot session by changing the prompt:
    export PS1="(ZFSRoot) $PS1"
  • Select the closest mirrors with:
    pacman-mirrors -g
  • Look for updates:
    pacman -Syyu
  • Create the proper SWAP:
    mkswap /dev/zvol/zroot/swap
  • enable the swap:
    swapon /dev/zvol/zroot/swap
  • Edit the mkinitcpio hooks and add zfs. Make sure that zfs comes before filesystems and keyboard comes before zfs; you may need to add fsck because of the EXT boot partition:
    mcedit /etc/mkinitcpio.conf
    (In my case the end result was: HOOKS="base udev autodetect modconf block keyboard zfs filesystems fsck")
  • Check your partitions' UUIDs (e.g. with blkid).
  • The boot drive also needs to be added to /etc/fstab, and the previous root entry should be commented out:
    • Comment out the previous root entry in /etc/fstab by inserting a ‘#’ character at the start of the line
      (on my system:
      # <file system> <mount point> <type> <options> <dump> <pass>
      # UUID=b11fdcfe-2643-49f7-aefe-5cf7409209ef / ext4 defaults,noatime 0 1)
    • Then add the new boot partition UUID to the fstab (quote the arguments, since an unquoted ‘#’ starts a shell comment):
      echo "#boot" >>/etc/fstab
      echo "UUID=<your bootdrive UUID> /boot ext4 rw,relatime,data=ordered 0 2" >>/etc/fstab
      (The actual commands for me were: echo "#boot" >>/etc/fstab
      echo "UUID=c59072fc-bb0e-4a56-ac43-4cdc9e82608d /boot ext4 rw,relatime,data=ordered 0 2" >>/etc/fstab)
  • Swap also needs to be added to fstab to achieve mounting at startup:
    echo "#SWAP on ZFS" >>/etc/fstab
    echo "/dev/zvol/zroot/swap none swap defaults 0 0" >>/etc/fstab
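Note that `#` begins a shell comment, so an unquoted `echo #boot >>/etc/fstab` does not append the intended comment line at all; everything from the `#` onward, including the redirection, is ignored. A quick demonstration against a scratch file:

```shell
TMP=$(mktemp)
# Everything after '#' (including the redirection!) is treated
# as a comment, so this writes nothing to the file:
echo #boot >>"$TMP"
# Quoted, the line lands in the file as intended:
echo "#boot" >>"$TMP"
cat "$TMP"
```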
  • Now we can make the real swap out of zroot/swap:
    mkswap -f /dev/zvol/zroot/swap
  • So, with all the goodies packed into the fstab file, we can continue by making the initramfs with ZFS included:
    mkinitcpio -p linux
  • (Re)install grub with:
    grub-install --target=i386-pc --boot-directory=/boot --recheck --debug --force /dev/<your device ID>
    (At my end: grub-install --target=i386-pc --boot-directory=/boot --recheck --debug --force /dev/sda )
  • Try to update grub (most probably it will fail to find the canonical device path; this may be fixed in a future grub update):
    • If you experience the same problem as me, you need to create a link to the proper device; this way you help grub find it :smile: :
      sudo ln -s /dev/disk/by-id/<device and partition ID> /dev/<device and partition ID>
      (My command was: sudo ln -s /dev/disk/by-id/ata-WDC_WD10EZRX-00L4HB0_WD-WCC4J2540102-part2 /dev/ata-WDC_WD10EZRX-00L4HB0_WD-WCC4J2540102-part2)
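For reference, `ln -s` takes the target first and the link name second, so the node grub-probe is missing under `/dev` must be the second argument. A sketch of creating such a compatibility link against scratch paths (the device names are illustrative, not real devices):

```shell
# Simulate the layout: a "real" device node and a by-id directory.
D=$(mktemp -d)
mkdir -p "$D/disk/by-id"
touch "$D/sda2"                                      # stands in for /dev/sda2
ln -s ../../sda2 "$D/disk/by-id/ata-EXAMPLE-part2"   # normal udev by-id link
# grub-probe wanted "$D/ata-EXAMPLE-part2" to exist, so create it
# pointing at the real node (target first, link name second):
ln -s "$D/sda2" "$D/ata-EXAMPLE-part2"
readlink "$D/ata-EXAMPLE-part2"
```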
    • retry the grub update with:
      update-grub
  • After all this we need to edit /boot/grub/grub.cfg, because unfortunately grub fails to generate a proper kernel command line; we can fix it with the following one-liner:
    sed -i -e 's/root=ZFS=zroot\//zfs=zroot/g' /boot/grub/grub.cfg
    After all this fuss we are almost ready to boot into our newly created ZFS-root-based Manjaro system; only one optional step remains:
    • Restrict the memory use of ZFS by adding a minimum/maximum size to the ARC cache (in this case 256MB is the minimum, and the maximum is 1GB):
      echo options zfs zfs_arc_min=268435456 >>/etc/modprobe.d/zfs.conf
      echo options zfs zfs_arc_max=1073741824 >>/etc/modprobe.d/zfs.conf
      After this a rebuild of initramfs is needed:
      mkinitcpio -p linux
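The two module options are plain byte counts: 268435456 is 256 MiB and 1073741824 is 1 GiB. Shell arithmetic makes the intent easier to audit than the raw numbers:

```shell
# Compute the ARC limits in bytes instead of hard-coding them.
ARC_MIN=$((256 * 1024 * 1024))    # 256 MiB
ARC_MAX=$((1024 * 1024 * 1024))   # 1 GiB
echo "$ARC_MIN $ARC_MAX"
```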
  • Now we need to make sure that only the needed zfs pools are imported at boot:
    zpool set cachefile=/etc/zfs/zpool.cache zroot
  • Enable zfs pool loading (automounting) under systemd, and the auto generation in DKMS:
    sudo systemctl enable zfs.target
    sudo systemctl enable dkms.service
    By this time our system is ready and operational; we only need a few steps before the reboot :slight_smile:
  • Exit the chroot:
    exit
  • Unmount the USB drive mounted under /media:
    cd /
    sudo umount /media
  • umount the boot partition mounted under our ZFS pool’s boot dir:
    sudo umount /mnt/boot
  • umount all ZFS pools:
    sudo zfs umount -a
  • export the ZFS pool (this will make sure that it can be imported at boot):
    sudo zpool export zroot
  8. Now it is time to test the installed Manjaro Linux on top of a ZFS root filesystem. Reboot the computer to start the new system:
    sudo reboot

#ZFS Maintenance:

  • When a kernel update arrives, you will need to rebuild the initramfs, run update-grub, and last but not least fix the grub config:
    sudo mkinitcpio -p linux
  • Try to update grub (most probably it will fail to find the canonical device path; this may be fixed in a future grub update):

    • If you experience the same problem as me, you need to create a link to the proper device; this way you help grub find it :smile: :
      sudo ln -s /dev/disk/by-id/<device and partition ID> /dev/<device and partition ID>
      (My command was: sudo ln -s /dev/disk/by-id/ata-WDC_WD10EZRX-00L4HB0_WD-WCC4J2540102-part2 /dev/ata-WDC_WD10EZRX-00L4HB0_WD-WCC4J2540102-part2)
    • retry the grub update with:
      sudo update-grub
  • Fix the grub.cfg:
    sed -i -e 's/root=ZFS=zroot\//zfs=zroot/g' /boot/grub/grub.cfg
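To see what the substitution does without touching a real grub.cfg, you can run it against a sample kernel line; `root=ZFS=zroot/` becomes `zfs=zroot` (the trailing slash in the pattern is the escaped `\/`). The kernel filename below is illustrative:

```shell
# Dry-run the substitution on a sample line (no real grub.cfg needed).
LINE='linux /vmlinuz-4.4-x86_64 root=ZFS=zroot/ rw quiet'
echo "$LINE" | sed -e 's/root=ZFS=zroot\//zfs=zroot/g'
```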

  • To make sure that your system stays healthy, set up a ZFS pool scrub. This will automatically hunt for errors inside your pools. If you are using ECC RAM modules you will most probably not have any problems, but the devil never sleeps… :imp:
  • Scrubbing manually:
    sudo zpool scrub <poolname>
    Status info:
    sudo zpool status -v <poolname>
    (You can use crontab to create a scheduled scrub, there are tons of scripts out there)
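As a sketch of the crontab approach, the snippet below writes a weekly scrub entry (Sunday at 02:00) for a pool named zroot to a file; the schedule and pool name are placeholders, adjust to taste and load the file with `crontab cronjob`:

```shell
# Hypothetical crontab entry: scrub the zroot pool every Sunday at 02:00.
CRONLINE='0 2 * * 0 /usr/bin/zpool scrub zroot'
echo "$CRONLINE" > cronjob
cat cronjob
```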

I have learned a lot of things during the last couple of days making this guide. I hope it will also help someone else! :slight_smile:


A year went by; do you think ZFS support got better? I really want to use it on a production desktop, but how is its support in Manjaro?

Maybe we should add/modify the update-grub hook so that it creates a proper configuration file for zfs? Needing to manually edit the configuration after every kernel update is not good.


I do not use zfs, nor do I understand the sed command above.
But I can guess.

What if we symlink the kernels to /boot/vmlinuz-manjaro and /boot/initramfs-manjaro.img and put the grub entry into custom.cfg, so we don't need to regenerate grub.cfg each time (but do need to regenerate the symlinks)?

menuentry "Manjaro zfs" {
    insmod part_gpt
    insmod part_msdos
    insmod zfs
    search --no-floppy --fs-uuid --set=root xxxxxxxxxxxxxx
    linux /boot/vmlinuz-manjaro zfs=zroot rw
    initrd /boot/intel-ucode.img /boot/initramfs-manjaro.img
}

One more thing: ‘zfs’ needs to be in the HOOKS line of /etc/mkinitcpio.conf (before filesystems).
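The ordering requirement (keyboard before zfs, zfs before filesystems) can be sanity-checked with grep against the HOOKS line; here against a scratch copy of the config so nothing real is touched:

```shell
# Check hook ordering on a sample mkinitcpio.conf HOOKS line.
CONF=$(mktemp)
echo 'HOOKS="base udev autodetect modconf block keyboard zfs filesystems fsck"' > "$CONF"
HOOKS=$(grep '^HOOKS=' "$CONF")
# zfs must appear after keyboard and before filesystems:
echo "$HOOKS" | grep -q 'keyboard.*zfs.*filesystems' && echo "order OK"
```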


Nothing changed so far, I’m doing this before every update:

  • Snapshot from the root system before the update
  • Do the update, fix the configs
  • Check everything after a reboot

None of my rollbacks happened because of an update issue. Mostly I tried something out and was too lazy to revert it… :slight_smile:

Some Arch derivatives are on a path to offering ZFS as the main FS, and it seems Manjaro will be one of them. I've got a 7TB disk for the HTPC that uses ZFS, and a 1TB one in my desktop; disk performance is not stellar, but the data is safe.

Yes, it would be much safer that way. I was not educated enough in GRUB to do this. The main problem is that even when it detects ZFS properly it writes a broken config (uppercase "ZFS" does not work in the config during boot; only lowercase "zfs" works).

The sed only fixes the wrong parameter written by the GRUB update command:
root=ZFS=poolname becomes zfs=poolname

Yes, it is there in the first and second posts; it is mandatory:

We could probably just find the line that sets that value and contribute a patch upstream. Unless some older version of zfs actually requires the uppercase argument; in that case we could patch our own package, since grub is an overlay package already.

There seems to be an old issue about it already here:

So we would need line 66 here to read zfs=${rpool}${bootfs} instead of the root=ZFS= form. Is this correct?

@dib would you mind testing whether editing this file this way makes update-grub produce the right configuration? The file is /etc/grub.d/10_linux

We might not get much luck including that upstream, since uppercase seems to work for Debian guys.


Exactly… But I can only test it after work.

I'll report back once I've finished testing…

Precisely. So having a ‘fixed’ custom.cfg with correctly symlinked kernels would avoid redoing grub.cfg.
That’s my point.

So I do not understand why we need to do this (quoting dib): “When a Kernel update is received, you will need to rebuild the initramfs”

Do we need to rebuild initramfs?


I think the pacman hooks do this + update-grub in any case after updating kernels


I’m not fully aware of every bit of the Linux boot process, so pardon me if I say something wrong, and correct me, thanks! :wink:

Ok I understand now, this could work. It is only a static config, pointing to a dev path…

This is needed for the zfs automount; if you do not include it, the pools won't be mounted automatically (at least they weren't when I last reinstalled my systems), and /var, for example, is needed for logging.
I'm not entirely sure it is needed if you use the fstab mount method; I relied on older documentation, so maybe it is no longer a must. If you are not thinking about a ZFS root it is probably not a big concern, or filesystems is already aware of zfs; I included it because of mkinitcpio. We need to test this out…

Not if we fix the GRUB issues. But we must make sure that spl-dkms is updated during the initial rebuild, before zfs-dkms. Can that be forced somehow?

Yes it does, but if you are using more than one kernel, the spl-dkms module will be updated only under the main one, and the other builds will fail on zfs-dkms.


One more question, dib. It is not directly related to the issue at hand but can help us here understand zfs. Hope you don’t mind.

The zfs filesystem looks like a very good way to keep data backed up and ‘uncorrupted’ (sorry, sometimes my english fails; rare, but it happens). What do you find good about zfs as the OS filesystem? Your comments would be welcome.


No problem, your english is fine, mine also lacks some higher “education” :slight_smile:

To partly answer your question: I decided to use ZFS instead of BTRFS because I had a BTRFS corruption back then (2014, as I remember) and lacked the tools to fix the issue. I spent three hours searching for the same problem as mine, but none of the available tools fixed it. The low-level sector check showed nothing, and the memory check also came back fine a couple of times. I decided to try things out before keeping the OS installed. So the HTPC got a zfs pool for the data, and after some experimenting it seemed to me more mature than BTRFS at the time. When I switched to Manjaro I wanted to use it as the system FS too, mainly because of the easy snapshots/rollbacks. These are easy as pie, and I don't need to create any subvolume for the snapshots like in the BTRFS way. OK, I do feel some disk performance loss, but it is mainly a desktop and family PC.
My goal now is to maintain/build a stable system for a QEMU/KVM-based, VirtIO-enabled setup (I'm passing the dVGA through to the guest, and guests use a separate disk in raw mode) that can be used for my projects (even while my kids play on the VM via the Steam Link).
So I like to make snapshots when I feel I am about to break something (VGA driver / Wayland testing, etc.), and this way I can easily undo my errors during testing. This matters less now that I can pass the dVGA to the guest, but it makes me calmer when I need to update the system.

At the next fresh install I might try out BTRFS again, who knows? :wink:

Thanks! That’s a very good answer. Appreciate it.
Good luck on your further ‘adventures’ in zfs.
Keep us posted. We can all learn a few things.



Thanks, I will. But to tell the truth, the only adventure with zfs is the planning phase and the excitement during the install; after that it is mostly boring :smiley:
One other cool feature of a "snapshotable" system is that I can browse/open file versions inside the snapshots, and last but not least the bootable snapshot is also a great thing. I'll test out some GRUB things when I get home and the kids are properly "handled" :smiley:

OK, I tested it, and it works great: I had to change line 66 of /etc/grub.d/10_linux (in the stable branch) to read “zfs=${rpool}${bootfs}”.

But the symlink is needed because of the grub-probe “error”:

Also thinking about ZFS as the ‘root’ and media storage filesystem. How is support right now? For a rolling distro, having ‘snapshots’ is quite a good idea.

For this, btrfs is the easier choice on linux.

AFAIK you don't need to compile anything, but there is zero automation for zfs in the installers, so it is comparable to the support on Arch Linux.

I haven't commented in a long time (actually I was busy doing things offline for a while). Sorry for the necroposting. :zipper_mouth_face:
I had a pool corruption because of a broken ZFS kernel module and my own anxious impatience (multiple restarts because the system looked frozen).


6 TB of data on an older version of a ZFS pool was inaccessible to me for a year, with only partial backups. I even bought an identical drive just to secure the corrupted data and test recovery methods. My data pool was impossible to import; all my efforts to delete the last writes, or to find a point in the cache file that could be imported successfully, ended in failure. I even tried to go rough and delete some uberblocks, but luckily the python script was old and I wasn't able to break anything further. Then came the idea to look into data recovery products, and I found the solution: UFS Explorer / Recovery Explorer. Unfortunately only the Professional versions can handle ZFS, but I was able to recover all my stored data :slight_smile:.

I've switched the root to BTRFS and I'm using Timeshift for snapshot/restore. The Manjaro Architect installer and the GUI installers (I'm sure the KDE one does) do you the favor of creating the subvolumes ('@' and '@home', like in Ubuntu) for you, even with encryption if that is your desire. :wink: I've rolled back a couple of times without any problems, and because of the native support I don't need the grub "magic" anymore.

But all of my data is still stored on ZFS; now I know what I should do in case of a problem, and hopefully the new ZFS additions will take care of these problems in the future. I've matured; I'll expect the same from ZFS on Linux. :wink:
