# zfs version
zfs-2.1.5-1
zfs-kmod-2.1.5-1
# uname -r
5.15.49-1-MANJARO
My system has 4 ZFS pools: hpool, npool, fpool, and tank. All except tank (backup/archive) are auto-mounted by setting ‘cachefile=/etc/zfs/zpool.cache’ on the pool. hpool works perfectly, but npool and fpool do not mount at boot.
While looking into the issue, I discovered that running ‘zpool set cachefile=/etc/zfs/zpool.cache’ on any of my pools results in …
[jaro20n robert]# zpool set cachefile=/etc/zfs/zpool.cache hpool
[jaro20n robert]# zpool get cachefile hpool
NAME PROPERTY VALUE SOURCE
hpool cachefile - default
The value is set to “-”, not the path.
If I set cachefile to none, export, then reimport, the value returns to “-” default.
[jaro20n robert]# zpool set cachefile=none hpool
[jaro20n robert]# zpool get cachefile hpool
NAME PROPERTY VALUE SOURCE
hpool cachefile none local
[jaro20n robert]# zpool export hpool
[jaro20n robert]# zpool import hpool
[jaro20n robert]# zpool get cachefile hpool
NAME PROPERTY VALUE SOURCE
hpool cachefile - default
I was not having this problem at all prior to updating to 5.15.49-1-MANJARO
I posted that day about the issue where I couldn’t mount any pools. I pulled your build, I thought, and it resolved the issue. So is there an even newer build?
Seems I have the correct version.
[jaro20n robert]# pamac info linux515-zfs
Name : linux515-zfs
Version : 2.1.5-1.0
Description : Kernel modules for the Zettabyte File System.
URL : http://zfsonlinux.org/
Licenses : CDDL
Repository : extra
Installed Size : 14.3 MB
Groups : linux515-extramodules
Depends On : linux515 kmod zfs-utils=2.1.5
Optional Dependencies : --
Provides : zfs=2.1.5
Replaces : --
Conflicts With : --
Packager : Mark Wagie <mark@manjaro.org>
Build Date : Tue 28 Jun 2022 12:58:43 PM CDT
Validated By : MD5 Sum SHA-256 Sum Signature
The original problem was that I could not import any pools. The build fixed that issue: I can now import all pools manually. However, I cannot set cachefile now, so the behavior still seems odd. I’ve rebooted many times and the cachefile issue persists.
However, what you’re saying about cachefile not being ‘permanent’ doesn’t seem right. I do recall seeing the path listed in ‘zpool get cachefile’. Am I wrong? Could I be chasing the wrong issue?
To summarize:
zfs-import-cache and zfs-mount are running.
I import the pool, set cachefile, then export and re-import the pool (just something I’ve always done).
Reboot.
The pools are not imported automatically.
Manually running ‘zpool import -a’ imports the pools ← if memory serves, this comes from the cachefile.
I did notice 2 things…
zfs-import-cache.service is “dead” in the log below
a deprecated-service message during boot
journalctl | grep zfs-import-cache.service
Jul 03 08:23:04 jaro20n udevadm[56149]: systemd-udev-settle.service is deprecated. Please fix zfs-import-cache.service not to pull it in
Log of my efforts
[jaro20n robert]# systemctl status zfs-import-cache.service
○ zfs-import-cache.service - Import ZFS pools by cache file
Loaded: loaded (/usr/lib/systemd/system/zfs-import-cache.service; enabled; vendor preset: enabled)
Active: inactive (dead)
Docs: man:zpool(8)
[jaro20n robert]# systemctl status zfs-mount.service
● zfs-mount.service - Mount ZFS filesystems
Loaded: loaded (/usr/lib/systemd/system/zfs-mount.service; enabled; vendor preset: enabled)
Active: active (exited) since Sun 2022-07-03 08:25:40 CDT; 9min ago
Docs: man:zfs(8)
Process: 2160 ExecStart=/usr/bin/zfs mount -a (code=exited, status=0/SUCCESS)
Main PID: 2160 (code=exited, status=0/SUCCESS)
CPU: 10ms
[jaro20n robert]# zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
hpool 2.59T 1.21T 1.38T - - 41% 46% 1.03x ONLINE -
[jaro20n robert]# zpool import fpool
[jaro20n robert]# zpool get cachefile fpool
NAME PROPERTY VALUE SOURCE
fpool cachefile - default
[jaro20n robert]# zpool set cachefile=/etc/zfs/zpool.cache fpool
[jaro20n robert]# zpool export fpool
[jaro20n robert]# zpool import fpool
REBOOT
[jaro20n robert]# zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
hpool 2.59T 1.21T 1.38T - - 41% 46% 1.03x ONLINE -
[jaro20n robert]# ls -l /etc/zfs/zpool.cache
-rw-r--r-- 1 root root 5220 Jul 3 08:38 /etc/zfs/zpool.cache
[jaro20n robert]# zpool import -a
[jaro20n robert]# zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
fpool 3.62T 3.06T 584G - - 22% 84% 1.00x ONLINE -
hpool 2.59T 1.21T 1.38T - - 41% 46% 1.03x ONLINE -
npool 4.55T 2.50T 2.05T - - 36% 54% 1.00x ONLINE -
cachefile is not a property saved to the pool itself, so a “-” value is normal. It instructs ZFS to cache the pool’s devices, options, etc., so the pool can be brought up quickly on the next reboot. It’s unique to the system/computer. (You can inspect the file /etc/zfs/zpool.cache after setting the cachefile for the pool.)
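One way to inspect it is with zdb, which can read a cache file directly (sketch; run as root):

```shell
# Dump the pool configurations currently recorded in the cache file
zdb -C -U /etc/zfs/zpool.cache
```

Any pool missing from that output will not be imported by zfs-import-cache.service at boot.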
It seems there’s an issue with your services: the normal mechanism that activates your pools at reboot is not working.
One shot in the dark is to export all the pools, manually delete the zpool.cache file, then re-import everything manually, set the cachefile, and reboot (without exporting first).
See if it possibly “cleans up” an errant cachefile or resets things to work properly again.
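As a sketch, that clean-up might look like the following (assuming the pool names from this thread, run as root, and nothing using the datasets):

```shell
# Export every pool so the cache file is no longer in use
zpool export fpool
zpool export npool
zpool export hpool

# Remove the (possibly stale) cache file
rm /etc/zfs/zpool.cache

# Re-import; setting cachefile rewrites /etc/zfs/zpool.cache
zpool import fpool
zpool import npool
zpool import hpool
zpool set cachefile=/etc/zfs/zpool.cache fpool
zpool set cachefile=/etc/zfs/zpool.cache npool
zpool set cachefile=/etc/zfs/zpool.cache hpool

# Reboot WITHOUT exporting, so the pools stay recorded in the cache
reboot
```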
I notice that my zpool.cache file correctly populates upon pool import (no need to specify the cachefile since it defaults to /etc/zfs/zpool.cache).
My zfs-import-cache.service is enabled. Upon reboot, no errors in the journal. Status shows no issues either.
However, if I manually invoke the service, then everything works. (Pool is imported via the zpool.cache file.)
sudo systemctl start zfs-import-cache.service
Keep in mind there was no need to do anything special with importing or exporting, nor did I need to set the cachefile, as ZFS defaults to /etc/zfs/zpool.cache anyways.
So an update might have introduced an issue somewhere in the service(s) (or relations between services/targets) that no longer does this automatically?
EDIT: It also might be due to the fact that, for whatever reason, the zfs-import-cache service is running too soon during the boot sequence, before the zfs module is loaded and/or the devices are ready. (Hence, manually invoking zfs-import-cache.service soon after bootup works as expected, without anything special needed.)
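One way to check that ordering hypothesis with standard systemd tooling (nothing ZFS-specific, run as root):

```shell
# Show what zfs-import-cache.service actually waited for during the last boot
systemd-analyze critical-chain zfs-import-cache.service

# Inspect its declared ordering and dependencies
systemctl show zfs-import-cache.service -p After -p Wants -p Requires

# See what the unit logged this boot (e.g. whether devices were missing)
journalctl -b -u zfs-import-cache.service
```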
EDIT 2: My hunch is there was a change in zfs-utils from version 2.1.4 → 2.1.5 that introduced this issue.
Thanks for looking into it. I can mount manually for now and hope a future release resolves it. If this were a server it would be a major problem, but it’s my workstation, so I can manage.
BTW… I did try shifting zfs in my mkinitcpio.conf, even to the last spot; no difference.
This might sound obvious, but did you rebuild the initramfs afterwards?
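On Manjaro that would be, after editing /etc/mkinitcpio.conf:

```shell
# Rebuild the initramfs for every installed kernel preset, then reboot
sudo mkinitcpio -P
```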
EDIT: You might have to use/enable the zfs-zed service now.
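If it isn’t already active, that would be:

```shell
# Enable and start the ZFS Event Daemon
sudo systemctl enable --now zfs-zed.service
```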
This is part of the handful of reasons why I don’t foresee ZFS as a daily driver for desktop Linux. Too many hoops to jump through, and it’s still a second-class citizen.
One alternative, in the meantime, is to stop/disable any use of the zfs-import-cache service, and rather enable an import service for each pool, using the included systemd unit templates.
This will activate your pools without using the cachefile. It shouldn’t be slow if you don’t have too many devices on the system.
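OpenZFS ships a zfs-import@.service template for this. A sketch using the pool names from this thread (run as root):

```shell
# Stop relying on the cache-file import path
sudo systemctl disable zfs-import-cache.service

# Enable a scan-based import unit per pool; the template typically
# runs `zpool import -N <pool>` for the named instance at boot
sudo systemctl enable zfs-import@fpool.service
sudo systemctl enable zfs-import@npool.service
```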
This appears to be an on-and-off perennial issue with Ubuntu, Arch, Manjaro, and other systemd distros, so it’s not unique to Manjaro users.