ZFS cannot set properties, e.g., cachefile

# zfs version
zfs-2.1.5-1
zfs-kmod-2.1.5-1

# uname -r
5.15.49-1-MANJARO

My system has 4 ZFS pools: hpool, npool, fpool, and tank. All except tank (backup/archive) are auto-mounted by setting ‘cachefile=/etc/zfs/zpool.cache’ on the pool. hpool works perfectly, but npool and fpool do not mount at boot.

While looking into the issue, I discovered that running ‘zpool set cachefile=/etc/zfs/zpool.cache’ on any of my pools results in …

[jaro20n robert]# zpool set cachefile=/etc/zfs/zpool.cache hpool
[jaro20n robert]# zpool get cachefile hpool
NAME   PROPERTY   VALUE      SOURCE
hpool  cachefile  -          default

The value is set to “-”, not the path.
If I set cachefile to none, export, then re-import, the value returns to the “-” default.

[jaro20n robert]# zpool set cachefile=none hpool
[jaro20n robert]# zpool get cachefile hpool
NAME   PROPERTY   VALUE      SOURCE
hpool  cachefile  none       local
[jaro20n robert]# zpool export hpool
[jaro20n robert]# zpool import hpool
[jaro20n robert]# zpool get cachefile hpool
NAME   PROPERTY   VALUE      SOURCE
hpool  cachefile  -          default

I was not having this problem at all prior to updating to 5.15.49-1-MANJARO.

Any thoughts?

Update your zfs packages again? (Then reboot.)
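
On Manjaro a full refresh would look something like this (then reboot):

sudo pacman -Syu   # full system upgrade; pulls in any rebuilt zfs packages
# or with pamac:
pamac update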

What is the package version for linux515-zfs ?

There was a recent update in which linux515-zfs was built against kernel 5.15.50; this has since been corrected in a subsequent update to linux515-zfs.

You should have the package linux515-zfs at version 2.1.5-1.0.

Not 2.1.5-1, but actually 2.1.5-1.0.


pamac info linux515-zfs

Name                  : linux515-zfs
Version               : 2.1.5-1.0
Provides              : zfs=2.1.5
Build Date            : Tue 28 Jun 2022 01:58:43 PM EDT

I posted that day with the issue that I couldn’t mount any pools. I pulled your build, I thought, and it resolved the issue. So is there an even newer build?

Seems I have the correct version.

[jaro20n robert]# pamac info linux515-zfs
Name                  : linux515-zfs
Version               : 2.1.5-1.0
Description           : Kernel modules for the Zettabyte File System.
URL                   : http://zfsonlinux.org/
Licenses              : CDDL
Repository            : extra
Installed Size        : 14.3 MB
Groups                : linux515-extramodules
Depends On            : linux515 kmod zfs-utils=2.1.5
Optional Dependencies : --
Provides              : zfs=2.1.5
Replaces              : --
Conflicts With        : --
Packager              : Mark Wagie <mark@manjaro.org>
Build Date            : Tue 28 Jun 2022 12:58:43 PM CDT
Validated By          : MD5 Sum  SHA-256 Sum  Signature

Did it resolve the issue after the reboot? Your post implies it did.

The original problem was that I could not import any pools. The build fixed that issue; I can now import all pools manually. However, I cannot set cachefile now, so the behavior still seems odd. I’ve rebooted many times and the cachefile issue persists.

What if you first manually and safely export the pools. Then manually re-import them. Then set the cachefile. Then reboot?

Remember, cachefile is not a permanent property. Think of it as a “dump” to be useful for subsequent reboots.
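
A minimal sketch of that sequence, using hpool from this thread as the example pool (repeat for each pool as needed):

zpool export hpool                               # safely export the pool
zpool import hpool                               # re-import it
zpool set cachefile=/etc/zfs/zpool.cache hpool   # dump its config into the cache file
reboot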

Negative, ‘Ghostrider’ – same issue.

However, what you’re saying about cachefile not being ‘permanent’ doesn’t seem right. I do recall seeing the path listed in “zpool get cachefile”. Am I wrong? Could I be chasing the wrong issue?

To summarize

  • zfs-import-cache and zfs-mount are running.
  • Import pool, set cachefile, export and re-import pool (just something I’ve always done).
  • Reboot
  • Pools are not imported automatically.
  • Manually run zpool import -a and the pools are imported ← if memory serves, this is coming from the cachefile (see the sketch below)
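
One way to verify where zpool import -a is getting its pool list is to point the import at the cache file explicitly; this mirrors what zfs-import-cache.service runs (-N imports without mounting, so zfs mount -a still handles the mounting):

zpool import -c /etc/zfs/zpool.cache -aN   # import all pools listed in the cache file
zfs mount -a                               # then mount their datasets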

I did notice 2 things…

  1. zfs-import-cache.service is “dead” in the log below
  2. a deprecation message during boot
journalctl | grep zfs-import-cache.service
Jul 03 08:23:04 jaro20n udevadm[56149]: systemd-udev-settle.service is deprecated. Please fix zfs-import-cache.service not to pull it in 
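
For reference, to see what still pulls in the deprecated settle service and how the unit is defined on disk, something like this should work:

systemctl list-dependencies --reverse systemd-udev-settle.service
systemctl cat zfs-import-cache.service   # unit file plus any drop-ins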

Log of my efforts

[jaro20n robert]# systemctl status zfs-import-cache.service
○ zfs-import-cache.service - Import ZFS pools by cache file
     Loaded: loaded (/usr/lib/systemd/system/zfs-import-cache.service; enabled; vendor preset: enabled)
     Active: inactive (dead)
       Docs: man:zpool(8)
[jaro20n robert]# systemctl status zfs-mount.service
● zfs-mount.service - Mount ZFS filesystems
     Loaded: loaded (/usr/lib/systemd/system/zfs-mount.service; enabled; vendor preset: enabled)
     Active: active (exited) since Sun 2022-07-03 08:25:40 CDT; 9min ago
       Docs: man:zfs(8)
    Process: 2160 ExecStart=/usr/bin/zfs mount -a (code=exited, status=0/SUCCESS)
   Main PID: 2160 (code=exited, status=0/SUCCESS)
        CPU: 10ms
[jaro20n robert]# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
hpool  2.59T  1.21T  1.38T        -         -    41%    46%  1.03x    ONLINE  -
[jaro20n robert]# zpool import fpool
[jaro20n robert]# zpool get cachefile fpool
NAME   PROPERTY   VALUE      SOURCE
fpool  cachefile  -          default
[jaro20n robert]# zpool set cachefile=/etc/zfs/zpool.cache fpool
[jaro20n robert]# zpool export fpool
[jaro20n robert]# zpool import fpool

REBOOT

[jaro20n robert]# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
hpool  2.59T  1.21T  1.38T        -         -    41%    46%  1.03x    ONLINE  -
[jaro20n robert]# ls -l /etc/zfs/zpool.cache
-rw-r--r-- 1 root root 5220 Jul  3 08:38 /etc/zfs/zpool.cache
[jaro20n robert]# zpool import -a
[jaro20n robert]# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
fpool  3.62T  3.06T   584G        -         -    22%    84%  1.00x    ONLINE  -
hpool  2.59T  1.21T  1.38T        -         -    41%    46%  1.03x    ONLINE  -
npool  4.55T  2.50T  2.05T        -         -    36%    54%  1.00x    ONLINE  -

cachefile is not a property saved to the pool itself, so a “-” value is normal. It instructs ZFS to cache the pool’s devices, options, etc., so the pool can be brought up quickly on the next reboot. It’s unique to the system/computer. (You can inspect the file /etc/zfs/zpool.cache after setting the cachefile for the pool.)
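
For example, zdb can decode the cache file so you can see exactly which pools it currently holds:

zdb -C -U /etc/zfs/zpool.cache   # print the cached pool configurations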

It seems there’s an issue with your services: the normal mechanism that activates your pools at reboot is not working.

Are you exporting the pools on shutdown/reboot?

What does this reveal?

journalctl -u zfs-import-cache.service
[jaro20n robert]# journalctl -u zfs-import-cache.service
Jul 03 08:23:04 jaro20n systemd[1]: Starting Import ZFS pools by cache file...
Jul 03 08:23:04 jaro20n zpool[56150]: no pools available to import
Jul 03 08:23:04 jaro20n systemd[1]: Finished Import ZFS pools by cache file.

Maybe an update did indeed bork the service?

I’ll try to test something out on my end.

One shot in the dark is to export all the pools, then manually delete the zpool.cache file, then re-import everything manually, set the cachefile, and reboot (without exporting first).

See if it possibly “cleans up” an errant cachefile or resets things to work properly again.
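
As a rough sketch, with fpool standing in for each of your pools:

zpool export fpool                               # repeat for every imported pool
rm /etc/zfs/zpool.cache                          # remove the possibly-errant cache file
zpool import fpool                               # re-import; repeat per pool
zpool set cachefile=/etc/zfs/zpool.cache fpool   # regenerate the cache entries
reboot                                           # this time without exporting first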

I did try that. That was one of the very first things I tried.

Getting the same issue here.

I notice that my zpool.cache file correctly populates upon pool import (no need to specify the cachefile since it defaults to /etc/zfs/zpool.cache).

My zfs-import-cache.service is enabled. Upon reboot, no errors in the journal. Status shows no issues either.

However, if I manually invoke the service, then everything works. (Pool is imported via the zpool.cache file.)

sudo systemctl start zfs-import-cache.service

Keep in mind there was no need to do anything special with importing or exporting, nor did I need to set the cachefile, as ZFS defaults to /etc/zfs/zpool.cache anyway.


So an update might have introduced an issue somewhere in the service(s), or in the relations between services/targets, such that this no longer happens automatically?


EDIT: It also might be due to the fact that, for whatever reason, the zfs-import-cache service is running too soon during the boot sequence, before the zfs module is loaded and/or the devices are ready. (Hence, manually invoking zfs-import-cache.service soon after bootup works as expected, without anything special needed.)
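
If that ordering hunch is right, one hypothetical (untested) workaround would be a drop-in that delays the import until modules are loaded and udev is up:

# /etc/systemd/system/zfs-import-cache.service.d/wait.conf  (hypothetical)
[Unit]
After=systemd-modules-load.service systemd-udevd.service

# apply it with:
systemctl daemon-reload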


EDIT 2: My hunch is there was a change in zfs-utils from version 2.1.4 → 2.1.5 that introduced this issue.
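
If so, downgrading zfs-utils from the pacman package cache would be one way to test that hunch (assuming the 2.1.4 package is still cached; the exact file name below is an example):

sudo pacman -U /var/cache/pacman/pkg/zfs-utils-2.1.4-1-x86_64.pkg.tar.zst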

Thanks for looking into it. I can import manually for now and hope a future release resolves this. If it were a server, it would be a major problem; however, it’s my workstation, so I can manage.

BTW… I did try shifting the zfs hook in my mkinitcpio.conf, even to the last spot – no difference.

This might sound obvious, but did you rebuild the initramfs afterwards?


EDIT: You might have to use/enable the zfs-zed service now.
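
On Manjaro/Arch that would be along these lines:

sudo mkinitcpio -P                            # rebuild the initramfs for all kernel presets
sudo systemctl enable --now zfs-zed.service   # enable the ZFS event daemon, if not already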

This is part of the handful of reasons why I don’t foresee ZFS as a daily driver for desktop Linux. Too many hoops to jump through, and it’s still a second-class citizen.

One alternative, in the meantime, is to stop/disable any use of the zfs-import-cache service, and instead enable an import service for each pool, using the included systemd unit templates.

This will activate your pools without using the cachefile. It shouldn’t be slow if you don’t have too many devices on the system.
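
A sketch of that setup, assuming the zfs-import@ unit template shipped with OpenZFS and the pool names from this thread (tank stays manual, and hpool already imports):

sudo systemctl disable zfs-import-cache.service
sudo systemctl enable zfs-import@npool.service   # one template instance per pool
sudo systemctl enable zfs-import@fpool.service
# zfs-mount.service then mounts the datasets as usual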

This appears to be an on-and-off perennial issue with Ubuntu, Arch, Manjaro, and other systemd distros, so it’s not unique to Manjaro users. :person_shrugging:

That is probably a good solution for me. Thanks.