ZFS pool is not imported during boot (using zfs-import-cache/zfs-mount), but no problem when manually restarting zfs-import-cache

TLDR:

  • the ZFS pool is not imported/mounted during boot even though zfs-import-cache/zfs-mount have been enabled (a quick sanity check of the unit chain is sketched below)
  • according to lsmod and journalctl, the ZFS kernel module is loaded
  • no problem importing/mounting the ZFS pool manually by running systemctl restart zfs-import-cache and systemctl restart zfs-mount
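
For what it's worth, my understanding (which may well be wrong) is that at boot zfs-import-cache is only pulled in via zfs-import.target, which in turn is pulled in by zfs.target, so enabling the two services on their own is not necessarily enough. A quick way to sanity-check the whole chain, assuming the stock OpenZFS unit files, would be something like:

    # list every ZFS-related unit and its enablement state
    systemctl list-unit-files 'zfs*'

    # check the targets that are supposed to request the import/mount services
    # at boot; if these are disabled, the services stay "enabled" but nothing
    # ever starts them during startup
    systemctl is-enabled zfs.target zfs-import.target zfs-import-cache.service zfs-mount.service

    # show what (if anything) wants zfs-import-cache pulled in
    systemctl list-dependencies --reverse zfs-import-cache.service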

Here are some console commands I ran while trying to investigate the issue (the journalctl output shown only includes the ZFS entries from after I restarted the server):

    ➜  ssh myuser@192.168.1.42
    myuser@192.168.1.42's password:
    
    Welcome to fish, the friendly interactive shell
    Type help for instructions on how to use fish
    
    myuser@myserver ~> zpool list
    no pools available
    
    myuser@myserver ~> zpool status
    no pools available
    
    myuser@myserver ~> zdb
    storage-myserver:
        version: 5000
        name: 'storage-myserver'
        state: 0
        txg: 1375447
        pool_guid: 617472370471881449724
        errata: 0
        hostname: 'myserver'
        com.delphix:has_per_vdev_zaps
        vdev_children: 1
        vdev_tree:
            type: 'root'
            id: 0
            guid: 6172370456471881449724
            create_txg: 4
            children[0]:
                type: 'raidz'
                id: 0
                guid: 1748034220473493990457
                nparity: 2
                metaslab_array: 138
                metaslab_shift: 34
                ashift: 13
                asize: 48009362538496
                is_log: 0
                create_txg: 4
                com.delphix:vdev_zap_top: 129
                children[0]:
                    type: 'disk'
                    id: 0
                    guid: 1391340476743518267315
                    path: '/dev/sdb'
                    whole_disk: 1
                    DTL: 5027
                    create_txg: 4
                    com.delphix:vdev_zap_leaf: 130
                children[1]:
                    type: 'disk'
                    id: 1
                    guid: 1784473315434353960342
                    path: '/dev/sdd'
                    whole_disk: 1
                    DTL: 5026
                    create_txg: 4
                    com.delphix:vdev_zap_leaf: 131
                children[2]:
                    type: 'disk'
                    id: 2
                    guid: 1148923454936736091715
                    path: '/dev/sdf'
                    whole_disk: 1
                    DTL: 5025
                    create_txg: 4
                    com.delphix:vdev_zap_leaf: 132
                children[3]:
                    type: 'disk'
                    id: 3
                    guid: 1382088269334592035142
                    path: '/dev/sdh'
                    whole_disk: 1
                    DTL: 5024
                    create_txg: 4
                    com.delphix:vdev_zap_leaf: 133
                children[4]:
                    type: 'disk'
                    id: 4
                    guid: 18112733835124758481
                    path: '/dev/sdc'
                    whole_disk: 1
                    DTL: 5023
                    create_txg: 4
                    com.delphix:vdev_zap_leaf: 134
                children[5]:
                    type: 'disk'
                    id: 5
                    guid: 9964993604748294083
                    path: '/dev/sde'
                    whole_disk: 1
                    DTL: 5022
                    create_txg: 4
                    com.delphix:vdev_zap_leaf: 135
                children[6]:
                    type: 'disk'
                    id: 6
                    guid: 463127445797571349817
                    path: '/dev/sdg'
                    whole_disk: 1
                    DTL: 5021
                    create_txg: 4
                    com.delphix:vdev_zap_leaf: 136
                children[7]:
                    type: 'disk'
                    id: 7
                    guid: 754974834897868814160
                    path: '/dev/sdi'
                    whole_disk: 1
                    DTL: 5020
                    create_txg: 4
                    com.delphix:vdev_zap_leaf: 137
        features_for_read:
            com.delphix:hole_birth
            com.delphix:embedded_data
            
    myuser@myserver ~> systemctl status zfs-import-cache
    ○ zfs-import-cache.service - Import ZFS pools by cache file
         Loaded: loaded (/usr/lib/systemd/system/zfs-import-cache.service; enabled; vendor preset: enabled)
         Active: inactive (dead)
           Docs: man:zpool(8)
           
    myuser@myserver ~ [3]> systemctl status zfs-mount
    ○ zfs-mount.service - Mount ZFS filesystems
         Loaded: loaded (/usr/lib/systemd/system/zfs-mount.service; enabled; vendor preset: enabled)
         Active: inactive (dead)
           Docs: man:zfs(8)
           
    myuser@myserver ~ [3]> journalctl | grep -i zfs
    Jan 28 13:25:43 myserver kernel: ZFS: Loaded module v2.1.2-1, ZFS pool version 5000, ZFS filesystem version 5
    Jan 28 13:25:43 myserver systemd-modules-load[323]: Inserted module 'zfs'
    
    myuser@myserver ~> lsmod | grep -i zfs
    zfs                  3899392  0
    zunicode              335872  1 zfs
    zzstd                 577536  1 zfs
    zlua                  184320  1 zfs
    zavl                   16384  1 zfs
    icp                   323584  1 zfs
    zcommon               102400  2 zfs,icp
    znvpair               106496  2 zfs,zcommon
    spl                   118784  6 zfs,icp,zzstd,znvpair,zcommon,zavl
    
    myuser@myserver ~ [4]> sudo -i
    [sudo] password for myuser:
    
    [root@myserver ~]# systemctl restart zfs-import-cache
    [root@myserver ~]# systemctl restart zfs-mount
    
    [root@myserver ~]# zpool list
    NAME             SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
    storage-myserver  43.7T  18.1T  25.6T        -         -     0%    41%  1.00x    ONLINE  -
    
    [root@myserver ~]# reboot now
    Connection to 192.168.1.42 closed by remote host.
    Connection to 192.168.1.42 closed.
    
    mydesktop in ~ took 9m35s
    
    ➜  ssh myuser@192.168.1.42
    myuser@192.168.1.42's password:
    
    Welcome to fish, the friendly interactive shell
    Type help for instructions on how to use fish
    
    myuser@myserver ~> sudo -i
    [sudo] password for myuser:
    
    [root@myserver ~]# systemctl restart zfs-import-cache
    
    [root@myserver ~]# systemctl restart zfs-mount
    
    [root@myserver ~]# zpool list
    NAME             SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
    storage-myserver  43.7T  18.1T  25.6T        -         -     0%    41%  1.00x    ONLINE  -
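
One more thing I still want to rule out (not shown in the session above) is the cache file itself: as far as I understand, zfs-import-cache only imports pools that are recorded in /etc/zfs/zpool.cache, so a missing or stale cache file would also leave the pool unimported at boot. A rough check, assuming the default cache file path:

    # does the cache file exist, and does it still describe the pool?
    ls -l /etc/zfs/zpool.cache
    zdb -C -U /etc/zfs/zpool.cache

    # if it is missing or stale, regenerate it while the pool is imported
    zpool set cachefile=/etc/zfs/zpool.cache storage-myserver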

Please provide information:

  • Does this also happen when booting with the fallback initramfs image (selected through GRUB)?
  • Is it necessary to load the zfs module early in the boot process? (see the sketch below for a way to check where it is loaded from)
  • Is it necessary to mount the ZFS pool later in the boot process?
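
For the second question, a quick way to see where the zfs module is coming from (assuming an Arch-style mkinitcpio setup; the image path is a guess) might be:

    # is zfs pulled in via an initramfs hook or via systemd-modules-load?
    grep -i zfs /etc/mkinitcpio.conf
    grep -ri zfs /etc/modules-load.d/

    # check whether the module/hook actually ended up in the current image
    lsinitcpio /boot/initramfs-linux.img | grep -i zfs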

I have exactly the opposite …

By default, my system reads the zfs-import-cache service during boot and hangs if a specific device is not present …
So I had to manually create a service that completely exports all the ZFS pools on each shutdown/reboot.
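
Roughly along these lines (just a sketch of the idea, not my exact unit; binary paths and the ordering may need adjusting for your distro):

    # /etc/systemd/system/zfs-export-on-shutdown.service  (name made up for this example)
    [Unit]
    Description=Export all ZFS pools at shutdown
    # make sure this unit is stopped (and the pools exported) before the ZFS units go down
    After=zfs.target zfs-mount.service

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    # nothing to do at boot; the actual work happens in ExecStop during shutdown/reboot
    ExecStart=/usr/bin/true
    ExecStop=/usr/bin/zpool export -a

    [Install]
    WantedBy=multi-user.target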

Are your zpool devices available during the boot process?
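
A quick way to check from the current boot (the device names sdb–sdi are taken from the zdb output above; adjust to your setup):

    # see when the member disks showed up relative to the ZFS units in this boot
    journalctl -b 0 -o short-monotonic | grep -E 'sd[b-i]|zfs'

    # confirm the devices the pool expects are actually present right now
    ls -l /dev/disk/by-id/ | grep -E 'sd[b-i]$'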