ZFS support + kernel, best approach

Hello, I have noticed that on my data HDD the “ntfs.mount” process is saturating a single core according to “top” and “htop” (near 100% usage), with a 45.0 wa (iowait) CPU value. I have also seen some lag (for example, HDD-related utilities like Thunar and glances are slow to start). So I was thinking of installing a different FS in the hope of solving this…

I need encryption on at least part of the data (currently done using VeraCrypt containers, which I think support only NTFS, exFAT…), and I have read that BTRFS encryption support is not ready yet. But I read that ZFS encryption should work (though I cannot yet picture how, practically, on an external USB data drive):
https://wiki.archlinux.org/index.php/ZFS#Native_encryption
https://docs.oracle.com/cd/E53394_01/html/E54801/gkkih.html


ZFS setup on Manjaro XFCE?
I have read some tutorials which suggest installing things like:
https://aur.archlinux.org/packages/zfs-linux/
https://aur.archlinux.org/packages/zfs-dkms/

So I tried “pamac build zfs-linux”, which returned an error.
The other command, “pamac install zfs-dkms”, showed no error, but I have not proceeded further; I wanted to ask what to do now so that ZFS support is installed in the best/most supported way, and whether any more steps are needed to enable it.

Me: Kernel: 5.8.11-1-MANJARO x86_64 bits: 64 Desktop: Xfce 4.14.2

Manjaro has prebuilt modules for zfs. You just install linux59-zfs.

Replace “linux59” with the actual kernel you are using.

If you are using the “latest” kernel meta package, it is linux-latest-zfs.

You can remove the zfs-dkms package.
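
For example (a sketch only; the “58” series matches the kernel reported earlier in this thread, so adjust it to whatever uname -r shows on your machine, and only remove zfs-dkms if it actually ended up installed):

$ uname -r
5.8.11-1-MANJARO
$ sudo pacman -S linux58-zfs
$ sudo pacman -Rns zfs-dkms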


$ sudo pacman -Ss linux*|grep zfs

extra/linux414-zfs 0.8.5-3 (linux414-extramodules)
extra/linux419-zfs 0.8.5-3 (linux419-extramodules)
extra/linux44-zfs 0.8.5-3 (linux44-extramodules)
extra/linux49-zfs 0.8.5-3 (linux49-extramodules)
extra/linux54-zfs 0.8.5-3 (linux54-extramodules)
extra/linux57-zfs 0.8.5-1 (linux57-extramodules)
extra/linux58-zfs 0.8.5-3 (linux58-extramodules)
extra/linux59-zfs 0.8.5-2 (linux59-extramodules)
community/linux-latest-zfs 5.8-2 (linux-latest-extramodules)
community/linux-lts-zfs 1:5.4-3 (linux-lts-extramodules)
community/linux54-rt-zfs 0.8.4-7 (linux54-rt-extramodules)
community/linux56-rt-zfs 0.8.5-1 (linux56-rt-extramodules)

$ uname -r

5.8.11-1-MANJARO

So the only thing I need to install to add ZFS support is:
$ sudo pacman -S linux58-zfs

resolving dependencies…
looking for conflicting packages…

Packages (2) zfs-utils-0.8.5-1 linux58-zfs-0.8.5-3

Total Download Size: 3,26 MiB
Total Installed Size: 8,79 MiB

:: Proceed with installation? [Y/n]

and then reboot, or run modprobe zfs; lsmod | grep zfs ?

@dalto says: If you are using the “latest” kernel meta package it is linux-latest-zfs

I am not sure whether I am using the meta package, and whether in my case (I mentioned the kernel version above) it is better to use “linux-latest-zfs” or “linux58-zfs”.

$ sudo pamac info linux-latest-zfs

Name : linux-latest-zfs
Version : 5.8-2
Description : Kernel modules for the Zettabyte File System (metapackage)
URL : https://www.manjaro.org/
Licenses : GPL
Repository : community
Groups : linux-latest-extramodules
Depends On : linux58-zfs
Replaces : linux318-zfs linux420-zfs linux50-zfs linux51-zfs linux52-zfs linux53-zfs linux55-zfs linux56-zfs
Conflicts With : linux318-zfs linux420-zfs linux50-zfs linux51-zfs linux52-zfs linux53-zfs linux55-zfs linux56-zfs
Packager : Philip Mueller philm@manjaro.org
Build Date : 24.9.2020
Signatures : Yes

$ sudo pamac info linux58-zfs

Name : linux58-zfs
Version : 0.8.5-3
Description : Kernel modules for the Zettabyte File System.
URL : http://zfsonlinux.org/
Licenses : CDDL
Repository : extra
Installed Size : 1,6 MB
Groups : linux58-extramodules
Depends On : linux58 kmod zfs-utils=0.8.5
Make Dependencies : linux58-headers
Provides : zfs=0.8.5
Replaces : linux54-spl<=0.7.13
Packager : Philip Mueller philm@manjaro.org
Build Date : 17.10.2020
Signatures : Yes

? Thank you.

Can you share the output of pacman -Q | grep "^linux"?

It is:

linux-api-headers 5.8-1
linux-firmware 20201005.r1732.58d41d0-1
linux-lts-headers 1:5.4-3
linux54 5.4.72-1
linux54-headers 5.4.72-1
linux58 5.8.16-2
linux58-headers 5.8.16-2

It looks like you need to install linux54-zfs and linux58-zfs. Then either reboot or manually load the zfs modules.

Unrelated to this, you should either remove linux-lts-headers or install linux-lts and linux-lts-zfs. Otherwise your lts kernel and headers versions will be different next time there is a new lts kernel.
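
That would be, for example, either:

$ sudo pacman -R linux-lts-headers

or, if you actually want the LTS kernel as well:

$ sudo pacman -S linux-lts linux-lts-zfs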

It looks like you need to install linux54-zfs and linux58-zfs

@dalto thank you, and for other readers who may want the same, how did you figure out I need these two?

So I ran "sudo pacman -S linux54-zfs linux58-zfs".

Among other output it showed one fatal error ("Module zfs not found in directory /lib/modules/5.8.11-1-MANJARO").

"lsmod | grep zfs" gives empty output, and "modprobe zfs" prints the above-mentioned fatal error…

$ find /lib/modules/ -iname "zfs*"

/lib/modules/extramodules-5.8-MANJARO/zfs.ko.gz
/lib/modules/extramodules-5.4-MANJARO/zfs.ko.gz

Because those are the two kernels you have installed, which can be seen from the output above. Notably these lines:

linux54 5.4.72-1
linux54-headers 5.4.72-1
linux58 5.8.16-2
linux58-headers 5.8.16-2

This usually means your running kernel is different than your installed kernel. In this case, you are running version 5.8.11 but you have version 5.8.16 installed.

The solution is to reboot. As a side note, you should always reboot after a kernel update.
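
A quick way to spot such a mismatch (using the versions from this thread as an example) is to compare the running kernel with the installed package:

$ uname -r
5.8.11-1-MANJARO
$ pacman -Q linux58
linux58 5.8.16-2

If the two versions differ, the module packages are built for the installed version, not the running one, so a reboot is needed.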


Thank you, indeed, after a restart the installation worked, but after the reboot the zfs module was NOT loaded (unsure why); I had to load it manually:

$ zpool upgrade -v

The ZFS modules are not loaded.
Try running ‘/sbin/modprobe zfs’ as root to load them.

$ sudo /sbin/modprobe zfs
$ zpool upgrade -v

This system supports ZFS pool feature flags.

$ sudo zpool create data /dev/sdb
$ zpool list

NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
data  14,5T   110K  14,5T        -         -     0%     0%  1.00x    ONLINE  -

$ zpool status

  pool: data
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	data        ONLINE       0     0     0
	  sdb       ONLINE       0     0     0

errors: No known data errors

$ zfs list data

NAME   USED  AVAIL     REFER  MOUNTPOINT
data  88,5K  14,1T       24K  /data

$ sudo zfs create -o compression=on -o atime=off -o encryption=on -o keyformat=passphrase data/set
(data is the pool name and set is the dataset name; something like a virtual device and its partition)

$ df -h|grep data

data             15T  128K   15T   1% /data
data/set         15T  128K   15T   1% /data/set

The mountpoint was not accessible by my user because it was mounted as root :-S. So I did: sudo chown -R username:username /data

$ zfs get all | egrep "dedup|compres|atime"

...
data/set  compressratio         1.95x                    -
data/set  compression           on                       local
data/set  atime                 off                      local
data/set  dedup                 off                      default
data/set  refcompressratio      1.95x                    -
data/set  relatime              off                      default

$ sudo zfs umount data

$ sudo zfs mount data
$ zpool import

I hope I have not missed any basics. It is weird to a noob like me that it does not prompt for the zfs decryption passphrase after unmounting the pool and mounting it again.

Comparison of NTFS vs ZFS; some values show a huge difference:

zfs-dkms is correct. You could also install the ready-to-go modules provided by Manjaro, one package per kernel, e.g.:

linux54-zfs 0.8.5-3
linux58-zfs 0.8.5-3
linux59-zfs 0.8.5-2

But anyway, if zfs-dkms installed without issue you are all set. I assume zfs-dkms also pulls in the zfs-utils package, which is mandatory.

To check if everything is properly installed you can run:

zpool --version

If that fails to execute, you have not loaded the zfs module. In that case do modprobe zfs and try again.

Then you just need to create the pool and the datasets you want.

But be aware: if you use zfs on external drives, they will not be automounted after plugging them in. This is because the tools which typically do the automounting do not understand zfs. You need to create your own service for automounting, or just do it manually with “zpool import mypool”. And you should export the pool before you eject the drive: “zpool export mypool”.
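
A minimal sketch of such a manual helper, assuming a pool called “mypool” (the name, the by-id search path, and the -l flag for loading encryption keys are just examples):

#!/bin/sh
# usb-zfs.sh attach|detach -- manually import/export a ZFS pool on an external drive
POOL=mypool
case "$1" in
  attach) zpool import -l -d /dev/disk/by-id "$POOL" ;;  # -l prompts for the passphrase of encrypted datasets
  detach) zpool export "$POOL" ;;                        # always export before unplugging the drive
  *) echo "usage: $0 attach|detach" >&2; exit 1 ;;
esac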

There is a lot more about zfs which you should know. This is a good read to get started: ZFS - ArchWiki

EDIT:
Sorry, I didn't read through the whole thread. I should have seen that you already have it running. My post here is obsolete, but I'll leave it as is instead of deleting it.


The passphrase should be forgotten when you unmount the dataset. Please share the encryption parameters of the dataset:

zfs get all <dataset> | egrep "encr|key"

Benchmarking is a tricky thing. First of all, what kind of device are you benchmarking here? Is this the USB drive you were mentioning in your starting post? If yes, I would be skeptical about the 1,8 GB/s speed. Even USB 3.0 or 3.1 does not provide that speed, not to mention that no regular bare-metal HDD achieves more than about 200 MB/s. Please show the specs of the device.

If you want to optimize zfs for sequential read/write performance, you should set recordsize=1M. And your benchmark test size should always match your RAM size or be bigger; this is to prevent caching effects.
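
For example (a sketch using the dataset name from earlier in this thread; the 16G size is only a placeholder and should be at least as large as your RAM, and recordsize only affects files written after the change):

$ sudo zfs set recordsize=1M data/set
$ fio --name=seqwrite --directory=/data/set --rw=write --bs=1M --size=16G --ioengine=psync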

Thank you both. Here I describe the process that apparently ended in success:

$ zfs get all data/set | egrep "encr|key"
data/set  encryption            aes-256-gcm              -
data/set  keylocation           prompt                   local
data/set  keyformat             passphrase               -
data/set  encryptionroot        data/set                 -
data/set  keystatus             available                -

$ sudo zpool export data
$ sudo zpool import data

cannot import 'data': no such pool available

and other commands like “zpool import -a” did not work either. I had to find the partition using “lsblk”, and then this worked:
sudo zpool import -d /dev/sdb1 poolnamehere
though it did not ask for any decryption passphrase… maybe because the pool is not encrypted, only the dataset?
$ zfs get all|grep crypt

data      encryption            off                      default
data/set  encryption            aes-256-gcm              -
data/set  encryptionroot        data/set                 -

$ zpool get all data|grep crypt

data feature@encryption active local

The first command shows encryption off and the second shows the feature active. When I tried “zpool create -o encryption=on” etc. it complained about an invalid pool property, and setting it afterwards on the existing pool fails as well:

$ zfs set encryption=on data

cannot set property for 'data': 'encryption' is readonly

Then I found the -O (not -o) parameter, which does not complain about the property; however, creating a new pool on the same disk fails because the disk still belongs to the exported pool:
sudo zpool create -O encryption=on data2 /dev/sdb

/dev/sdb1 is part of exported pool ‘data’

I tried to destroy the pool (both mounted and unmounted), as I have not found a way to apply encryption to it after creation, but it shows a confusing message:
$ zpool destroy -f data
cannot open 'data': no such pool


But anyway, the following is the process that worked and is repeatable. At the end of this post I have one more question.

Load the zfs kernel module (it was not loaded automatically in 5.8.16-2-MANJARO after reboot, as mentioned):

sudo /sbin/modprobe zfs

Attempt to create a pool named “zfsp” with encryption, compression on, and atime off to improve IOPS:

sudo zpool create -o feature@encryption=enabled -O encryption=on -O keyformat=passphrase -O compression=on -O atime=off zfsp /dev/sdb

It mounted the drive decrypted, though my mountpoint /zfsp had root access rights, not user ones :-/, so I had to change it:
sudo chown -R user:user /zfsp

I could then copy files and such to my mount point /zfsp

Unmount/lock the pool and dataset:

sudo zpool export zfsp
(“zfs umount” alone does not lock it, as the dataset can then be mounted again without the passphrase)
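
(If one only wants to lock the dataset without exporting the whole pool, I think unloading the key explicitly should also work, something like:)

sudo zfs unmount zfsp
sudo zfs unload-key zfsp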

Import the pool again (if that is the correct term):

sudo zpool import zfsp

Mount/decrypt the dataset (if that is the correct term?) with the -l parameter to enter the passphrase (otherwise it complains “encryption key not loaded”):

sudo zfs mount -l zfsp


Then I am unsure whether I need to create a dataset at all, since the above-mentioned “zpool create” command already made working encrypted data storage. Anyway, I tried it:

zpool status;sudo zfs create -o compression=on -o atime=off -o encryption=on -o keyformat=passphrase zfsp/vd

result: $ sudo zfs list

NAME      USED  AVAIL     REFER  MOUNTPOINT
zfsp      406K  14,1T      115K  /zfsp
zfsp/vd  99,5K  14,1T     99,5K  /zfsp/vd

I am unsure how this vd dataset is beneficial, or whether I need it, when I was able to write directly to /zfsp anyway.

Your feedback/ideas are very welcome. Thank you in advance if you get time to reply.

It wasn’t loaded because you didn’t have any zfs datasets being mounted and probably hadn’t enabled any of the zfs services, so there was no reason to load it.
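
If you want the module and your pools to come up automatically at boot, enabling the systemd units shipped with zfs-utils should do it (a sketch, assuming the standard OpenZFS unit names):

$ sudo systemctl enable zfs-import-cache.service zfs-mount.service zfs.target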

Other than that, zfs requires some upfront design and I think you need to understand the concepts of pools and datasets in a little more detail.

To put it in simplistic terms, you can think of a zpool as a container for your datasets. Likewise, the datasets represent the filesystem volumes.

I, personally, don’t recommend encrypting an entire zpool. That only limits your flexibility. Instead, encrypt the datasets within the zpool.

By default, zfs will mount your entire zpool under /zpoolname but there usually isn’t a reason to do that. I usually create zpools without a mountpoint and then mount the datasets instead. It will work either way, but the latter approach will get you thinking less about the zpool and more about the datasets themselves.

There are a couple of attributes that must be set on the zpool and cannot be changed later, such as ashift, so make sure you get those correct up front.
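
Putting those pieces together, a sketch might look like this (the pool name “tank”, the dataset “tank/secure”, the mountpoint and the DISKID path are all placeholders):

$ sudo zpool create -o ashift=12 -O mountpoint=none tank /dev/disk/by-id/DISKID
$ sudo zfs create -o encryption=on -o keyformat=passphrase -o mountpoint=/mnt/secure tank/secure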


Let me second what @dalto just explained.

You should not encrypt pools but datasets. This way the pool can always be imported. Once the pool is imported, zfs tries to mount all the datasets in the pool, and this mount process then requires the password.
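
In other words, roughly (a sketch reusing the pool/dataset names from earlier in the thread):

$ sudo zpool import zfsp
$ sudo zfs mount -l zfsp/vd

Importing the pool never needs the passphrase; it is mounting the encrypted dataset (-l loads the key) that prompts for it.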

And I still suggest you do some reading on zfs.

https://wiki.archlinux.org/index.php/ZFS
https://wiki.archlinux.org/index.php/ZFS#Native_encryption

Better read this first. It can save you a lot of time and headaches.

This is in reaction to my installed packages (pacman -Q | grep "^linux"):

linux-api-headers 5.8-1
linux-firmware 20201005.r1732.58d41d0-1
linux-lts-headers 1:5.4-3
linux54 5.4.72-1
linux54-headers 5.4.72-1
linux58 5.8.16-2
linux58-headers 5.8.16-2

So I have removed it (sudo pacman -R linux-lts-headers) and now the output is:

linux-api-headers 5.8-1
linux-firmware 20201005.r1732.58d41d0-1
linux54 5.4.72-1
linux54-headers 5.4.72-1
linux54-zfs 0.8.5-3
linux58 5.8.16-2
linux58-headers 5.8.16-2
linux58-zfs 0.8.5-3

ZFS seems to be working, so this should be ok.

Indeed, thanks for mentioning all that and confirming some things. I tried interrupting the connection and the pool ends up in a suspended state; one may then have to reboot, or re-link the device to the path ZFS expects, if the zpool was not imported by disk ID (ls -l /dev/disk/by-id/), because the device name changes after reconnection.
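
To avoid that, re-importing via the stable by-id paths should work (a sketch using the pool name from above):

$ sudo zpool export zfsp
$ sudo zpool import -d /dev/disk/by-id zfsp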

Benchmarking is a tricky thing

I should have used a better tool; I think I will remove the screenshot as it seems useless. If anyone has an NTFS drive and is going to install ZFS, maybe it is better to test using something like this:

pamac install fio --no-confirm 2>/dev/null || yum install fio -y -q 2>/dev/null || apt-get install fio -y -q 2>/dev/null; fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75; rm test

Sample result (ext4 + LUKS Manjaro system drive, Samsung 840):

Thanks, I have read about ashift thanks to you and set it accordingly. Regarding encryption, this is how it stands:

$ zpool get all | egrep "ashift|compress|encrypt"
zfsp ashift 12 local
zfsp feature@lz4_compress active local
zfsp feature@encryption active local

$ zfs get all | egrep "ashift|compress|encrypt"
zfsp compression off default
zfsp encryption off default

This is the product of (the ID comes from ls -l /dev/disk/by-id/):
sudo zpool create -o ashift=12 -o feature@async_destroy=enabled -o feature@empty_bpobj=enabled -o feature@lz4_compress=enabled poolname /dev/disk/by-id/IDHERE

Yes, I have bookmarked the wiki page and am reading it now.

One more question: today $ sudo pacman -Syu says this:
:: Replace linux58-headers with community/linux-latest-headers? [Y/n]
Question: should I always confirm these kinds of kernel package replacements? And what commands can I run to check whether ZFS support exists for the new kernel, so I do not break my ZFS setup?
sudo pacman -Ss zfs shows

community/linux-latest-zfs 5.10-1 (linux-latest-extramodules)
Kernel modules for the Zettabyte File System (metapackage)
community/linux-lts-zfs 1:5.4-4 (linux-lts-extramodules)
Kernel modules for the Zettabyte File System (metapackage)
community/linux54-rt-zfs 0.8.5-4 (linux54-rt-extramodules)
Kernel modules for the Zettabyte File System.

which is promising since it lists one that matches the new kernel version (5.10.x); somehow it is already updated?
$ sudo pacman -Q linux
linux510 5.10.2-2

UPDATE: yes, apparently the pacman upgrade failed to keep ZFS support, and after the upgrade & reboot I had to run:
$ pamac install linux-latest-zfs
$ sudo /sbin/modprobe zfs
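
For next time, a sketch of what I plan to check before accepting such a kernel replacement, plus making the module load on every boot (the file name zfs.conf is just my choice):

$ pacman -Ss linux510-zfs
$ echo zfs | sudo tee /etc/modules-load.d/zfs.conf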