Awful ZFS performance

I’m troubleshooting some awful ZFS performance. I don’t have a non-Manjaro ZFS install available to test with right now, but that will be my next step.

For each test I formatted and trimmed the drive, a Crucial MX500 (500 GB).
Test: sudo fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=50G --readwrite=randrw --rwmixread=75
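For reference, the per-round prep was something like this (the device name is just a placeholder, not my actual disk):
sudo blkdiscard /dev/sdX        # TRIM the whole SSD before re-creating the filesystem/pool
sudo mkfs.ext4 /dev/sdX         # ext4 round; the ZFS rounds used zpool create instead (sketch further down)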

zfs created with:
ashift=12
atime=off
xattr=sa
primarycache=all
sync=disabled
logbias=throughput
kernel parameters:
init_on_alloc=0
init_on_free=0
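For context, creating the pool with those properties looks roughly like this (pool and device names are placeholders):
sudo zpool create -o ashift=12 -O atime=off -O xattr=sa -O primarycache=all -O sync=disabled -O logbias=throughput tank /dev/sdX
cat /proc/cmdline               # confirms init_on_alloc=0 init_on_free=0 are active after adding them to the boot line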

EXT4:
read iops : min=10672, max=53146, avg=45851.66, stdev=7948.82, samples=428
write iops : min= 3464, max=17870, avg=15281.09, stdev=2657.34, samples=428

ZFS (linux62-zfs 2.1.11-6):
read iops : min= 28, max= 7698, avg=1219.65, stdev=920.52, samples=16123
write iops : min= 6, max= 2528, avg=406.43, stdev=307.76, samples=16123

ZFS (linux61-zfs 2.1.11-8):
read iops : min= 20, max= 8242, avg=879.93, stdev=919.81, samples=22347
write iops : min= 4, max= 2734, avg=293.24, stdev=307.28, samples=22346

As you can see, the performance is terrible: the ZFS averages come out at roughly 2-3% of the ext4 results.

Is there anything I missed? Should I be using one of the other install methods instead of the prebuilt kernel module?
I’m stumped.

While I am not using ZFS myself, I can pretty much agree that ext4 outperforms ZFS by about two thirds. The simple reason is that ZFS prioritizes data integrity over performance and relies heavily on RAM caching. There is just a lot of overhead, which reduces direct read/write speed.
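If you want to see how much of the fio run is being served from RAM, ZFS exposes its ARC statistics on Linux in /proc/spl/kstat/zfs/arcstats; something like this gives a quick picture while the test runs:
grep -E '^(size|c_max|hits|misses) ' /proc/spl/kstat/zfs/arcstats   # ARC size, size limit, and hit/miss counters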

Anyway, whatever your reasons, I don’t recommend using ZFS with Manjaro on a normal desktop/workstation. It is only worth considering if you are using the machine as a NAS.
