Instead of using fstab (which is the old way), try creating systemd.mount units, which you can start and stop to test.
Thank you, I will change to systemd.mount units.
Hopefully that will also avoid Dolphin freezing from time to time.
Since this topic is no longer Manjaro-related, I switched to the TrueNAS forum. I can't place a link at my early stage here.
Edited to make the link work.
Which mount option was not accepted?
Were your previous tests going to-and-from the server, or were they also contained to the local disks?
Are you saying you never once actually mounted a working NFS share from the server to your Manjaro client?
Somewhat off-topic, but for best throughput performance, use a 1 MiB recordsize (not 128 KiB). If the main purpose of the dataset is to read and write files similar to those on a typical desktop, then a 1 MiB recordsize works great. Smaller recordsizes sacrifice raw performance for the benefit of minimizing “write amplification”, which is unlikely to be your case unless you’re using specific software or databases. (A quick example of setting the recordsize follows the lists below.)
Larger ZFS recordsize:
Higher raw throughput performance, especially for reading/writing typical files
Superior and more efficient inline compression
Less ZFS overhead
Write amplification if frequently doing “small modifications” on large files
Terrible for database software
Smaller ZFS recordsize:
Less likely for write amplifications to occur
Perfect for database software that writes/fetches/modifies at a specific size (e.g., 16 KiB)
Slower raw throughput performance
Inferior and less efficient inline compression
More ZFS overhead
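For reference, a minimal sketch of checking and changing this on the TrueNAS side (pool/dataset below is a placeholder name; the new recordsize only applies to data written after the change):

zfs get recordsize pool/dataset      # show the current value
zfs set recordsize=1M pool/dataset   # affects only newly written blocks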
Where did he mention he was using ZFS as the filesystem?
Indirectly.
TrueNAS only allows a pure ZFS ecosystem. (While you can “import” Ext4 and NTFS, for example, they are for read-only purposes to copy data over to your ZFS pools.)
Aha, thanks, I didn’t know that because I never used TrueNAS.
Hi winnie,
as far as I remember, the wrong fstab option was “wdelay” or something. I cannot remember, sorry.
I had to empty mymount.automount using an emergency Manjaro ISO drive because my system did not boot. Now all is fine.
I have set up systemd mount units instead of using fstab.
mymount.mount:
[Unit]
Description=NAS share using NFS
[Mount]
What=192.200.1.0:/mnt/poolintel/dataset
Where=/mnt/nas/data
Type=nfs
Options=_netdev,vers=4,users,noatime
[Install]
WantedBy=multi-user.target
mymount.automount:
[Unit]
Description=Nas share
ConditionPathExists=/mnt/nas/data
[Automount]
Where=/mnt/nas/data
TimeoutIdleSec=60
[Install]
WantedBy=multi-user.target
However, no success on read speed from the TrueNAS NFS share.
fio and iperf3 results are good, but I still get only ~300 MB/s read using Dolphin.
Thanks for the help…
That condition is not needed, because the automount unit’s only function is to activate the mount unit.
You can enable and start only the automount unit, without the mount unit, for it to work…
So there’s no need for the [Install] section in the mount unit either.
PS:
A point of error you made: both units should be named according to the mount point they create, i.e. the “Where=” line inside.
In your case systemd-escape --path /mnt/nas/data gives “mnt-nas-data”, so you should name your units:
mnt-nas-data.mount
mnt-nas-data.automount
After that you can use systemctl cat mnt-nas-data.{auto,}mount to get the config printed out by systemd, including their paths at the top of that output.
You could even use systemctl status /mnt/nas/data to get the status of the mount unit.
(This will activate the mount if needed, because it accesses the path.)
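Put together, a short sketch of those steps (assuming the unit files live in /etc/systemd/system):

systemd-escape --path /mnt/nas/data   # prints: mnt-nas-data
sudo mv /etc/systemd/system/mymount.mount /etc/systemd/system/mnt-nas-data.mount
sudo mv /etc/systemd/system/mymount.automount /etc/systemd/system/mnt-nas-data.automount
sudo systemctl daemon-reload
sudo systemctl enable --now mnt-nas-data.automount   # the mount unit itself stays disabled
systemctl cat mnt-nas-data.{auto,}mount              # show the parsed config and file paths
systemctl status /mnt/nas/data                       # accessing the path triggers the mount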
Seems you forgot to add that to the Options= line.
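For illustration only, the Options= line from the mount unit above could carry nconnect like this (nconnect needs a reasonably recent client kernel, and the value 8 is just an example):

Options=_netdev,vers=4,users,noatime,nconnect=8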
Did all the changes including the nconnect magic. Well, up to 400 MB/s read from the NAS share.
Better than nothing.
How do I set MTU=9000 in Manjaro? I could not find any working solution.
I have to run ifconfig enp6s0 mtu 9000 after each reboot.
Thanks…
Edit:
I have two Intel SSDs in a stripe pool in TrueNAS, so the performance on the pool side should be enough.
That can be done via systemd units as well, see: MTUBytes
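For example, a minimal sketch using a systemd .link file, which systemd-udevd applies when the device is set up regardless of which network manager you use (the file name and MAC address below are placeholders; put in the MAC of enp6s0, then reboot for it to take effect):

/etc/systemd/network/10-enp6s0-mtu.link
[Match]
MACAddress=aa:bb:cc:dd:ee:ff
[Link]
MTUBytes=9000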
Hi all,
I find myself hating this share stuff. I will put it on hold for a while.
I never thought it would be that difficult. I cannot see any error messages anymore. What a mess!
I will put an 8 TB HDD into my desktop and will be happy for a while. At least I can start doing fun stuff.
Sorry for stealing your time.
What kind of error messages were you looking for?
Any logs can be accessed using journalctl on a system using systemd.
Check man journalctl…
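For instance (the unit names assume the renamed units from earlier in the thread):

journalctl -f                                                 # follow new messages while reproducing the issue
journalctl -u mnt-nas-data.mount -u mnt-nas-data.automount    # messages from the mount/automount units
journalctl -b -p err                                          # errors from the current boot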
I gave up on the SMB share. It was not possible to find any working solution, and I cannot stand error messages anymore (I know where they are).
I went back to NFS and accept the slow reads for now. SMB was a dead end for me.
Its performance seems to be lower than NFS anyway.