10GbE Network slow on NFS/SMB/TCP

I was going to follow up and say you should try using nconnect=16 as a mount option on the client.


Ok, the new Synology NIC is installed, and I still have the same slow 340-350 MB/s read, while write is still 1.1 GB/s.

@winnie
Where should I put this nconnect=16?

That makes me think that the NIC in my desktop is causing this. I will double-check.
It’s an onboard Aquantia AQC-107 10G.

Jesus! I want this solved.

Cheers, Edgar

***Edit
I’ve installed TrueNAS again.

The Aquantia should be fine. For a test run I put the other 10G NIC into the desktop (Manjaro) in a PCIe slot and disabled the onboard Aquantia. The new Synology NIC is in the server, which is running TrueNAS.
Same slow read.

I also swapped the direction of the new Cat 7 cable.
No change here…:frowning:

I’m afraid I’ll have to dig into console output. Which one is best for finding errors in this case?

Thank you

$ sudo mount -t nfs -o rw,nconnect=16 <IP-address>:/<dataset> /mnt/<dataset>

nconnect=16 !!!

Thank you! Full speed in both directions.

My fstab entry

serverip:/mnt/intel/dataset  /mnt/nas-intel nfs  sync,no_wdelay,nconnect=16,_netdev,noauto,users,x-systemd.automount,x-systemd.mount-timeout=10,timeo=14,x-systemd.idle-timeout=1min 0 0
read :   0    16.98 MiB    1.06 GiB
write:   0    18.04 MiB    1.18 GiB

Never would have thought of that, because I’m not using any NFS, but I’m glad it got resolved.

/saved to the database in my mind for later recall. :+1:

Well, I rejoiced too soon.

Manjaro could not mount the NAS pool. I should have noticed the yellow mark on the folder in Dolphin. So the copy and read tests were actually done on the mount folder on the local disk.
One of the fstab options wasn’t accepted.

Here we go again…

Instead of using fstab (which is the old way), try creating systemd .mount units, which you can start and stop to test.

Thank you, I will change to systemd .mount units.
Hopefully that will also avoid Dolphin’s freezing from time to time.

Since this topic is no longer Manjaro related, I switched to the TrueNAS forum.
I can’t post a link at my early stage here.

edited to make link work

Which mount option was not accepted?

Were your previous tests going to and from the server, or were they also confined to the local disks?

Are you saying you never once actually mounted a working NFS share from the server to your Manjaro client?


Somewhat off-topic, but for best throughput performance, use a 1 MiB recordsize (not 128 KiB). If the main purpose of the dataset is to read and write files like those on a typical desktop, then a 1 MiB recordsize works great. Smaller recordsizes sacrifice raw performance for the benefit of minimizing “write amplification”, which is unlikely to apply in your case unless you’re running specific software or databases. (A command sketch for changing the recordsize follows the lists below.)


Larger ZFS recordsize:

  • :white_check_mark: Higher raw throughput performance, especially for reading/writing typical files
  • :white_check_mark: Superior and more efficient inline compression
  • :white_check_mark: Less ZFS overhead
  • :warning: Write amplification if frequently doing “small modifications” on large files
  • :no_entry: Terrible for database software

Smaller ZFS recordsize:

  • :white_check_mark: Less likely for write amplifications to occur
  • :white_check_mark: Perfect for database software that writes/fetches/modifies at a specific size (e.g., 16 KiB)
  • :warning: Slower raw throughput performance
  • :warning: Inferior and less efficient inline compression
  • :warning: More ZFS overhead
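
For what it’s worth, a minimal sketch of checking and changing this from a shell; “pool/dataset” is a placeholder, and on TrueNAS the same knob is exposed in the dataset’s edit screen:

# Show the current recordsize (the ZFS default is 128K)
zfs get recordsize pool/dataset

# Use 1 MiB records from now on; this only affects newly written
# blocks, existing data keeps the recordsize it was written with
zfs set recordsize=1M pool/dataset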

Where did he mention he was using ZFS as the filesystem? :thinking:

Indirectly. :point_down:

TrueNAS only allows a pure ZFS ecosystem. (While you can “import” Ext4 and NTFS, for example, they are for read-only purposes to copy data over to your ZFS pools.)


Aha, thanks, I didn’t know that because I’ve never used TrueNAS :wink:

Hi winnie,

as far as I remember, the wrong fstab option was “wdelay” or something. I can’t remember, sorry.
I had to empty mymount.automount from an emergency Manjaro ISO drive; my system did not boot. Now all is fine.

I have set up systemd mounts instead of using fstab.

mymount.mount:

[Unit]
Description=NAS share using NFS

[Mount]
What=192.200.1.0:/mnt/poolintel/dataset
Where=/mnt/nas/data
Type=nfs
Options=_netdev,vers=4,users,noatime

[Install]
WantedBy=multi-user.target


mymount.automount:

[Unit]
Description=Nas share
ConditionPathExists=/mnt/nas/data

[Automount]
Where=/mnt/nas/data
TimeoutIdleSec=60

[Install]
WantedBy=multi-user.target

However, no success on read speed from the TrueNAS NFS share.
fio results are good, and iperf3 too, but I still get ~300 MB/s reads in Dolphin.

Thanks for help…
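
For reference, the kind of checks meant above, sketched with the placeholder <IP-address> and the mount point from the unit files; the fio job name and sizes are arbitrary:

# On the TrueNAS box: run an iperf3 server
iperf3 -s

# On the client: test client-to-server, then the reverse direction (-R)
iperf3 -c <IP-address>
iperf3 -c <IP-address> -R

# Sequential 1 MiB reads against the NFS mount
fio --name=seqread --rw=read --bs=1M --size=4G --direct=1 --directory=/mnt/nas/data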

That condition is not needed, because the automount unit’s only functionality is to activate the mount unit.

You can enable and start just the automount unit, without the mount unit, and it will work…
So there’s no need for the [Install] section in the mount unit either.

PS:
One error you made: both units should be named after the mount point they create, i.e. the “Where=” line inside them.
In your case, systemd-escape --path /mnt/nas/data gives “mnt-nas-data”, so you should name your units:

  • mnt-nas-data.mount
  • mnt-nas-data.automount

After that you can use systemctl cat mnt-nas-data.{auto,}mount to get the config printed out by systemd, including each unit’s file path at the top of the output.
You could even use systemctl status /mnt/nas/data to get the status of the mount unit :wink:
(This will activate the mount if needed, because it accesses the path.)

Seems you forgot to add the nconnect option to the Options= line :wink:
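
Putting those corrections together, a sketch of the renamed pair, reusing the share and mount point from above and adding nconnect=16 to the options (only the automount unit keeps an [Install] section and gets enabled):

mnt-nas-data.mount:

[Unit]
Description=NAS share using NFS

[Mount]
What=192.200.1.0:/mnt/poolintel/dataset
Where=/mnt/nas/data
Type=nfs
Options=_netdev,vers=4,users,noatime,nconnect=16

mnt-nas-data.automount:

[Unit]
Description=NAS share

[Automount]
Where=/mnt/nas/data
TimeoutIdleSec=60

[Install]
WantedBy=multi-user.target

Then reload and enable just the automount unit:

systemctl daemon-reload
systemctl enable --now mnt-nas-data.automount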

Did all the changes, including the nconnect magic. Well, up to 400 MB/s read from the NAS share.
Better than nothing.

How do I set MTU=9000 permanently in Manjaro? I could not find any working solution.
I have to run ifconfig enp6s0 mtu 9000 after each reboot.

Thxs…

Edit***
I have two Intel SSDs in a striped pool in TrueNAS, so performance on the pool side should be sufficient.

That can be done via systemd units as well, see: MTUBytes
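
For example, a minimal sketch using a systemd .link file, assuming the interface is still enp6s0 (the file name is arbitrary; .link files are applied by udev, so this works regardless of which tool manages the interface):

/etc/systemd/network/10-enp6s0.link:

[Match]
OriginalName=enp6s0

[Link]
MTUBytes=9000

The setting is applied the next time the device is (re)initialized, e.g. after a reboot. Under NetworkManager (Manjaro’s default), nmcli connection modify <connection> 802-3-ethernet.mtu 9000 should achieve the same.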

Hi all,

I find myself hating this share stuff. I will put it on hold for a while.
I never thought it would be this difficult. I can’t even find any error messages anymore. What a mess!

I will put an 8 TB HDD into my desktop and be happy for a while. At least I can start doing fun stuff.

Sorry for stealing your time.

What kind of error messages were you looking for?
Any logs can be accessed using journalctl on a system using systemd :woman_shrugging:
Check man journalctl
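
A few invocations that could be useful here, sketched with the unit name from earlier in the thread:

# Messages of priority "error" and worse from the current boot
journalctl -b -p err

# Follow everything logged for the mount unit
journalctl -f -u mnt-nas-data.mount

# Kernel messages (NFS and TCP problems often show up here)
journalctl -k -b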

I gave up on the SMB share. I couldn’t find any working solution, and I can’t stand the error messages anymore. I know where they are now.
I went back to NFS and accept the slow read for now. SMB was a dead end for me.
Its performance seems to be lower than NFS anyway.