10Gb network slow on NFS/SMB/TCP

Hi there,

Because this is my first post, I want to say thank you to the unknown members who have helped me over time.
I've been using Manjaro for 2 years now and am very happy with it. Almost all problems could be solved thanks to the great support and endless night shifts scouring Google for a solution.
I have finally hit a dead end.
I hope you can help me out of this.

I have two PCs connected via two 10Gb NICs. Both have fast NVMe drives to work with.
Both run the same Manjaro installation and should be ‘strong’ enough to handle the network traffic as host/client.
Running iperf shows both directions are saturated.
However, dd on a share (NFS/SMB) shows more or less the same poor read performance I get in Dolphin transfers.
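
For reference, the kind of dd test I run over the mounted share looks roughly like this (mount point and file name are just placeholders):

$ dd if=/dev/zero of=/mnt/music/testfile bs=1M count=10240 conv=fdatasync   # write test over the share
$ dd if=/mnt/music/testfile of=/dev/null bs=1M                              # read test over the share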

I'm happy with the write speeds, which saturate the 10Gb LAN. But reading files of whatever size is super slow: in general about a third of the 1.2GB/s I get when writing.
I've tried NFSv4 and SMB tuning to the limit now; I don't think it will speed up significantly, even if I find the perfect settings.

Before this I had TrueNAS running on the smaller PC. I got 1.2GB/s writes and 700MB/s reads there.
Maybe I should go back to TrueNAS as the server?
I tested Manjaro as the server because I was thinking TrueNAS was slow at reads…

One thing: shouldn't it be the other way around, with writes being the slower direction?

So I wonder what information you might need in order to help me.
Winter is coming and I want to do video editing, so I need a fast NAS soon.

Thank you for your time,
Edgar

How are you accessing the NFS / SMB shares? Are you using mount / systemd / fstab, or are you using the built-in method within Dolphin itself?

Hi winnie,
thank you for the quick response.
I use fstab on the client side and exports on the server.

server exports: /mnt/intel-1/test 192.200.1.1(rw,async,no_root_squash,subtree_check)
client fstab: 192.200.1.20:/mnt/intel-1/test /mnt/music nfs async,_netdev,noauto,users,x-systemd.automount,x-systemd.mount-timeout=10,timeo=14,x-systemd.idle-timeout=1min 0 0
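
For completeness, applying and verifying these settings looks roughly like this (assuming the standard nfs-utils tools; paths match the exports/fstab above):

$ sudo exportfs -ra       # on the server: reload /etc/exports
$ sudo mount /mnt/music   # on the client: trigger the fstab entry
$ findmnt /mnt/music      # on the client: show what is mounted there and with which options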

I have a new guess: could the speed also depend on the server's system volume?
It is kind of slow in an M.2 slot (2x SATA3, ~700MB/s read / ~700MB/s write). That could also explain the fastest speed I get when reading: it matches the read speed of the root disk.
I will try to put the system disk into a PCIe slot, which should double the system root speed. I had assumed the system disk speed didn't matter.
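
A quick local check of that theory on the server would be something like this (the device name is just an example; adjust it to the actual root disk):

$ sudo dd if=/dev/nvme0n1 of=/dev/null bs=1M count=4096 iflag=direct   # raw sequential read, bypassing the page cache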

Cheers, Edgar

You used dd, but I think the I/O performance for read and write operations depends on two separate filesystems on two different devices, on your configuration (if you have RAID, what type of RAID, etc.) and on some hardware limitations, e.g. a weak CPU on your server (it has to do a lot of work for the filesystem processes).

Do the Manjaro server and TrueNAS show the same slow reading?

Do you mean a dual SATA-to-M.2 adapter/converter? But there isn't such a thing.

Hi Zesko,
thanks for the help.

I have just installed TrueNAS again: 1.2GB/s write and 600MB/s max read. Fresh install, no tuning.

The M.2 slot on the board blocks 2 SATA ports when populated. But that idea was silly; it doesn't count.

Maybe the CPU of the server is too weak? RAM slots are filled to the max: Intel(R) Core™ i7-4770K CPU @ 3.50GHz and 32GB RAM.
Samsung Pro/Evo and Intel NVMe drives to choose from.

I also double-checked the dd command.
So I was wrong about dd on TrueNAS: it gets 2.1GB/s write and 1.8GB/s read on an NVMe pool in the TrueNAS console.

Cheers, Edgar

@metanamorph which software did you use to share via NFS? Did you use [root tip] [How To] Share data using NFS? :thinking:

You might also check this out: #Performance_tuning @Archwiki-NFS, which mentions “tune the rsize and wsize mount options” :wink:
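
A minimal sketch of what that tuning looks like as a one-off test mount (server IP and paths are taken from the earlier posts; the 1 MiB values are just a common starting point, not a recommendation):

$ sudo mount -t nfs -o rw,rsize=1048576,wsize=1048576 192.200.1.20:/mnt/intel-1/test /mnt/music
$ nfsstat -m   # shows the rsize/wsize the server actually negotiated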

Hi TriMoon,

I followed the given tutorial exactly. The tuning link isn't new to me either, unfortunately.
Right now the server is running TrueNAS again.

One option to check: I will order a NIC with a PCIe x4 interface, not x8. I'm afraid the Z97 board/CPU cannot handle the x8 PCIe card. It is an ASRock. Hopefully there are PCIe x4 NICs that do 10Gb/s.
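
Before ordering, a way to check what the current card actually negotiated would be something like this (the PCI address is just an example; take it from the first command's output):

$ lspci | grep -i ethernet                                 # find the NIC's PCI address
$ sudo lspci -s 01:00.0 -vv | grep -i 'lnkcap\|lnksta'     # compare maximum vs. negotiated link width and speed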

Best, Edgar

So you have 3 different disks, and you only tested dd on the NVMe SSD in the M.2 slot, if I understand correctly.

No, your CPU is strong enough for a small NAS at home; it's not a Raspberry Pi.

What dd options did you use? Write/read performance depends on the block size and on the filesystem cache in RAM. Try to bypass the RAM cache, to benchmark the real disk rather than RAM.
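
A rough sketch of what that looks like (file paths are placeholders; either use a file larger than RAM or drop the caches first):

$ sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'                                # drop the page cache before the read test
$ dd if=/dev/zero of=/mnt/intel-1/test/bigfile bs=1M count=8192 oflag=direct    # write, bypassing the cache
$ dd if=/mnt/intel-1/test/bigfile of=/dev/null bs=1M iflag=direct               # read, bypassing the cache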

I'm confused why different software on the same hardware gives this much of a difference :woman_shrugging:

When it comes to using the M.2 slot: yeah, some boards share PCIe lanes between the M.2 slot and other slots/ports, like my ASUS Z99; there is even a BIOS setting for that, which explains it being mutually exclusive.
Or maybe you need to change the operating settings for your PCIe slots :woman_shrugging:

Manjaro is neither built nor configured for the server scenario; TrueNAS, however, is, which explains the differences.

There will be differences in read vs. write, no matter how you go about it.

When you refer to dd, I assume you mean disk images, which can be several gigabytes and fall into the category of huge files; depending on the size, this puts a great strain on the server system.

Generally speaking, the process of sending a huge file to a server is:

  • the client reads data from storage
  • the client buffers the data while sending
  • the server buffers to cache
  • the server writes the cache to disk

The speed of that operation is determined by the server's configuration, especially the amount of RAM available and how many simultaneous copies are going on.

But the sending system also needs to read the data from somewhere, which again creates a scenario where the read process may be faster than the transfer process, thus queuing data in the sending system's RAM.

The other way around, reading a huge file from the server:

  • the server queries the cache
  • the server reads the file from disk or from the cache and sends it to the client
  • the client buffers to cache
  • the client writes the cache to disk

The speed of that operation is only as fast as the connection and the server's disk can read, and the result is very different depending on whether you read from cache or from disk. If the file must be read from disk in its entirety, it will be much slower than reading a file cached in RAM, or worse, cached in a swap file.
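
A simple way to see that cache effect on the server side (the file path is just an example):

$ free -h                                              # note the "buff/cache" value before the read
$ dd if=/mnt/intel-1/test/bigfile of=/dev/null bs=1M   # first read comes from disk
$ free -h                                              # "buff/cache" grows as the file lands in RAM; a second read is served from there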

So there is no golden standard and no fixed metric you can refer to; it all depends on configuration.

I don't remember where I read it, but in some corporate environment they had issues with rsync, and it turned out to be a previously undiscovered kernel bug, for which they of course contributed the fix back to the kernel.

The point being: there are a lot of tuning options to select from, and for a high-density server environment I would definitely go for an OS targeting such environments, as Manjaro is not geared towards such solutions.

Your post explains it in more detail than I would have, but yeah, true…
Maybe he should use a different kernel instead of the default then :woman_shrugging:
(Because Linux is used in many scenarios)

Thank you, linux-aarhus, for the good explanation.
Yes, I switched to TrueNAS, but I gave Manjaro a try.

Well, there is a bottleneck in reading, no matter which OS. So I have ordered a Synology E10G18-T1
and will return the PCI Express 2.1 x8 card.

Crossing fingers here.

Anyway, thank you for the support.

Synology is an excellent product.

I am not sure what your intentions are or what you hope to achieve by ordering a Synology NIC.

I'd recommend buying some enterprise-grade equipment instead of some kitchen-sink stuff where the composition may work against the desired result.

I still have the predecessor to the 15-series (1010+) running and, other than the inevitable disk replacements, I have had no issues for many, many years. I don't recall the exact time, but the late 00s is a valid guess. With its 2x1G NICs bonded into one, it has performed excellently for, at the time of writing, approx. 15 years.

I was going to follow up and say you should try using nconnect=16 as a mount option on the client.

OK, the new Synology NIC is installed and I have the same slow 340-350MB/s read, while write is still 1.1GB/s.

@winnie
Where should I try this nconnect=16?

That makes me think the NIC in my desktop is causing this. I will double-check.
It's an onboard Aquantia AQC-107 10G.

Jesus! I want this solved.

Cheers, Edgar

***Edit
I've installed TrueNAS again.

The Aquantia should be fine. For a test run I put the other 10G NIC into the desktop (Manjaro) in a PCIe slot and disabled the onboard Aquantia. The new Synology NIC is in the server, which is running TrueNAS.
Same slow read.

I also switched the direction of the new Cat 7 cable.
No change here… :frowning:

I'm afraid I should dig into the console output. Which one is best for finding errors in this case?

Thank you

$ sudo mount -t nfs -o rw,nconnect=16 <IP-address>:/<dataset> /mnt/<dataset>

nconnect=16 !!!

Thank you! Full speed in both directions.

My fstab entry

serverip:/mnt/intel/dataset  /mnt/nas-intel nfs  sync,no_wdelay,nconnect=16,_netdev,noauto,users,x-systemd.automount,x-systemd.mount-timeout=10,timeo=14,x-systemd.idle-timeout=1min 0 0
read : 	0 	16.98 MiB 	1.06 GiB
write : 	0 	18.04 MiB 	1.18 GiB 

I never would have thought of that, because I'm not using any NFS, but I'm glad it got resolved.

/saved to the database in my mind for later recall :+1:

Well, I rejoiced too soon.

Manjaro could not mount the NAS pool. I should have paid attention to the yellow mark on the folder in Dolphin. So the copy and read were actually done on the mount folder on the local disk.
One of the fstab options wasn't accepted.
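
To find the offending option I'll try something like this (mount point from my fstab above):

$ sudo mount -v /mnt/nas-intel           # mount verbosely and watch for complaints about invalid options
$ journalctl -b | grep -i 'nfs\|mount'   # check the journal for mount/automount errors
$ findmnt /mnt/nas-intel                 # confirm whether an NFS filesystem is actually mounted there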

Here we go again…