Nemo and slow transfer rate issue

Hi, two months ago I switched from Linux Mint 19.3 to Manjaro (linux59) with the Cinnamon DE, and I've realized that on my gigabit LAN the file transfer rate between my main client PC and my home Ubuntu server, when uploading a file from local storage to a Samba shared folder with the Nemo file manager, is 30-35 MB/s lower than it was under Linux Mint!

So I've done some tests with Manjaro live images. With Manjaro Cinnamon linux59 (kernel 5.9.11-3-MANJARO, Nemo 4.8.0, samba 4.13.2-1, gvfs-smb 1.46.1-1), the Nemo transfer rate when copying one or more files is about 70-75 MB/s. With Manjaro Cinnamon linux510 (kernel 5.10.23-1-MANJARO, Nemo 4.8.6, samba 4.14.0-1, gvfs-smb 1.46.2-1), it is no more than 40 MB/s, a loss of about 30 MB/s.

The output of lspci -v is:

00:19.0 Ethernet controller: Intel Corporation Ethernet Connection I217-V
DeviceName: Onboard LAN
Subsystem: Gigabyte Technology Co., Ltd Device e000
Flags: bus master, fast devsel, latency 0, IRQ 30
Memory at f7800000 (32-bit, non-prefetchable) [size=128K]
Memory at f783a000 (32-bit, non-prefetchable) [size=4K]
I/O ports at f080 [size=32]
Capabilities: [c8] Power Management version 2
Capabilities: [d0] MSI: Enable+ Count=1/1 Maskable- 64bit+
Capabilities: [e0] PCI Advanced Features
Kernel driver in use: e1000e
Kernel modules: e1000e

Please, could someone help me get to the bottom of this by suggesting further commands to run? I'm a Manjaro Linux newbie. Thanks!

UPDATE: I decided to do a fresh install of Manjaro (linux59) on an external USB disk and then updated the system with pacman -Syu. The result:

kernel: 5.9.16-1
Cinnamon: 4.8.6
samba: 4.14.2
gvfs-smb: 1.48.0-2
Nemo: 4.8.6
transfer rate: 30 MB/s!

That's 40 MB/s less than the initial live-system boot!

Can you test the speeds using rsync from the command line after you mount the SMB share?

Make sure the file is large enough to give a representative average speed.
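If the share isn't already mounted through Nemo, you can mount it from the command line with gio, the gvfs front end (the server and share names below are just placeholders):

gio mount smb://yourserver/yourshare

The mount then shows up under /run/user/&lt;your-uid&gt;/gvfs/.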

First flush any write cache to the disks:

sync

Next, create a very large 2GB file with lots of random data. (Do not run this “dd” command with sudo, and double-check the command to be on the safe side.)

dd if=/dev/urandom bs=1M count=2048 of=myVeryLargeFile.ext status=progress

Flush the cache again to be sure:

sync

Then finally run rsync, and “time” it:

time rsync -v --progress myVeryLargeFile.ext /run/media/username/gvfs/smb-share/

Take note of the trailing slash ( / ) for the “destination”. (It’s very important.)

The actual path to your SMB share will be different.

How long did it “really” take according to “time”?

UPDATE: In my testing I got this result:
real 0m19.313s

The file size is 2048 MB, so “2048 divided by 19.3” gives about 106 MB/s.

Hi winnie, thanks for your reply!
I've followed your instructions; this is the result:

myVeryLargeFile.ext
2,147,483,648 100% 28.02MB/s 0:01:13 (xfr#1, to-chk=0/1)

sent 2,148,008,028 bytes received 35 bytes 28,832,322.99 bytes/sec
total size is 2,147,483,648 speedup is 1.00

real 1m13.485s
user 0m3.219s
sys 0m7.984s

As you can see, the result in my case is very, very poor: 32 MB/s!

I've repeated the same test on another PC running Arch Linux; the result is:

myVeryLargeFile.ext
2,147,483,648 100% 36.19MB/s 0:00:56 (xfr#1, to-chk=0/1)

sent 2,148,008,028 bytes received 35 bytes 38,017,841.82 bytes/sec
total size is 2,147,483,648 speedup is 1.00

real 0m56.764s
user 0m1.407s
sys 0m2.752s

just a little better, but still poor… 36 MB/s!

In your opinion, where could the bottleneck be?

It’s not Nemo-related then.

I forgot to ask, did you do the above step on kernel 5.10 or 5.9?

Can you repeat the test on a different kernel and see if they are much different in speed?

You can re-use the same 2GB file, but make sure to invoke “sync” before transferring it via rsync.

Basically just delete the file from the SMB share and repeat the last two steps:

sync
time rsync -v --progress myVeryLargeFile.ext /run/media/username/gvfs/smb-share/

Later on, you can remove this large 2GB file when you're done testing. 🙂
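For reference, cleanup is just an rm on each side; the gvfs path below is the same placeholder as above and will differ on your machine:

rm myVeryLargeFile.ext
rm /run/media/username/gvfs/smb-share/myVeryLargeFile.ext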

It's more like 28 MB/s, since 2048 / 73 (1 min 13 s) ≈ 28.

If you're getting around 28 to 32 MB/s from different computers with the large-file rsync test, it might be related to how Samba is configured on your Ubuntu server, or to an option that is affecting each computer individually by coincidence.

It's hard to trust the readings given by file managers such as Nemo, Dolphin, Thunar, Windows Explorer, etc., especially when caching is involved.

It’s a bit out of the scope of this forum, but what version of Ubuntu Server / smbd is running on the server?
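If you're not sure, these two commands on the server should tell you (standard Ubuntu and Samba tools):

lsb_release -d
smbd -V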

The first test, which gave me 32 MB/s, was done on Manjaro with kernel 5.9; the second on an Arch Linux machine with kernel 5.10 LTS. My Ubuntu server runs 18.04.5 LTS (Bionic Beaver), and its Samba version is 4.7.6-Ubuntu.

I want to say one thing: I'm surprised by the kindness I've met on this forum, the very first time I've posted a problem here, with people running tests on other distros and accepting that my server runs Ubuntu (not Manjaro!), a totally different kind of distribution. I can assure you that on other forums, naming no names, I have not found the same kindness and attention in listening to a user; I even had a post of mine closed by a forum admin despite all my correctness and calm in presenting my problem and respecting the rules. For this I can only praise the work of the Manjaro community. Thank you!


For kicks, can you install the 5.4 LTS kernel via the linux-lts meta-package?
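For reference, either of these should do it (mhwd-kernel is Manjaro's kernel management tool, and linux54 is the 5.4 series package):

sudo pacman -S linux-lts

or

sudo mhwd-kernel -i linux54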

Then, from the GRUB menu, select Advanced Options and boot Manjaro with the 5.4 kernel.

Run the test again.

This is drifting out of the scope of slow transfers with Nemo, but maybe it can lead you to a solution on the Ubuntu server or to some offending smb/smbd setting.

Do you mean on the fresh linux59 Manjaro install on the external USB disk? Yes, I'll get to work right away…

Either one, but it's more important for your main Manjaro system.

Installing the linux-lts meta-package won't bog down or clutter your system. 🙂 I keep it installed on mine at all times, on the off chance I might need to boot into an older LTS kernel.

Kernel 5.4 LTS installed; I reran your test (sync etc.), and the result:

[giagio@dagonet Downloads]$ time rsync -v --progress myVeryLargeFile.ext /run/user/1000/gvfs/smb-share:server=server,share=public/
myVeryLargeFile.ext
2,147,483,648 100% 18.63MB/s 0:01:49 (xfr#1, to-chk=0/1)

sent 2,148,008,028 bytes received 35 bytes 19,264,646.30 bytes/sec
total size is 2,147,483,648 speedup is 1.00

real 1m50.333s
user 0m3.198s
sys 0m10.134s

it's even worse than before!! I have no words! 🙁

At this point I should run the test between the Arch-based machines and bypass the Ubuntu machine for the moment, to pinpoint exactly where the problem is.

Wow. Yeah. ☹️

I have to run in a bit. In the meantime, look up how to check the smbd logs on your Ubuntu server. Depending on the level of verbosity, they might hint at some clues, such as a lower version of the SMB protocol being used for transfers, or perhaps a more peculiar error or warning.
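For example, assuming a default Samba setup on Ubuntu (paths may differ), the logs usually live under /var/log/samba/, and on recent Samba versions smbstatus lists the protocol version each connected client negotiated:

sudo smbstatus
sudo tail -n 50 /var/log/samba/log.smbd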

Can you give a brief summary of the hardware in use, from the client (your Manjaro system) to the server? Is this being sent from an internal HDD / SSD or from a USB drive to the server? What setup is the server running?

Sometimes it’s a matter of one or more hardware bottlenecks.
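On the Manjaro side, inxi (available in the repos) gives a good, privacy-filtered summary of the CPU, disks, and network hardware:

inxi -Fxz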

OK, I will investigate further and provide the specs of the client and server I'm using on my home network. Thank you so much, winnie.

I've done a test between Manjaro (kernel 5.9 and 5.4 LTS) and Arch Linux (kernel 5.10 LTS), bypassing any possible misconfiguration of the Ubuntu server and its smbd. I shared a folder on the Arch Linux PC using Nemo's folder sharing, then followed the same commands as above (sync etc.) and got, in the end, the same poor result:

[giagio@dagonet Downloads]$ time rsync -v --progress myVeryLargeFile.ext /run/user/1000/gvfs/smb-share:server=192.168.1.16,share=shared/
myVeryLargeFile.ext
2,147,483,648 100% 23.64MB/s 0:01:26 (xfr#1, to-chk=0/1)

sent 2,148,008,027 bytes received 35 bytes 24,548,663.57 bytes/sec
total size is 2,147,483,648 speedup is 1.00

real 1m27.073s
user 0m3.578s
sys 0m10.634s

Are these SSDs or HDDs? Those are definitely low speeds for home gigabit Ethernet.

Install the package iperf3 on both systems, Manjaro and Arch. Make sure it’s iperf3, and not iperf.

On one system, have iperf3 listen for a speed test:

iperf3 -s

On the other system, initiate a speed test, which will last 10 seconds. Replace the IP address with that of the system running iperf3 in “server” -s mode.

iperf3 -c 192.168.1.xxx

What speeds does it sustain? Over 100 MB/s? (Remember that MB/s multiplied by 8 gives you Mb/s, so 100 MB/s is approximately 800 Mb/s.)

If you’re getting low speeds with iperf3 between two local computers on the same network, then it might not be Samba related. If you’re getting fast speeds, then it could be storage hardware or Samba, or something else.

Where I ran your tests, all the disks are SSDs 😉

The result of iperf3 -c 192.168.1.16 is:

Connecting to host 192.168.1.16, port 5201
[ 5] local 192.168.1.18 port 54280 connected to 192.168.1.16 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 114 MBytes 953 Mbits/sec 0 1024 KBytes
[ 5] 1.00-2.00 sec 111 MBytes 933 Mbits/sec 0 1024 KBytes
[ 5] 2.00-3.00 sec 111 MBytes 933 Mbits/sec 0 1024 KBytes
[ 5] 3.00-4.00 sec 112 MBytes 943 Mbits/sec 0 1024 KBytes
[ 5] 4.00-5.00 sec 111 MBytes 933 Mbits/sec 0 1.05 MBytes
[ 5] 5.00-6.00 sec 112 MBytes 944 Mbits/sec 0 1.10 MBytes
[ 5] 6.00-7.00 sec 112 MBytes 944 Mbits/sec 0 1.10 MBytes
[ 5] 7.00-8.00 sec 112 MBytes 944 Mbits/sec 0 1.10 MBytes
[ 5] 8.00-9.00 sec 111 MBytes 933 Mbits/sec 0 1.10 MBytes
[ 5] 9.00-10.00 sec 112 MBytes 944 Mbits/sec 0 1.10 MBytes


[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 1.09 GBytes 940 Mbits/sec 0 sender
[ 5] 0.00-10.00 sec 1.09 GBytes 938 Mbits/sec receiver

iperf Done.

so it seems that my network is in good health…

It definitely does! Can you try it the other way around, switching the “-s” and “-c” roles of the two computers, just to rule something out?

Also what are the hardware specs of the systems involved? Do you have an external drive to run the rsync test to a temporary folder? If you’re getting 20 to 30 MB/s over a USB 3.0 connection, then this could be a buffer / hardware bottleneck.
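As an aside, iperf3 can also reverse the test direction without swapping roles, using its -R (reverse) flag on the client side:

iperf3 -c 192.168.1.xxx -R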

Same result as above! 🙂

Connecting to host 192.168.1.18, port 5201
[ 5] local 192.168.1.16 port 44446 connected to 192.168.1.18 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 113 MBytes 947 Mbits/sec 0 366 KBytes
[ 5] 1.00-2.00 sec 112 MBytes 938 Mbits/sec 0 366 KBytes
[ 5] 2.00-3.00 sec 111 MBytes 935 Mbits/sec 0 385 KBytes
[ 5] 3.00-4.00 sec 111 MBytes 935 Mbits/sec 0 547 KBytes
[ 5] 4.00-5.00 sec 112 MBytes 938 Mbits/sec 0 547 KBytes
[ 5] 5.00-6.00 sec 112 MBytes 938 Mbits/sec 0 547 KBytes
[ 5] 6.00-7.00 sec 111 MBytes 929 Mbits/sec 0 547 KBytes
[ 5] 7.00-8.00 sec 112 MBytes 939 Mbits/sec 0 573 KBytes
[ 5] 8.00-9.00 sec 111 MBytes 932 Mbits/sec 0 573 KBytes
[ 5] 9.00-10.00 sec 112 MBytes 940 Mbits/sec 0 604 KBytes


[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 1.09 GBytes 937 Mbits/sec 0 sender
[ 5] 0.00-10.00 sec 1.09 GBytes 934 Mbits/sec receiver

iperf Done.

I edited my earlier post too late; I had already asked this above.

Otherwise, most hints point to Samba.
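If it is Samba, one starting point (only a suggestion based on the protocol hint above, not a definitive fix) would be to rule out an old protocol version on the Ubuntu server: check what smbstatus reports during a transfer, and if clients are negotiating SMB1/NT1, add a minimum protocol to the [global] section of /etc/samba/smb.conf:

server min protocol = SMB2

Then validate the config and restart the service:

sudo testparm
sudo systemctl restart smbd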