What I can’t understand is why the tests conducted with Manjaro and gvfs-smb 1.46.1-1 gave decent transfer rates (70-75 MB/s), while after the update to version 1.48.0-2 everything got worse (40 MB/s). These are inexplicable things, like the mysteries of the universe!
Maybe in the next 4 years you will have it.
I think they don’t care; they think half the maximum throughput is enough.
After all, I think most users don’t care either, and advanced users don’t use gvfs much anyway; they prefer to do their mounts manually with fstab or with a systemd unit.
I personally no longer use the SMB protocol because I don’t have any Windows machines at home anymore.
@yannssolo, I understand… from what you say there is no hope of a fix! Anyway, I will try the fstab method.
The thing I don’t like about fstab is writing plain-text passwords in a text file. I don’t keep any state secrets on my server, and anyway it isn’t exposed to the external network, but still it’s not really to my taste.
Anyway, if I remember correctly there is some way to avoid putting the password in clear text; I did it some time ago, so I will have to search the net for it.
Are they planning to remove them? They serve a wonderful role, in my opinion. I have the latest copies of the designated LTS and STABLE kernels, and updates have been a breeze for me.
That’s awesome to see! So now, as @yannssolo mentioned, you have some options. Using systemd mount units, you can configure “auto” mounts that are triggered by accessing the share/folder, set idle timeouts, among other things.
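As a rough sketch of what that looks like (the server name, share, mount point and uid below are placeholders, not taken from this thread), a pair of units like these mounts the share on first access and unmounts it again after five idle minutes:

    # /etc/systemd/system/mnt-share.mount  (the file name has to match the Where= path)
    [Unit]
    Description=Example CIFS share

    [Mount]
    What=//mynas/share
    Where=/mnt/share
    Type=cifs
    Options=username=myuser,password=mypass,uid=1000

    # /etc/systemd/system/mnt-share.automount
    [Unit]
    Description=Automount for the example CIFS share

    [Automount]
    Where=/mnt/share
    TimeoutIdleSec=300

    [Install]
    WantedBy=multi-user.target

You then enable only the .automount unit (sudo systemctl enable --now mnt-share.automount); the .mount unit gets started automatically the first time something touches /mnt/share.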
The “old-school” way to do this was either directly in the fstab (not always preferable, since the network share might not always be available) or with the autofs tools.
However, @yannssolo’s proposition looks more appealing, as systemd is newer yet mature and stable. I have never tried it before, but I’m interested in giving it a shot, even just for the sake of learning. I might even end up using it.
@winnie fstab allows you to automount too, or to wait for the remote server to be available by delaying the mount a little bit… see the fstab section of the systemd.mount man page.
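For example, a single line along these lines in /etc/fstab (server, share, user and password are placeholders) gives you the same on-demand mount, plus a timeout so boot doesn’t hang waiting for the server:

    # /etc/fstab  (all names and the password are just examples)
    //mynas/share  /mnt/share  cifs  username=myuser,password=mypass,_netdev,x-systemd.automount,x-systemd.mount-timeout=30,uid=1000  0  0

The x-systemd.automount and x-systemd.mount-timeout options are the ones described in that fstab section of systemd.mount(5).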
Then, the smart way is to store the username and password in an external file and set its permissions to 600 (rw-------). Instead of having the username and password in your fstab mount, you have something like this:
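(A minimal sketch; the file location and the values in it are just examples.)

    # /etc/samba/credentials  -- chmod 600 so only its owner can read it
    username=myuser
    password=mypass
    domain=WORKGROUP

    # and the fstab options then point at the file instead of containing the password:
    //mynas/share  /mnt/share  cifs  credentials=/etc/samba/credentials,_netdev,x-systemd.automount,uid=1000  0  0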
The advantage of doing it in a systemd unit is that you can view the logs easily, and they are more explicit than with fstab.
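For example, with the placeholder unit name from the sketch above:

    # current state of the mount and its recent log messages
    systemctl status mnt-share.mount
    journalctl -b -u mnt-share.mount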
In my opinion, not having to edit your fstab is better, because it’s safer.
Look at the “time” output, specifically the “real” time. The test took about 20 seconds, which translates to around 103 MB/s (a 2048 MB file divided by roughly 20 seconds).
That’s why I advised @wuwei to use “time” for the rsync tests. Sometimes rsync spits out crazy “speeds” near the tail-end of a transfer.
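Something like this (the test file and destination are placeholders) gives a wall-clock figure you can actually trust:

    # copy a large test file and time the whole transfer
    time rsync --progress /tmp/testfile.bin /mnt/share/
    # throughput = file size in MB divided by the "real" seconds reported by time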
After reading the GitHub issue at the link that @yannssolo suggested (large file throughput via gvfs-smb is over 2x slower than smbclient or fstab mount.cifs), I have to make a necessary correction: the issue is not related to gvfs-smb but to gvfs itself. Indeed, it seems that gvfs lost an important parameter, -o big_writes:
the difference is the -o big_writes option, which is present in that case.
It would be interesting to know if it’s possible to pass the -o big_writes parameter to gvfsd-fuse in Manjaro.
NAME
    gvfsd-fuse - Fuse daemon for gvfs

OPTIONS
    -o OPTION
        Set a fuse-specific option. See the fuse documentation for a list of these.
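If anyone wants to experiment, one way to try it would be to restart gvfsd-fuse by hand with the extra option. The daemon’s install path differs between distros, so the path below is only an assumption; check the actual command line with pgrep first:

    # see how gvfsd-fuse is currently invoked and where its mount point is
    pgrep -a gvfsd-fuse
    # stop it and relaunch it manually with the extra fuse option
    # (the binary path is an assumption; use the one pgrep reported)
    killall gvfsd-fuse
    /usr/lib/gvfs/gvfsd-fuse /run/user/$(id -u)/gvfs -o big_writes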
UPDATE: big_writes is no longer listed in the fuse man page, so presumably that parameter is no longer used.
@wuwei, at this point I believe you’ll get the best performance using systemd mount units, and protecting the username/password for the network share by doing what @yannssolo suggests: making the credentials file read-write (or even read-only) for your user account only.
You can always play around with putting shortcuts to these shared network folders on your Nemo sidepane or Desktop or wherever.
Thanks man. After hours of changing Samba config options and testing around with different benchmarks for network and disk speeds, I finally have an answer.
Copying a file to a normal Samba share in Nautilus gave me speeds of around 25 MB/s.
Mounting it with cifs, I now get around 110 MB/s, which is pretty much the maximum for my network.
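For reference, the kind of kernel cifs mount meant here looks roughly like this (server, share, mount point and credentials file are placeholders, not the actual ones from my setup):

    # one-off manual mount using the kernel CIFS client instead of gvfs
    sudo mount -t cifs //mynas/share /mnt/share -o credentials=/etc/samba/credentials,uid=1000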
I would have never found out…
The problem is that it’s hard to find this thread or anything related to this bug when you search for “samba share too slow” or something similar, because that search returns hundreds of results talking about Samba socket settings and such.