Selecting a mounted NAS in Dolphin causes high network activity

Many thanks for the time you spent on the response :smile:

I understand how to do just about everything you posted (which amazes me ^^ Although I'm not sure what ARC is, I have Google).

This is a thrown-together NAS built from junk. It's my first time and it's in a bit of a muddle; however, I never really intended to use it this way in the long term. It was experimental. I am about to buy some WD Red Plus or IronWolf Pro drives to go in (I'm pondering a complete new build as well, but not sure). Anyhow, when I get the new drives all of this will be wiped and I will re-build/re-install. Obviously, I will keep your post in mind when building it.

For now I can try the first part of your post. Thank you.

This isn’t the case, whether for Dolphin or the command-line.

The hidden .zfs “folder” is just a convenient way to present the snapshots that exist on a ZFS dataset, and it can also be exposed to an SMB client. (This is a way to bridge a “block-based” filesystem with software and layers that are “file-based”.)

However, unless you explicitly navigate or target this special “folder”, no software will traverse it.

If you look through this special folder on the server itself, you’ll notice that the same file across different snapshots uses the same inode number.
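For example, a quick check on the server itself (just a sketch; the file name and snapshot name here are made up, so substitute your own):

ls -i /mnt/main/shares/jackdinn/somefile.txt                                 # inode of the "live" file
ls -i /mnt/main/shares/jackdinn/.zfs/snapshot/auto-2022-12-01/somefile.txt   # inode of the same file inside a snapshot

Both commands print the same inode number, because the snapshot shares the unchanged data with the live dataset rather than storing a separate copy.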


Your screenshot shows strange behavior. Even with hidden items visible, the special .zfs “folder” should not be visible.

It should; I purposefully made it visible. As I said, this is an experiment, a learning process. I wanted to see what was going on inside .zfs, and I find it easiest to use a GUI file explorer to browse files.

How?

Whether using a graphical file manager (Windows Explorer, Dolphin, Nemo, etc.) or the terminal, this “folder” should not be visible nor listed, even if you enable “view hidden items” or use the -a flag in an ls command.


Even when doing tests, you don’t want to stray too far from defaults or best practices; otherwise you’ll go off on tangents and won’t get a fair idea of what to expect when you use it in your daily workflow.


To browse within the special .zfs folder, you are expected (by design) to manually type the path in the file browser’s address bar or in the terminal.

It’s not meant to be casually traversed or entered by normal means, even if you issue recursive commands.

This is why, for example, if you issue a find command on the dataset or share, it will by default look in the entire dataset recursively, yet it will not traverse into the special .zfs “folder”. However, if you issue the same exact find command, but specifically target .zfs, then it will indeed crawl through all your snapshots.
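For instance (a sketch, with a hypothetical dataset path and file pattern):

find /mnt/main/shares/jackdinn -name '*.jpg'                  # recursive, but never descends into .zfs
find /mnt/main/shares/jackdinn/.zfs/snapshot -name '*.jpg'    # explicitly targeted, so it crawls every snapshot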

This is all by design from upstream ZFS.

The same is also true on the server itself. It’s not limited to SMB or NFS shares.
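(For reference, the ZFS property behind this behavior is snapdir, which defaults to hidden. A quick sketch on the server, assuming a dataset named main/shares:

zfs get snapdir main/shares            # "hidden" by default
ls -a /mnt/main/shares                 # with the default, .zfs is not listed, even with -a
ls /mnt/main/shares/.zfs/snapshot      # but typing the path explicitly still works

zfs set snapdir=visible main/shares    # presumably what was done for the experiment
zfs set snapdir=hidden main/shares     # restores the default)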

Well, I have made them all invisible now.

I have also done all your suggestions from above. I think it is starting to populate some of the deeper folders and it seems to be working its way up the hierarchy. I'm not 100% sure yet, but I shall just leave it open and indexing through the NAS for the evening and see what happens.


OK, so unfortunately it ran all night and was actually finished this morning. However, as soon as I closed Dolphin and opened it again, it started indexing all over again.

I have tried the following without any problems.

  • PCManFM
  • Krusader
  • mc (Midnight Commander)

None of these try to “index” the size of the folders on my TrueNAS. None of them go on and on grinding away at my NAS.

It looks to me like a problem with the way Dolphin deals with the detailed view, trying to index info on all the files and folders before listing them properly.

Just tried Nemo; it works even better than everything else. It does populate the folders with the number of files, and it does it just about instantly.

Pity I don’t like Nemo. I use KDE for many reasons, and the file browser was/is one of them. I just prefer KDE.

Heh. I get it. Then use it only for your NAS, and only as a workaround 'til this is figured out.

Something odd is going on.

I have KDE on two different machines, one is still on Plasma LTS 5.24.7, and the other is using Plasma 5.26.3. They both behave the same.

I have a share from my TrueNAS that has (I kid you not) a total of nearly 980,000 files and folders. When I open the share in Dolphin, even with the details view (just like in your screenshots), I get a blip of maybe a few hundred KiB/s on my network for only a couple seconds, and then it’s done. Quiet. Nothing. Browsing the share, listing its contents, doing scans and searches recursively happens extremely fast (and almost instantly.) In fact, it’s on par with using a local drive directly attached to the computer.

This behavior holds true even if I enable “Size of contents, up to 20 levels deep” in Dolphin’s view preferences.


I think I know what might be going on, and it’s that Dolphin is simply “unearthing” this underlying behavior in your case.

But first, just to review and confirm:

:white_check_mark: You’re using the Tunable to allow for a much more sensible ARC that retains your ZFS metadata? It is not only applied, but your ARC has also filled up with a good amount of metadata? (I explain how to check this at the end of the post.)

:white_check_mark: You’re leaving the special snapshot “folder” as invisible?

:white_check_mark: You’re using cache=loose on your mount parameters for the client?

:warning: Please stop sharing the root dataset. I beg of you. It shouldn’t be part of these tests anyway.

:question: You notice the same behavior in Dolphin, even without the “Details view”?


On your TrueNAS server, assuming you haven’t rebooted it (which will empty the ARC), what do these commands yield:

arc_summary | grep "Metadata cache"             # metadata cache limit and current size
sysctl kstat.zfs.misc.arcstats.metadata_size    # bytes of metadata currently held in the ARC
sysctl vfs.zfs.arc.meta_min                     # the Tunable: minimum ARC space reserved for metadata

How much physical RAM does your TrueNAS server have?


Before I jump to my assumption, the above information is important, and I’ll explain what you’re likely witnessing.


EDIT:
To elaborate further, what you’re seeing is likely an inherent design flaw in KDE’s Dolphin (a FOURTEEN-YEAR-OLD BUG :face_with_symbols_over_mouth:) which has still not been fixed upstream. It’s more noticeable for you due to your client-server setup (and likely your NAS server’s configuration and/or limitations).

In other words, it’s a design flaw of Dolphin that is obvious to you due to extraneous factors.

The reason I’m not experiencing it (even with an SMB share of 980,000 files and folders) is because my server is reading directly from physical RAM without accessing the drives. The only time it needs to read from the drives is when I actually open up a file, such as a video or picture. Otherwise browsing, traversing, listing, scraping, searching, etc, is all done in RAM, which is blazing fast.
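A rough way to gauge this on the server (FreeBSD/TrueNAS Core sysctl names; they differ on Linux/SCALE) is to compare the ARC hit and miss counters:

sysctl kstat.zfs.misc.arcstats.hits      # requests answered straight from RAM
sysctl kstat.zfs.misc.arcstats.misses    # requests that had to go to the drives

If the hits dwarf the misses while you’re browsing the share, the drives are barely being touched.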


Yep, did that yesterday and rebooted the NAS directly afterwards. It’s been on since then.

Yep

Yep

//192.168.2.102/main on /mnt/nas/TRUENAS type cifs (rw,nosuid,nodev,noexec,relatime,vers=3.1.1,cache=loose,username=jackdinn,uid=1000,noforceuid,gid=1000,noforcegid,addr=192.168.2.102,file_mode=0755,dir_mode=0755,iocharset=utf8,soft,nounix,mapposix,noperm,rsize=4194304,wsize=4194304,bsize=1048576,echo_interval=60,actimeo=1,closetimeo=5,user,_netdev)

This one I really don’t want to mess with as things are, as I’d have to rearrange my clients’ paths, syncs, backups, cloud testing, and DNS filter. Various things that I don’t really want to mess with until I rebuild this thing with new drives and a new install.

8GB

Ekkk, not a lot. It was all I could find for my experiment, but I didn’t even expect that I’d get the thing working at all LOL.
So I suspect that this is going to be the cause of the problem!


❱ssh jackdinn@192.168.2.102
Last login: Thu Dec  1 22:45:08 2022 from 192.168.2.106
FreeBSD 13.1-RELEASE n245376-eba770b30ff TRUENAS

TrueNAS (c) 2009-2022, iXsystems, Inc.
All rights reserved.
TrueNAS code is released under the modified BSD license with some
files copyrighted by (c) iXsystems, Inc.

For more information, documentation, help or support, go here:
http://truenas.com
Welcome to TrueNAS

truenas:/mnt/main/shares/jackdinn
$ arc_summary | grep "Metadata cache"
Metadata cache size (hard limit):              75.0 %    5.0 GiB
Metadata cache size (current):                 16.9 %  868.1 MiB

truenas:/mnt/main/shares/jackdinn
$ sysctl kstat.zfs.misc.arcstats.metadata_size
kstat.zfs.misc.arcstats.metadata_size: 503561216

truenas:/mnt/main/shares/jackdinn
$ sysctl vfs.zfs.arc.meta_min
vfs.zfs.arc.meta_min: 4294967296

This is likely the reason for it, and the minimum physical RAM for a TrueNAS server is 16GB (some will argue it’s 32GB now).


So here’s what’s probably happening in your situation, which is a combination of factors:

You don’t have enough physical RAM to comfortably house userdata and metadata in the ARC, alongside other services and non-ZFS cache. Metadata (which is used for crawls) is likely being shoved out of the ARC, given the limited physical RAM your TrueNAS server has.

Where Dolphin comes into play is its poor design, with an outstanding bug that was reported 14 years ago and apparently is still an issue today. :pensive:

When you use the “Details” view in Dolphin, it crawls into all the subfolders (in the background) to inspect and analyze how many files are within, and whether or not a directory is empty. This demands a lot of metadata from the server.

Ideally, this metadata will already be in the server’s ARC and be dealt with rapidly. (Which might explain why it’s snappy and responsive for me.)

In your case, it’s likely that much of it needs to be pulled from the drives, which may account for the lag and delay you’re experiencing.
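To make that concrete, something like the following (just a rough stand-in for what the Details view triggers, not Dolphin’s actual code) run against the mounted share produces a similar burst of metadata requests, and watching the pool at the same time shows whether they’re answered from the ARC or from the disks:

# On the client: count the contents of every subfolder over the CIFS mount
find /mnt/nas/TRUENAS -type d -exec sh -c 'ls -A "$1" | wc -l' _ {} \; > /dev/null

# On the server, in another terminal: if the metadata is cached, read ops stay
# near zero; if it has to come from the drives, you'll see them grinding away
zpool iostat main 2

(The mount path and pool name are taken from your mount output and shell prompt; adjust if yours differ.)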


A few possible workarounds:

  • Increase the physical RAM of your TrueNAS server (I’d recommend 32GB), while still using the same Tunable to prevent aggressive metadata purging from the ARC (see the first sketch after this list).

  • Use the “Icons” view in Dolphin. Not “Compact”. Not “Details”.

  • Don’t use Dolphin (which would be a bummer).

  • Don’t share the root dataset over SMB, since Dolphin is automatically crawling into the iocage dataset, which is wasteful (see the second sketch after this list).
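On the first point, the knob in question is the one already visible in your sysctl output, vfs.zfs.arc.meta_min. On TrueNAS Core it’s normally applied through the web UI as a sysctl-type Tunable rather than typed by hand, but the effect is roughly equivalent to (value in bytes, matching the 4 GiB shown above):

sysctl vfs.zfs.arc.meta_min=4294967296    # reserve 4 GiB of ARC for metadata

On the last point, a minimal sketch (the new dataset name is just an example): pick or create a child dataset and point the SMB share at it instead of at the pool’s root, so Dolphin never wanders into iocage or other system datasets:

zfs list -r -o name,mountpoint main       # see which child datasets already exist
# then, in the TrueNAS UI, repoint the SMB share from /mnt/main to one of
# those (e.g. /mnt/main/shares), or to a dedicated dataset created with:
zfs create main/media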


Apparently, the KDE developers are aware of this (it’s a “KIO” issue, not Dolphin itself), and it will not be addressed (if at all) until KDE 6.


Brilliant work @winnie,

It all makes sense. I have been thinking about a whole new build (well, not new, but a better foundation than the medieval mobo I have been using, which BTW cannot even accept more than 8GB at all, believe that!), maybe even a brand-new mobo and all the crap to go with it. I just can’t make up my mind.

At least I know what is going on now, thanks to your work and good memory for these kinds of things :wink:

Thank you.

Sadly, the core issue still lies at the feet of KDE.

It’s not limited to network shares, or even low-end servers. It’s just more noticeable with such setups.

The same issue exists with local storage, but it’s not as noticeable since it’s directly on the actual machine itself. Plus, most people have SATA SSDs and NVMe drives these days. Newly purchased laptops are nearly all pure SSD now.

In fact, one of the KDE developers mentioned that because most people have solid-state drives, this “issue” is not as high of a priority, and can wait until KDE 6, since trying to address the underlying issue will require a lot of rewriting of code that can break other things.

Technically, I also “experience” it, but it’s not as noticeable as it is for your setup, which is why I didn’t even realize it was happening to me too. :smile:

