Selecting a mounted NAS on Dolphin causing high network activity

When I select a mounted NAS drive in Dolphin, it starts using my network constantly. I suspect it's doing some kind of indexing, or trying to calculate all the file sizes and how many files there are, but it just does not stop until I close Dolphin. Even if I change back to my home directory or any other directory, it still continues to use the network.

If I close Dolphin while on my NAS, then the next time I open Dolphin it will start on my NAS drive and the network activity will begin again, even if I switch straight out of the NAS. I end up having to change to my home directory, close Dolphin, and open it again just to stop the network activity.

How do I stop it from doing this?

This is its mount:

❱mount | grep TRUE
// on /mnt/nas/TRUENAS type cifs (rw,nosuid,nodev,noexec,relatime,vers=3.1.1,cache=strict,username=jackdinn,uid=1000,noforceuid,gid=1000,noforcegid,addr=,file_mode=0755,dir_mode=0755,iocharset=utf8,soft,nounix,mapposix,noperm,rsize=4194304,wsize=4194304,bsize=1048576,echo_interval=60,actimeo=1,closetimeo=5,user,_netdev)

Please try:

sudo balooctl status

while there’s high network activity, to see if the Baloo file indexer is trying to index the whole NAS.

❱sudo balooctl status
[sudo] password for greg:
Baloo Index could not be opened

It’s where Dolphin is trying to calculate the file sizes (and maybe other stuff). I have seen them come up one at a time, very slowly, on some of my other NASes, and when it’s finished the network activity stops. It’s just that on my TrueNAS it never stops. (Well, I suppose it would eventually finish, but it seems like it would take hours for TrueNAS, maybe because of ZFS and how the files are duplicated many, many times over.)


Hi @jackdinn,

It might be baloo, it might not. I don’t know.

However, you should be able to test this. Simply disable baloo and see if it helps:

balooctl disable

Just test a bit without restarting; if it is that, we can change some settings.
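If Baloo does turn out to be the culprit, a gentler alternative to disabling it entirely is to exclude the NAS mount from indexing. A sketch of what that could look like, assuming the /mnt/nas/TRUENAS mount point from your fstab (the `exclude folders[$e]` key is what Baloo writes in its config file; check your own `~/.config/baloofilerc` for the existing entries before editing):

```ini
# ~/.config/baloofilerc — keep Baloo away from the NAS mount
# (path below assumes your /mnt/nas/TRUENAS mount point)
[General]
exclude folders[$e]=/mnt/nas/TRUENAS/
```

After saving, restart Baloo (for example with `balooctl disable` followed by `balooctl enable`, or by logging out and back in) so it picks up the new exclusion.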

Oh, I was just about to post:

❱balooctl status
kf.i18n: KLocalizedString: Using an empty domain, fix the code. msgid: "Unknown" msgid_plural: "" msgctxt: ""
kf.i18n: KLocalizedString: Using an empty domain, fix the code. msgid: "Idle" msgid_plural: "" msgctxt: ""
Baloo File Indexer is running
Indexer state: Idle
Total files indexed: 14,143
Files waiting for content indexing: 0
Files failed to index: 6
Current size of index is 154.89 MiB

This is during the network activity; disabling it does not stop the network activity either, unfortunately :frowning:

Hmmmmmm…OK. Well, then let’s see if we can find out what is causing the traffic. Install nethogs. It’s in the community repository:

$ pamac search nethogs                                                                                                                                                                                                       
[Installed] 0.8.7-1                 community
A net top tool which displays traffic used per process instead of per IP or interface

So install it with:

pamac install nethogs

It has to be run in a terminal emulator with sudo. So open your terminal, and run it:

sudo nethogs

It should show you the processes using the network, and from that we should be able to deduce the reason for the traffic.

To exit nethogs, press q while it has focus.

Can you use the ‘top’ command to see which process is working hard at that moment?

I already have nethogs. It just shows traffic coming from my NAS to my node (computer).

(Sorry, it has to be a screenshot as I can’t copy from it.)


Nothing is really using anything, processor-wise.

The last-minute load average is 1.84…?

OK, according to me, this isn’t a lot, anyway. However, compared to mine, it is. So my next question: how do you access the NAS? Perhaps it would be better to have a systemd automount unit for it:

Other than that, :man_shrugging:

I’m not going back over the “how to mount a NAS” debacle again; it gave me a four-day headache last time I tried ^^
I’m pretty sure it’s the way Dolphin tries to “look” at every file and how it deals with my NAS, especially my ZFS one, which has every file duplicated hundreds if not thousands of times due to the snapshotting that ZFS is designed for.

I just need a way to tell Dolphin not to try to calculate the file sizes on specific mount paths, like /mnt for example.


Well, I suppose Dolphin is showing at the top here, but it’s hardly using a lot.


Sorry, I’m not sure what I you mean. I mount it with fstab and access it via Dolphin, SSH, FTP, SMB, etc.

cat /etc/fstab | grep TRUE
// /mnt/nas/TRUENAS cifs user,nofail,credentials=/home/greg/.smbcredentials-nas,iocharset=utf8,uid=1000,gid=1000,noperm,_netdev 0 0

OK, so that looks like Samba. You could unmount it and see if that fixes it? That way, you’d at least confirm whether it is, indeed, the NAS.

Another option is to turn on the per-folder details view in Dolphin and hide that column. Maybe that will work, maybe not. But there is only one way to find out.

Oh right, sorry, I misunderstood you. Yes, this mount that I am referring to in Dolphin is an SMB mount.

Which it would, because it only happens when I point Dolphin at that particular mount, so if I unmount it there would be nothing to look at. All other folders work fine.
Other SMB NAS mounts do take some time to populate the file size column, but my TrueNAS one is the only one that goes on and on forever.

Windows network protocols are very chatty, especially SMB1; it got better with SMB2.

I suggest you comment the mount in your fstab.

Then create a mount unit and use a complementary automount unit to activate the share when the folder is accessed. That way you can configure an idle timeout, after which the share will be unmounted.
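A rough sketch of such a mount/automount pair, assuming the /mnt/nas/TRUENAS mount point from the fstab in this thread (the `//nas-host/share` server path is a placeholder, since the real one is redacted above; note that systemd requires the unit file names to match the escaped mount path):

```ini
# /etc/systemd/system/mnt-nas-TRUENAS.mount
[Unit]
Description=TrueNAS SMB share

[Mount]
# Placeholder server/share — substitute your own
What=//nas-host/share
Where=/mnt/nas/TRUENAS
Type=cifs
Options=credentials=/home/greg/.smbcredentials-nas,iocharset=utf8,uid=1000,gid=1000,noperm,_netdev

# /etc/systemd/system/mnt-nas-TRUENAS.automount
[Unit]
Description=Automount TrueNAS SMB share

[Automount]
Where=/mnt/nas/TRUENAS
# Unmount after 60 s of inactivity
TimeoutIdleSec=60

[Install]
WantedBy=multi-user.target
```

You would then enable only the automount unit (`systemctl enable --now mnt-nas-TRUENAS.automount`), and the share gets mounted on first access and dropped again after the idle timeout.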

If the issue persists, it is likely a configuration on your server which keeps the connection alive, e.g. SMB1 aka NT1.

These mount options are known to be possible troublemakers.

Warning: Using uid and/or gid as mount options may cause I/O errors, it is recommended to set/check correct File permissions and attributes instead.
Samba - ArchWiki

Am I getting old? Where is this?

Then try hiding that column for those certain directories:

  1. Turn on per-directory settings:
    :hamburger: → Configure → Configure Dolphin…

On the General screen, under Behaviour, make sure Remember display style for each folder is selected.

Click OK to save and close the window.

  2. Next, open the directory, right-click somewhere on the column headings, and deselect the Size option.

Restart KDE, or reboot, and see if it helped.

If that didn’t help, sorry.


But I’m out of ideas then.


Yeah, sorry, my bad; I was looking for a setting called exactly “per-folder details view”. I already have all my folders set up individually, and I did try removing the size column, but it still did the same thing.
I was wondering if tinkering with a similar setting might help, but no matter what I changed here, it still does it. Thanks for your support though :slight_smile:

All interesting stuff. I shall give it a rest for now; it hurts my head just thinking about going back to “how to mount a NAS”. But I shall take a look later, thanks.


On your TrueNAS Core server, create a Tunable with:
Variable: vfs.zfs.arc.meta_min
Value: 4294967296
Type: sysctl
Comment: Allow wider metadata retention in ARC

The changes will apply upon rebooting the server, or you can immediately apply it with the following command on the server:

sysctl vfs.zfs.arc.meta_min=4294967296

The next time your ARC fills up with metadata, it should remain cached for significantly longer.

The first time you won’t see any difference. However, keep using your SMB shares, and over time you’ll notice it’s more responsive, with less time spent crawling your server’s drives.

You can even force metadata into the ARC by issuing a find command on the server, such as:

find /mnt/poolname > /dev/null

or:

du -hs /mnt/poolname

Use “sudo” if logged in via SSH as a regular user.

(The above two commands are not necessary. In fact, I’d skip them for now; you don’t want to waste this on iocage. Just create the Tunable and apply the sysctl value without having to reboot first.)

Use cache=loose in the mount parameters:

If you’re the only person accessing the SMB share(s), you can also safely use cache=loose rather than cache=strict in the mount options for better performance. (This makes a huge difference.) You won’t notice the effect until after some usage, as your client will have more data and metadata cached in local RAM.
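Applied to the fstab line earlier in this thread, that would look roughly like the following (a sketch; the server path is redacted above, so `//server/share` here is a placeholder for your own):

```
# /etc/fstab — same mount as before, but with cache=loose added
//server/share  /mnt/nas/TRUENAS  cifs  user,nofail,credentials=/home/greg/.smbcredentials-nas,iocharset=utf8,uid=1000,gid=1000,noperm,cache=loose,_netdev  0  0
```

After editing, remount the share (e.g. unmount it and mount it again) so the new cache option takes effect; `mount | grep TRUE` should then show cache=loose instead of cache=strict.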


You shouldn’t be sharing the root dataset via SMB. It’s bad practice and can run into permission issues. Plus, you’re forcing the server to waste cache on metadata that is almost pointless to the end user, such as everything tucked under iocage.

Instead, share a child dataset or specific dataset(s) with the files you need to access over SMB. The fact that you have the entirety of iocage exposed via SMB is only making your issue worse. If you really need to access everything under iocage via SMB (for some reason), then create a separate share for it, or just access it via SSH and/or the iocage command on the server.