/home on a separate HDD. Nautilus/file picker slooooow to open

I’m one of those people who end up re-installing the OS at least once or twice a year, either because of a wonky update or because I’ve hosed something myself in my own tinkering.

To facilitate that process, I’ve decided to run with my /home directory on a separate 3TB HDD that I was using for storage anyway, and just hard-link things like my Documents/Videos/Pictures folders to those locations.

So now my setup is:
/ on an internal M.2 1TB SSD
/home on an internal 3.5" 3TB HDD (SATA, 7200 RPM, 6 Gb/s)

My problem is that Nautilus and the GNOME file picker are agonizingly slow to open. For example, sitting at an empty desktop fresh from a boot-up with nothing running, I click on the Nautilus icon on the bar, and it takes a solid 10-20 seconds for the window to open. Using the file picker (e.g., in Firefox, GIMP or LibreOffice) is 10x worse. Like, I can go to the kitchen, get a glass of cold water, come back, sit down, and still have to wait another 10-15 seconds for it to open.

Several questions arise from this:

  1. Is this normal when /home is on a separate partition/drive?
  2. What’s causing this?
  3. How do I make it stop?
    …and so on.
  1. That is definitely not normal.
  2. The old profile does not match the new profile (especially the hidden “.xyxzx” folders).
    Try making a new profile, or even a new installation of everything (including / and /home),
    and copy your data (documents, videos, downloads, and so on) over.

Before doing that, search the forum for similar problems, whether with a separate /home or not…


Hi @yeahgreen,

Nope. Never had it.

Some kind of misconfiguration.

Correct the misconfiguration.

As an aside:

This:

…shouldn’t be possible, AFAIK. Hard links can’t be used across filesystems, so the folders would all have to be on the same partition if they are to be hard-linked. Otherwise it’s soft links (symlinks).

I also have my Documents, Downloads, Pictures and Videos separate, on a spinning disc and not my SSD, and it’s instant.

But don’t use links; I don’t. Instead, I use, and recommend, bind mounts, and for those I recommend systemd automount units. You can find more here:
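As a rough illustration (the user name and paths here are hypothetical, not from this thread), the pair of units for one such bind mount could look like this:

# /etc/systemd/system/home-user-Documents.mount
# Note: the unit file name must match the mount point
# (/home/user/Documents -> home-user-Documents.mount).
[Unit]
Description=Bind-mount Documents from the HDD

[Mount]
What=/mnt/hdd/Documents
Where=/home/user/Documents
Type=none
Options=bind

# /etc/systemd/system/home-user-Documents.automount
[Unit]
Description=Automount for the Documents bind mount

[Automount]
Where=/home/user/Documents

[Install]
WantedBy=multi-user.target

Enable only the .automount unit (systemctl enable --now home-user-Documents.automount); the mount itself is then triggered on first access.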

Using bind mounts also makes it very easy to back up and restore. And on that subject:

That was me as well. Well, Manjaro and Timeshift changed that. It’s now been more than two or three years since I did a reinstall. I’d recommend you start making, maintaining and using regular backups. Timeshift is a lifesaver for that.
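Timeshift also has a command-line interface, so making snapshots a habit is easy; for instance (the comment text is just an example):

sudo timeshift --create --comments "before tinkering"   # take an on-demand snapshot
sudo timeshift --list                                   # show existing snapshots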


No, it’s not. Or at least, not if you’re using a Linux-native filesystem like ext4, btrfs, xfs, jfs, f2fs or yaffs2.

If on the other hand you’re one of those people who for whatever crazy reason chose ntfs for your /home, then all bets are off, because ntfs is not POSIX-compatible — or at least, not without some serious tweaking, and even then still… :roll_eyes:

It could be the bug with xdg-desktop-portal-gnome, although in theory it should not affect GNOME proper — note: if you’re running either Cinnamon or MATE, then those are not GNOME.

You could try this: :arrow_down:

sudo pacman -Rdd xdg-desktop-portal-gnome && sudo pacman -S xdg-desktop-portal-gtk

…Then you’re a masochist. You don’t even need to go to a shrink to confirm it. :stuck_out_tongue_winking_eye:


Good Lord, no. Everything is ext4. Sorry. I should have mentioned that in the OP.

OK, I guess I got my terminology mixed up/backwards.
The heart of it is: yes, previously the entire OS was installed on SSD /dev/nvmeABCD (or whatever). I then deleted the existing ~/Documents, ~/Pictures and ~/Videos and used ln -s to link to my actual folders on a 3TB spinning HDD. That worked seamlessly and flawlessly. It was fast, snappy, and had no issues.
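For the record, it was essentially this (the /mnt/hdd mount point is a placeholder for wherever the HDD was actually mounted):

for d in Documents Pictures Videos; do
    rmdir ~/"$d"                  # remove the empty stock folder
    ln -s /mnt/hdd/"$d" ~/"$d"    # symlink to the real folder on the HDD
done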

Gave it a shot. Didn’t seem to have any effect.

Maybe, rather than chasing geese, it’s easier at this point to just move /home back onto the SSD and go back to my previous arrangement. How hard is that? I’ve heard of it being done, but I’ve never seen a write-up on it or anything.

Well, if that is your choice, then that is your choice. See below… :arrow_down:

The best way would be to boot up from the live USB and issue the following commands… :arrow_down:

sudo su -
mkdir /mnt/rootfs
mkdir /mnt/home
mount -t ext4 /dev/your-root-partition-here /mnt/rootfs
mount -t ext4 /dev/your-home-partition-here /mnt/home
mv -v /mnt/home/* /mnt/rootfs/home/
nano /mnt/rootfs/etc/fstab   # comment out the line for your /home by putting a "#" in front of it
                             # save the file with Ctrl+O, Enter, and exit nano with Ctrl+X
sync

Be sure to let everything finish. Once you’re done, safely reboot the system.
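Once rebooted, you can verify that /home now lives on the root filesystem; for example:

findmnt /home    # should print nothing, i.e. /home is no longer a separate mount
df -h /home      # should report the root partition’s device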

This is what journalctl -f spits out when I try to open a Nautilus instance.

Sep 01 16:09:12 one-desktop gnome-shell[62376]: Can't update stage views actor <unnamed>[<MetaWindowActorX11>:0x56507c350750] is on because it needs an allocation.
Sep 01 16:09:12 one-desktop gnome-shell[62376]: Can't update stage views actor <unnamed>[<MetaSurfaceActorX11>:0x56507c3793d0] is on because it needs an allocation.
Sep 01 16:09:15 one-desktop dbus-daemon[62238]: [session uid=1000 pid=62238] Activating service name='org.gnome.Nautilus' requested by ':1.21' (uid=1000 pid=62376 comm="/usr/bin/gnome-shell")
Sep 01 16:09:15 one-desktop nautilus[706083]: Connecting to org.freedesktop.Tracker3.Miner.Files
Sep 01 16:09:15 one-desktop dbus-daemon[62238]: [session uid=1000 pid=62238] Successfully activated service 'org.gnome.Nautilus'
Sep 01 16:09:15 one-desktop dbus-daemon[62238]: [session uid=1000 pid=62238] Activating via systemd: service name='org.gtk.vfs.GoaVolumeMonitor' unit='gvfs-goa-volume-monitor.service' requested by ':1.1484' (uid=1000 pid=706083 comm="/usr/bin/nautilus --gapplication-service")
Sep 01 16:09:15 one-desktop systemd[62218]: Starting Virtual filesystem service - GNOME Online Accounts monitor...
Sep 01 16:09:15 one-desktop kernel: gvfs-goa-volume[706111]: segfault at 7f7f748be768 ip 00007f7f72c82e1d sp 00007fff00451210 error 4 in libgoa-1.0.so.0.0.0[7f7f72c60000+26000] likely on CPU 17 (core 6, socket 0)
Sep 01 16:09:15 one-desktop kernel: Code: 00 48 8b 05 d5 a5 01 00 48 83 c4 08 5b 5d c3 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 00 55 48 8d 3d e5 60 00 00 53 48 83 ec 28 <64> 48 8b 04 25 28 00 00 02 48 89 44 24 18 31 c0 ff 15 35 9a 01 00
Sep 01 16:09:15 one-desktop systemd[1]: Started Process Core Dump (PID 706115/UID 0).
Sep 01 16:09:15 one-desktop systemd-coredump[706116]: [🡕] Process 706111 (gvfs-goa-volume) of user 1000 dumped core.
                                                      
                                                      Stack trace of thread 706111:
                                                      #0  0x00007f7f72c82e1d n/a (libgoa-1.0.so.0 + 0x31e1d)
                                                      #1  0x00007f7f72c83945 goa_object_proxy_get_type (libgoa-1.0.so.0 + 0x32945)
                                                      #2  0x00007f7f72b9d6f5 n/a (libgio-2.0.so.0 + 0x1226f5)
                                                      #3  0x00007f7f72b9dcc5 n/a (libgio-2.0.so.0 + 0x122cc5)
                                                      #4  0x00007f7f72b9e045 n/a (libgio-2.0.so.0 + 0x123045)
                                                      #5  0x00007f7f72af0447 g_initable_new_valist (libgio-2.0.so.0 + 0x75447)
                                                      #6  0x00007f7f72af052e g_initable_new (libgio-2.0.so.0 + 0x7552e)
                                                      #7  0x00007f7f72c84072 goa_object_manager_client_new_for_bus_sync (libgoa-1.0.so.0 + 0x33072)
                                                      #8  0x00007f7f72c84172 n/a (libgoa-1.0.so.0 + 0x33172)
                                                      #9  0x00007f7f72af0447 g_initable_new_valist (libgio-2.0.so.0 + 0x75447)
                                                      #10 0x00007f7f72af052e g_initable_new (libgio-2.0.so.0 + 0x7552e)
                                                      #11 0x000056317ad141e7 n/a (gvfs-goa-volume-monitor + 0xa1e7)
                                                      #12 0x000056317ad178f0 n/a (gvfs-goa-volume-monitor + 0xd8f0)
                                                      #13 0x000056317ad120e6 n/a (gvfs-goa-volume-monitor + 0x80e6)
                                                      #14 0x00007f7f72627cd0 n/a (libc.so.6 + 0x27cd0)
                                                      #15 0x00007f7f72627d8a __libc_start_main (libc.so.6 + 0x27d8a)
                                                      #16 0x000056317ad121b5 n/a (gvfs-goa-volume-monitor + 0x81b5)
                                                      
                                                      Stack trace of thread 706112:
                                                      #0  0x00007f7f7270ee2d syscall (libc.so.6 + 0x10ee2d)
                                                      #1  0x00007f7f72d4dca7 g_cond_wait (libglib-2.0.so.0 + 0xafca7)
                                                      #2  0x00007f7f72cc3144 n/a (libglib-2.0.so.0 + 0x25144)
                                                      #3  0x00007f7f72d2d2fe n/a (libglib-2.0.so.0 + 0x8f2fe)
                                                      #4  0x00007f7f72d2ad75 n/a (libglib-2.0.so.0 + 0x8cd75)
                                                      #5  0x00007f7f7268c9eb n/a (libc.so.6 + 0x8c9eb)
                                                      #6  0x00007f7f72710ebc n/a (libc.so.6 + 0x110ebc)
                                                      
                                                      Stack trace of thread 706114:
                                                      #0  0x00007f7f7270365f __poll (libc.so.6 + 0x10365f)
                                                      #1  0x00007f7f72d55c2f n/a (libglib-2.0.so.0 + 0xb7c2f)
                                                      #2  0x00007f7f72cf7fef g_main_loop_run (libglib-2.0.so.0 + 0x59fef)
                                                      #3  0x00007f7f72b8b28c n/a (libgio-2.0.so.0 + 0x11028c)
                                                      #4  0x00007f7f72d2ad75 n/a (libglib-2.0.so.0 + 0x8cd75)
                                                      #5  0x00007f7f7268c9eb n/a (libc.so.6 + 0x8c9eb)
                                                      #6  0x00007f7f72710ebc n/a (libc.so.6 + 0x110ebc)
                                                      
                                                      Stack trace of thread 706113:
                                                      #0  0x00007f7f7270365f __poll (libc.so.6 + 0x10365f)
                                                      #1  0x00007f7f72d55c2f n/a (libglib-2.0.so.0 + 0xb7c2f)
                                                      #2  0x00007f7f72cf60e2 g_main_context_iteration (libglib-2.0.so.0 + 0x580e2)
                                                      #3  0x00007f7f72cf6132 n/a (libglib-2.0.so.0 + 0x58132)
                                                      #4  0x00007f7f72d2ad75 n/a (libglib-2.0.so.0 + 0x8cd75)
                                                      #5  0x00007f7f7268c9eb n/a (libc.so.6 + 0x8c9eb)
                                                      #6  0x00007f7f72710ebc n/a (libc.so.6 + 0x110ebc)
                                                      ELF object binary architecture: AMD x86-64
Sep 01 16:09:15 one-desktop systemd[62218]: gvfs-goa-volume-monitor.service: Main process exited, code=dumped, status=11/SEGV
Sep 01 16:09:15 one-desktop systemd[62218]: gvfs-goa-volume-monitor.service: Failed with result 'core-dump'.
Sep 01 16:09:15 one-desktop systemd[1]: systemd-coredump@196-706115-0.service: Deactivated successfully.
Sep 01 16:09:15 one-desktop systemd[62218]: Failed to start Virtual filesystem service - GNOME Online Accounts monitor.
Sep 01 16:09:28 one-desktop NetworkManager[762]: <info>  [1693598968.3014] manager: NetworkManager state is now CONNECTED_SITE
Sep 01 16:09:28 one-desktop dbus-daemon[744]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service' requested by ':1.3' (uid=0 pid=762 comm="/usr/bin/NetworkManager --no-daemon")
Sep 01 16:09:28 one-desktop systemd[1]: Starting Network Manager Script Dispatcher Service...
Sep 01 16:09:28 one-desktop dbus-daemon[744]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher'
Sep 01 16:09:28 one-desktop systemd[1]: Started Network Manager Script Dispatcher Service.
Sep 01 16:09:38 one-desktop systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Sep 01 16:09:40 one-desktop org.gnome.Nautilus[706083]: Error creating proxy: Error calling StartServiceByName for org.gtk.vfs.GoaVolumeMonitor: Timeout was reached (g-io-error-quark, 24)
Sep 01 16:09:41 one-desktop dbus-daemon[62238]: [session uid=1000 pid=62238] Activating service name='org.gnome.DiskUtility' requested by ':1.1484' (uid=1000 pid=706083 comm="/usr/bin/nautilus --gapplication-service")
Sep 01 16:09:41 one-desktop dbus-daemon[744]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.6638' (uid=1000 pid=706083 comm="/usr/bin/nautilus --gapplication-service")
Sep 01 16:09:41 one-desktop gnome-shell[62376]: Window manager warning: Buggy client sent a _NET_ACTIVE_WINDOW message with a timestamp of 0 for 0x3800004
Sep 01 16:09:41 one-desktop dbus-daemon[62238]: [session uid=1000 pid=62238] Successfully activated service 'org.gnome.DiskUtility'
Sep 01 16:09:41 one-desktop systemd[1]: Starting Hostname Service...

That’s a core dump of gvfs, the GNOME virtual filesystem. I can’t really help you with that — I’m on Plasma and I have exorcised gvfs from my system completely, in spite of repeated attempts from the Manjaro devs to push it back onto my system as some sort of dependency. :stuck_out_tongue:


Why? Create a partition on the spinning disk and mount it as /home in /etc/fstab?
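That is, a single fstab line along these lines (the UUID is a placeholder for your HDD partition):

# /etc/fstab — hypothetical UUID; mounts the HDD partition as /home
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /home  ext4  defaults,noatime  0  2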

That is a symbolic link, and those can cross filesystems, a.k.a. partitions.

That would work if you use the disk only for /home.

With the bind-mounts option I suggested, you can mount, and thus use, directories on the drive separately.

For example, consider a spinning disk mounted at /mnt/5tb containing the directories Documents, Pictures and Music.

You can bind-mount your ~/Documents, ~/Pictures and ~/Music to the corresponding directories on the partition mounted at /mnt/5tb.
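In /etc/fstab form, those bind mounts could look something like this (the user name is hypothetical, and the /mnt/5tb partition must itself have an fstab entry):

/mnt/5tb/Documents  /home/user/Documents  none  bind  0  0
/mnt/5tb/Pictures   /home/user/Pictures   none  bind  0  0
/mnt/5tb/Music      /home/user/Music      none  bind  0  0

Using bind,x-systemd.automount as the options instead defers each of these until first access, as with the unit files above.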

Please explain

Meaning, if you don’t want to use the disk for anything that’s not in ~/.

The reason I suggested, and use, the bind-mount way is that with bind mounts all the configuration files are kept on the SSD, so there’s no performance hit. Only the large files/directories are on the spinning discs, so it doesn’t impact startup time much, if at all.

So it won’t really work if you’re trying to save something on it that isn’t meant for the ~/ directory, because everything on it will end up in your ~/ directory.

What @Mirdarthos is talking about is separating the documents, pictures, movies et al. from the actual files in one’s ${HOME} that are part of the user-generated configuration, such as ~/.bash*, ~/.zsh, and anything under ~/.config, ~/.local, et al.

So /home and one’s own configuration files would still reside on the SSD, while the documents et al. would be stored on a separate partition on the HDD.


Just to bring everyone up to date:
The problem is gone. Nautilus windows and application file-pickers are now launching in what I would call reasonable times.
I’m not at all sure what happened, but Aragorn’s proposed solution

seems to be what worked, but I think it needed a reboot to get everything to start to play nice together again. At least, that’s the story I’m going with.
I never ended up moving /home back to the SSD, and haven’t been around long enough to try anything else or screw anything else up.
So, problem solved, even if the ending is kind of anticlimactic.


This is a nice method, although I doubt there’s any speed advantage, because:
first I had the system on an NVMe and /home on an HDD;
now I use a second NVMe as /home
and installed backintime to back up the documents et al. to an internal HDD.
The system runs as fast as before…

