Failed to start (after system and kernel updates)

On Thursday I upgraded my kernel from 5.10 to 6.4, and also installed 6.3 and 6.1. On Friday I got a FAILED error on boot, which I did not note down, but after a reboot the system started as normal.

Today I am getting the following:
[FAILED] Failed to start File System Check on /dev/disk/by-uuid/cbbd75cd-e12b-47f0-a7f1-9e246faf7f54.
[DEPEND] Dependency failed for /run/media/disk1.
[DEPEND] Dependency failed for Local File Systems.
You are in emergency mode. After logging in, type “journalctl -xb” to view system logs, “systemctl reboot” to reboot, “systemctl default” or “exit” to boot into default mode.
Give root password for maintenance
(or press Control-D to continue):

I type my root password, then run journalctl -xb and start scrolling.
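
A quicker way to get at the failures, instead of scrolling, is to filter the boot log by priority, e.g.:

journalctl -b -p err

which shows only messages of error severity and above for the current boot.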

The first error I see says that it Failed to find module ‘v4l2loopback-dc’

Then after that I get a series of green blocks which for the most part contain the word “successful” or some variation of it, so I skip them.

Then I reach the next batch of errors reading:
kernel: nvidia-gpu 0000:04:00.3: i2c timeout error e00000000
kernel: ucsi_ccg 0-0008: i2c_transfer failed -110
kernel: ucsi_ccg 0-0008: ucsi_ccg_init failed - -110
kernel: ucsi_ccg: probe of 0-0008 failed with error -110

Then a few more successes and then it’s all errors. Reading line by line, I can see they are all Medium Errors, but there is a prompt to run fsck MANUALLY, so I exited journalctl and ran it. The output of that was:

fsck from util-linux 2.38.1
e2fsck 1.47.0 (5-Feb-2023)
/dev/nvme0n1p1 is mounted.
e2fsck: Cannot continue, aborting.
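
Side note: e2fsck refuses to touch a mounted filesystem, which is why it aborts here. Checking the HDD behind disk1 while it is not mounted would look something like the following; the UUID here is the one quoted later in this thread, so verify it against lsblk first:

sudo umount /run/media/disk1
sudo fsck -f /dev/disk/by-uuid/cbbd75cd-e12b-47f0-a7f1-9e246faf7f54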

I have one NVMe installed directly on the motherboard, plus two SATA HDDs. I shut down my system and opened the box. I unplugged the first disk and booted: Manjaro was stuck on a black screen, with no output whatsoever, no errors, no successes, nothing. I powered off, plugged that disk back in and disconnected the other one. Booted again and got the same errors.

I plugged everything back in as it was in the beginning and booted again. I got the same error and then the prompt to log in as root, which I did again. Then I tried systemctl default, which did nothing; it just returned me to the same prompt, offering journalctl -xb, the systemctl options, or exit.

exit prints
Reloading system manager configuration
Starting default.target

Then it returns to the same prompt with the four options.

What else should I try? The most important thing is to be able to log into the computer, with or without the additional HDD. I would also like to keep the extra HDD, but I wouldn’t care much if I couldn’t.

Edit: After unplugging both HDDs, rebooting, and giving the black screen plenty of time, I am getting the following output:
[ TIME ] Timed out waiting for device /dev/disk/by-uuid/5C9EF9589EF92B62.
[DEPEND] Dependency failed for /run/media/disk2.
[DEPEND] Dependency failed for Local File Systems.
[ TIME ] Timed out waiting for device /dev/disk/by-uuid/cbbd75cd-e12b-47f0-a7f1-9e246faf7f54.
[DEPEND] Dependency failed for /run/media/disk1.
[DEPEND] Dependency failed for File System Check on /dev/disk/by-uuid/cbbd75cd-e12b-47f0-a7f1-9e246faf7f54.
You are in emergency mode. etc. etc.

Okay, because I needed to boot into my system quickly, I edited /etc/fstab and commented out the entry for /run/media/disk1. After saving and rebooting with both HDDs physically plugged in, and after selecting the latest 6.4 kernel, my computer booted as normal, without any errors whatsoever.
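
For reference, commenting out an fstab entry just means prefixing the line with #, so the entry quoted further down becomes:

# UUID={UUID} /run/media/disk1 ext4 defaults,noatime 0 2

Everything that parses fstab, systemd included, then ignores it at boot.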

Of course disk1 is not mounted at this point, and I have to manually mount it.

When I try to mount the disk I get an error that the mount point does not exist. :face_with_monocle:

After I create the mount point and mount the disk, it all works as before, but with every reboot I would have to repeat the mounting process. So I needed to find out what would happen if I rebooted again: would the mount point remain intact, or would it be erased again? Without further ado, I rebooted the system, and the mount point /run/media/disk1 was not there.
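
For reference, the manual routine after each reboot amounts to something like this, substituting the real UUID for the placeholder:

sudo mkdir -p /run/media/disk1
sudo mount UUID={UUID} /run/media/disk1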

I created the mount point again and edited /etc/fstab once more, uncommenting the problematic line. The unwanted behavior returned.

The current problematic line in fstab for this disk is:
UUID={UUID} /run/media/disk1 ext4 defaults,noatime 0 2

Do I need to edit that in some way? I had that setup for more than a year with the 5.10 kernel and never had any problems with it. This only happened after I updated the system and the kernel, and it now also happens when I try to boot with kernel 5.10.
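
One thing worth noting, though it is not raised in this thread: fstab entries can be marked as non-essential for boot with nofail, optionally combined with a shorter device timeout, e.g.:

UUID={UUID} /run/media/disk1 ext4 defaults,noatime,nofail,x-systemd.device-timeout=10s 0 2

With nofail, a missing or slow disk no longer drops the system into emergency mode.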

Does the UUID match what is shown in, ex:

lsblk -o PATH,LABEL,FSTYPE,UUID

Yes, although I checked it by

ls -l /dev/disk/by-uuid

My understanding is that /run/ is a tmpfs that gets cleared at every reboot, so I would not expect your mount point to persist between reboots. I am not sure why that worked for you before, but I wonder if there is a more suitable mount point.
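
You can confirm that with findmnt, which should report /run as tmpfs:

findmnt -o TARGET,FSTYPE /run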

On the contrary. If there were a problem with /run, it should affect all mount points under it. I also have a different disk with another UUID which gets mounted successfully on /run/media/disk2 at every reboot. That was the case before the updates and hasn’t changed since; it mounts without any problems whatsoever. I am only seeing this with that UUID and /run/media/disk1.

Same problem here. After an update last night (Manjaro XFCE, kernel 6.1.38-1) I’m getting the same error (Dependency failed for Local File Systems). Disabling the two fstab entries worked. Since it happened after an update: has systemd changed the way it parses /etc/fstab? A manual mount works.

Update

My fstab entry had an “owner” option, which obviously is not valid anymore. Removing it solved the problem.
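
In other words, a hypothetical before/after (the mount point and remaining options here are made up): changing

UUID=... /media/data ext4 defaults,owner,noatime 0 2

to

UUID=... /media/data ext4 defaults,noatime 0 2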


My fstab file does not have an “owner” option. My disk1 still does not mount automatically on boot, and I have to mount it manually every time, which is a bummer.
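
One way to see why the automatic mount keeps failing is to ask systemd about the mount unit it generates from fstab; the unit name is derived from the mount path, so for /run/media/disk1 it should be:

systemctl status run-media-disk1.mount
journalctl -b -u run-media-disk1.mount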

/run was not designed for permanent mounts; whilst it usually works, you may encounter issues.

Have you tried putting your mountpoints on a normal filesystem?

sudo mkdir -p /media/disk1

UUID=cbbd75cd-e12b-47f0-a7f1-9e246faf7f54 /media/disk1 ext4 noatime 0 2
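
After editing fstab you can test the new entry without a reboot, e.g.:

sudo systemctl daemon-reload
sudo mount -a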

There’s no reason to censor the UUIDs, they’re version 4.