Sep 04 10:42:28 rohan systemd-remount-fs[245]: /usr/bin/mount for / exited with exit status 1

Folks, while trying to move my Manjaro install from an SSD to an NVMe I've broken something in the partition references. Both systems do boot, but every time I boot the old SSD disk I'm greeted with the following error:
Sep 04 10:42:28 rohan systemd-remount-fs[245]: /usr/bin/mount for / exited with exit status 1
Sep 04 10:42:28 rohan systemd[1]: Failed to start Remount Root and Kernel File Systems.

I understand this is some broken reference in some file. I have some alternatives, such as running GRUB install again, but I fear that would worsen the situation. I have a badly planned setup here.

sda           8:0    0 447,1G  0 disk 
└─sda1        8:1    0 447,1G  0 part 
sdb           8:16   0 447,1G  0 disk 
├─sdb1        8:17   0 342,1G  0 part 
└─sdb2        8:18   0 105,1G  0 part /mnt/dados_vcdoc
sdc           8:32   0 465,8G  0 disk 
├─sdc1        8:33   0   499M  0 part 
├─sdc2        8:34   0   100M  0 part /boot/efi
├─sdc3        8:35   0    16M  0 part 
├─sdc4        8:36   0 369,5G  0 part 
├─sdc5        8:37   0   618M  0 part 
├─sdc6        8:38   0   477M  0 part 
└─sdc7        8:39   0  94,6G  0 part 
sdd           8:48   0 447,1G  0 disk 
└─sdd1        8:49   0 447,1G  0 part /
nvme0n1     259:0    0 931,5G  0 disk 
├─nvme0n1p1 259:1    0   300M  0 part 
└─nvme0n1p2 259:2    0 931,2G  0 part

GRUB was left on sdc along with Windows, from the times I dual booted on a single machine. My first attempt was to use Clonezilla to clone to the NVMe, but that did not work and the NVMe was not even bootable. Please give me some insight on how to properly handle this situation.

That depends on how you moved it.

If you used dd then you will have copied the UUIDs as well, and that will give you headaches…

You cannot have duplicated UUIDs - so check for duplicates.
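A quick way to check for duplicates (a sketch - no particular device names assumed):

lsblk -o NAME,SIZE,FSTYPE,UUID,PARTUUID
sudo blkid

Any UUID or PARTUUID that appears twice in that output is a duplicate.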

If you have duplicated UUIDs - you can use tune2fs to assign a new UUID to a partition - that is, if your filesystem is ext4.
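For example (a sketch - replace /dev/sdXn with the cloned partition, and make sure it is unmounted first):

sudo tune2fs -U random /dev/sdXn

The -U random option writes a new random filesystem UUID, so blkid will report a unique value afterwards.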

Another option is to use sgdisk

       -G, --randomize-guids
              Randomize the disk's  GUID  and  all  partitions'  unique
              GUIDs  (but  not  their  partition type code GUIDs). This
              function may be used after cloning a  disk  in  order  to
              render all GUIDs once again unique.

I had used Clonezilla; at this point I just reinstalled the whole NVMe and imported the list of packages from my Manjaro install. I still have some stuff to do, like work certificates and so on. But what I would like at the moment is to get rid of this error on the current Manjaro install on the SSD.

Clonezilla is an advanced dd - so yes, the partition UUIDs are your issue - if the disk is still present in the system.

Target the old system disk:

sudo sgdisk -G /dev/sdx
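Going by the lsblk output earlier in the thread, the root filesystem is on sdd1, so on this system that should be (double-check the device name with lsblk before running it):

sudo sgdisk -G /dev/sdd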

Is it safe?

Yes - check the manpage quoted above - the sentence to observe:

This function may be used after cloning a disk in order to render all GUIDs once again unique.

Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot or after you
run partprobe(8) or kpartx(8)
The operation has completed successfully.

Seems all I need to do is reboot now, right?

Or you could just run:

sudo partprobe

vfbsilva@rohan ~ $ sudo partprobe -d /dev/sdd
and
sudo partprobe -d
Both gave me no output. Is that normal?

-d does nothing - it is a --dry-run - so yes, that is normal - you still need to run the command without it - if you want output, use -s.
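For example (a sketch, using the old system disk from the listing above):

sudo partprobe -s /dev/sdd

With -s (--summary) partprobe prints a short summary of what it found - the partition table type and the partition numbers.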

I know, but I would expect some output. I just ran it without the -d.
Now, should I reboot?

No - not necessary.


Firstly, thanks a lot for the walk-through. Now a related question: what would be the best way to do this clone install properly? The reason I want to do it is that I have some A1 certificates and VPN configuration that I use at work which are annoying to replicate.

Clonezilla is fine - just remember that UUID or GUID means Globally/Universally Unique IDentifier.

So when writing a cloned image to a disk in the same system, you duplicate the UUIDs, and this renders them unusable as they are not unique anymore.

Therefore either remove the source disk or use sgdisk to reset the UUIDs.

So theoretically speaking, I could just clone again and use sgdisk one more time on this disk?

If you want the target disk to be immediately bootable - a move of the OS - you should power off - detach the source - boot on the target - then attach the source and regenerate the UUIDs on the source disk.

This precaution is to preserve the UUIDs in /etc/default/grub and in /etc/fstab - which you would otherwise have to change to be able to boot on the new target…
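A rough sketch of the checks after booting on the target - assuming the re-attached source shows up as /dev/sdd; verify with lsblk first, as the name can differ:

lsblk -o NAME,UUID,PARTUUID,MOUNTPOINT   # duplicates stand out here
grep -i uuid /etc/fstab                  # these UUIDs must belong to the booted target disk
grep -i uuid /etc/default/grub           # same check for any UUID references handed to grub
sudo sgdisk -G /dev/sdd                  # new disk and partition GUIDs on the source only
sudo partprobe /dev/sdd                  # make the kernel re-read the source partition table

If the filesystem UUIDs are duplicated too, the tune2fs -U random step mentioned earlier applies to the source partitions as well.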


Sorry about my ignorance, but assuming it is a SATA disk, what is the proper way to attach the source if the system is powered on?

The correct method would be a USB enclosure or another USB-powered connection.

Think of it as a disk bay - you can plug a disk in - it powers up and is connected - you eject it manually and the disk disappears from the system.

The following is the same thing.

Manjaro is using udev - therefore you can attach a SATA disk to a powered system just by connecting the data cable - I have done it on several occasions - mostly because I have been lazy when testing disks for defects or when recycling a disk.

I assume no responsibility if you decide to do the same.


I got the idea. Super thanks - I had been trying to fix that for 3 days, looking for help on IRC, and no one came close to explaining the issue to me so quickly and providing a working solution.


@linux-aarhus one more question, as the enclosure just arrived. I still have a new GRUB on the target NVMe. Would it be better to erase the whole NVMe, clone just the system partition, and then install GRUB later?