Today, I was trying to remove some software that I had built from source, but for whatever reason sudo rm -rf * wasn’t working, so, without thinking, I executed the forbidden command: sudo rm -rf /*. Once I saw errors saying it couldn’t delete some files in /dev/, I immediately Ctrl-C’ed the command and did SysRq R+E+S+I+U+O.
I’ve been through issues before where an update got interrupted and the kernel wouldn’t work, so I booted from a live USB and attempted to chroot into my root drive, but it couldn’t find /etc/resolv.conf. I used Dolphin to look into the drive, and to my horror, the entire /etc/ folder is GONE.
If ext4magic is not working, then you might try testdisk/photorec.
BTW, manual chrooting may be possible, e.g.:
Manual chroot
(Unnecessary if you have used manjaro-chroot) Mount the partitions using the designated temporary mountpoint and always start with root
root # mount /dev/sdyC /mnt
Note: with a BTRFS filesystem, the subvolumes must be mounted. In that case it would be: root # mount -o subvol=@ /dev/sdyC /mnt
Then - if applicable - mount boot
root # mount /dev/sdyB /mnt/boot
Then - if applicable - mount efi
root # mount /dev/sdyA /mnt/boot/efi
Create the chroot environment and use bash as shell
root # manjaro-chroot /mnt /bin/bash
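If manjaro-chroot itself is unavailable, the same thing can be done entirely by hand with bind mounts. A rough sketch, assuming /dev/sdyC stands in for your root partition (adjust device names for your system):

```shell
# Manual chroot from a live USB, without manjaro-chroot (sketch).
# /dev/sdyC is a placeholder for your root partition -- adjust it.
sudo mount /dev/sdyC /mnt

# Expose the kernel's virtual filesystems inside the chroot
sudo mount --bind /dev  /mnt/dev
sudo mount --bind /proc /mnt/proc
sudo mount --bind /sys  /mnt/sys

# Optional: working DNS inside the chroot, for package downloads
sudo cp /etc/resolv.conf /mnt/etc/resolv.conf

# Enter the installed system, using its bash as the shell
sudo chroot /mnt /bin/bash
```

This of course requires that the target’s /bin/bash and the mount points still exist on the damaged system.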
That probably would’ve worked if /bin/ and /boot/ weren’t bulldozed by the command. My apologies for not mentioning that. Anyways, I installed testdisk, and it says the file system is damaged whenever I try to open up the deleted sectors. Should I fsck?
Generally, running an fsck on a partition you are trying to recover files from is rather dangerous.
Though, of course … if the filesystem is so damaged that you have no other choice … then …
Example like here: https://askubuntu.com/questions/841976/data-recovery-damaged-filesystem
(note the comments)
Also note that if you are serious about recovering, you may want to make an image of the current disk first, as any number of recovery operations could further degrade the data you are trying to recover.
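On a real disk that imaging step would be something like `sudo dd if=/dev/sdy of=backup.img bs=4M status=progress` (or better, GNU ddrescue for a failing drive). As a safe-to-run sketch of the same idea, demonstrated on an ordinary file rather than a block device:

```shell
# Sketch: image first, then recover from the image, never the original.
# A temporary file stands in for the damaged /dev/sdy here.
src=$(mktemp)   # stand-in for the damaged drive
img=$(mktemp)   # the image that recovery tools should operate on
head -c 1048576 /dev/urandom > "$src"

dd if="$src" of="$img" bs=64K status=none

# Verify the copy is faithful before doing anything else
cmp -s "$src" "$img" && echo "image verified"
```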
PS. Slightly OT, but these days doesn’t rm need something like --no-preserve-root to be allowed to go ham on root?
It only treats / specially, to protect against mistakes like rm -rf / tmp/junk.
If you used /* then the shell expands that to /bin /boot ... before executing rm with those arguments, and rm -rf /bin or anything else that isn’t / is not treated specially.
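This is easy to demonstrate safely in a throwaway directory standing in for / (nothing below touches the real root):

```shell
# Why `rm -rf /*` bypasses --preserve-root: the SHELL expands the
# glob, so rm only ever sees ordinary paths, never the bare "/".
fakeroot=$(mktemp -d)
mkdir -p "$fakeroot/bin" "$fakeroot/etc" "$fakeroot/home"

# What rm actually receives as its arguments:
echo "$fakeroot"/*

# Each expanded argument is an ordinary directory, so rm happily
# removes them all; only the bare "/" itself is treated specially.
rm -rf "$fakeroot"/*
ls -A "$fakeroot"   # prints nothing: contents gone, directory remains
```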
You have to be more precise about what exactly is missing. Only /etc? /boot? If everything important is still there, i.e. /home, then I don’t see any point in doing any kind of recovery.
First, booted into the ISO, you can try reinstalling every package with: [Later note: actually you would need to do some cleaning first so that only the latest version of each package is in the cache, eh]
pacman -r /mnt -U /mnt/var/cache/pacman/pkg/* # where you mount root to /mnt
I’m presuming here that you have all packages cached; otherwise you can do it by some other means. Also, it may be a good idea to start with the filesystem package.
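For the cleanup mentioned in the note above, `paccache -rk1` (from pacman-contrib) is the usual tool for trimming the cache down to the newest version of each package. Purely as an illustration of the idea, with made-up filenames, the selection can be sketched in plain shell:

```shell
# Sketch: keep only the newest cached file per package name.
# (On a real system, prefer `paccache -rk1` from pacman-contrib.)
latest_pkgs() {
  # stdin: package filenames, name-version-release-arch.pkg.tar.zst
  sort -V | awk '{
    name = $0
    # strip the trailing -version-release-arch.ext fields
    sub(/-[^-]*-[^-]*-[^-]*$/, "", name)
    latest[name] = $0      # newer versions sort later and overwrite
  }
  END { for (n in latest) print latest[n] }'
}

# Hypothetical cache contents:
printf '%s\n' \
  'filesystem-2023.02.08-1-x86_64.pkg.tar.zst' \
  'filesystem-2024.04.07-1-x86_64.pkg.tar.zst' \
  'bash-5.2.026-2-x86_64.pkg.tar.zst' | latest_pkgs | sort
```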
After that you can try chrooting and running mkinitcpio and update-grub, or just reinstalling the kernels.
Either way, in the end you can just reinstall everything and keep your home.
Boot a live media - preferably the same edition - and mount your root partition on the temporary /mnt mountpoint. sdy is an example - you will need to adjust it for your system:
sudo mount /dev/sdy2 /mnt
If your system is EFI then mount the efi partition
I’m not quite sure I understand. `/bin/`, `/boot/`, `/dev/`, `/etc/`, and part of `/home/` are completely bulldozed, so I cannot boot. Even if I did, bash is obliterated, so mount wouldn’t work.
I wish I was knowledgeable enough about Linux to select that option back then. Maybe next time I should know more about Linux before switching from using Raspbian occasionally to daily driving a “distro for intermediate users”. Oh well, actions have consequences.
If we can recover the missing files from /home/, that would be great. My personal preference is not to reinstall, but if it comes to that, I don’t have a problem with it.
Glanced over the link; looks fine to me. Will do that to the ISO backup I made of the drive. (Just mentioning @cscs because I haven’t previously stated that I made it.)
That got a chuckle out of me. I think I read somewhere about somebody in the programming community talking about the best and worst things about CLIs.
The best thing is they do exactly what you say, and the worst part is they do exactly what you say.
Could you (or anyone else) elaborate? Not quite sure what you mean, because I don’t think I need to mount /boot/ since it’s an MBR system. Furthermore, as previously stated, /boot/ is demolished, hence I cannot mount it.
Thanks for helping, everyone. Great community here in the forums, I must say.
The same principle applies even though you’re not using a UEFI-based system. Translated to an MBR system: rather than mounting the EFI partition as @linux-aarhus suggested, the same methodology could be used to mount the partition on which GRUB resides.
There was a time when /boot might have been symlinked to a separate boot partition, but this hasn’t been a default configuration for many years. If /boot is still intact, using the example used previously, the command to mount the partition would be:
sudo mount /dev/sdy1 /mnt/boot
(instead of the earlier command for mounting an EFI partition).
It is, as a matter of fact. I’ve stated before in a previous topic that
The wording is a bit weird, but essentially, when I tried to switch to UEFI using the Manjaro wiki guide (UEFI - Install Guide - Manjaro), only Windows Boot Manager (I had Windows 10 installed previously) and network boot would show up in the BIOS’s UEFI boot sequence selector.